European Union lawmaker Brando Benifei, a key contributor to the EU’s artificial intelligence (AI) regulations, has said that he hopes the EU’s AI Act will set the standard for AI legislation worldwide.
The EU has taken a lead in setting safeguards around the new technology, and is now pushing to establish its framework as the regulatory benchmark, much as it did with the GDPR. The EU’s draft rules are expected to be approved before the end of the year.
During the recent Reuters NEXT conference in New York, industry leaders came together to highlight the need to establish AI parameters to protect societies and democracies around the world.
In the US, discussions are ongoing in Congress regarding potential legislation to mitigate AI-related harms, including impacts on elections. The talks follow President Joe Biden’s signing of an executive order last month requiring developers of potentially risky AI systems to share safety test results with the US government before going live with new offerings.
Liz O’Sullivan, a member of the National AI Advisory Committee in the US, has emphasised the inherent dangers in AI, particularly the ways in which the technology draws on learned situations and existing human biases. She spoke of proposed AI regulations that could require external audits, risk impact assessments, and the ability to switch AI systems off.
The US and China were two of 28 countries to join the UK in getting behind Rishi Sunak’s “Bletchley Declaration” recently – a pact that encourages formal collaboration between nations to build a safer AI future. In October, the G7 agreed on a voluntary code of conduct for AI development, setting a precedent for major countries in governing AI against a backdrop of anxieties over privacy and security risk.
Benifei has acknowledged alignment with these global commitments, but has also stressed the importance of enforcing regulations through binding law rather than voluntary pledges. Such an approach would be needed, he argues, to address higher-level challenges in AI development, such as the risk of AI being weaponised.
Know the risks
As global regimes work together to ensure safe development of AI, it’s never been more important for business communities to understand evolving regulatory landscapes, and how operations stand to be impacted.
The issues take centre stage at PrivSec Global this month, where experts will debate AI and discuss what companies need to do to embrace emerging technologies safely and effectively.
AI is among focus topics, with exclusive sessions including:
→ U.S. Data Privacy laws launch a new era in 2023
- Day 1: Wednesday 29th November 2023
- 13:30 - 14:15 GMT
There is no doubt that AI is the new revolution. It is developing rapidly, both technologically and legally, and many organisations face the big question: How do you remain compliant while gaining the commercial benefit of using AI?
This interactive session will provide a practical roadmap for avoiding AI chaos, overcoming challenges and pitfalls, and building a responsible AI strategy in the workplace.
→ Ethical AI in principle: Innovation overtaking human rights?
- Day 2: Thursday 30th November 2023
- 12:30 - 13:15 GMT
The intersection of artificial intelligence and privacy continues to attract attention, with a focus on ensuring that AI systems respect individual privacy rights and avoid discriminatory practices.
As technology races forward and regulatory efforts strive to catch up, the question emerges: Are we shaping a sustainable ethical future amid rapid advancement?
Discover more at PrivSec Global
As regulation gets stricter – and data and tech become more crucial – it’s increasingly clear that the skills required in each of these areas are not only connected, but inseparable.
Exclusively at PrivSec Global on 29 & 30 November 2023, industry leaders, academics and subject-matter experts unite to explore these skills and the central role they play within privacy, security and GRC.