The competition to create more powerful and more effective AI services is growing fierce, and many experts have vocalised fears that tools such as ChatGPT may cause more problems than they solve if left to develop without much-needed regulation.
The risks have been spelt out, but in their bid to keep in check a technology that might turn standard business practice on its head, lawmakers are turning to existing regulatory frameworks that may not be fit for purpose in the age of rapid digital transformation.
Politicians in Europe are currently drafting new rules to govern AI and ML services. Their work could follow the path of the GDPR and set a global standard for tackling the privacy and safety risks that have evolved alongside the rapid growth of generative AI. However, enforcement of any such legislation won’t come anytime soon.
Speaking recently, European data governance expert Massimiliano Cimnaghi said:
“In absence of regulations, the only thing governments can do is to apply existing rules. If it’s about protecting personal data, they apply data protection laws; if it’s a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”
Last month, privacy regulators in Europe formed a task force to examine key potential privacy and security risks surrounding ChatGPT, following Italy’s decision to ban the technology. Explaining the reasons for the ban, the Italian Data Protection Authority (Garante per la protezione dei dati personali) said that OpenAI – the company behind ChatGPT – had breached the GDPR.
In Italy, ChatGPT came back into public use after OpenAI brought in new measures to verify the age of users, and introduced a function allowing users in Europe to opt out of having their data harvested to support the AI model’s development.
Addressing the risk
The generative AI tools currently available are far from perfect: in many users’ experience, the platforms often make mistakes or spread misinformation. This potential for error presents a major risk to individual users, as well as to organisations across the private, public and third sectors.
As things stand, the technology’s reliability is undermined by the potential for bias, as evidenced by tech giants’ decisions to halt the use of AI in ethical grey areas such as financial products.
Get to the edge of the AI debate
AI and ML present many opportunities for innovation and growth, but they also bring potential risks and challenges that can impact the safety, security, and privacy of individuals, organisations and societies.
Business leaders can access the very latest conversation and curated content on these issues in the #RISK AI & ML zone, part of the Privacy and Data Protection Theatre at #RISK London.
Taking place October 18 and 19, #RISK London brings high-profile subject-matter experts together for a series of keynotes, engaging panel debates and presentations dedicated to breaking down the challenges and opportunities that businesses face in times of unprecedented change.
“#RISK is such an important event as it looks at the broad perspective of risk. Risks are now more interconnected and the risk environment is bigger than ever before.”
Michael Rasmussen, GRC Analyst & Pundit, GRC 20/20 Research