We are delighted to announce that AI Data Protection and Privacy expert Isabel Barberá will speak at #RISK Amsterdam, opening this month.
Taking place at RAI Amsterdam on September 27 and 28, #RISK Amsterdam explores the trends and best practices organisations are employing to navigate today’s rapidly evolving risk landscape.
Isabel Barberá is the co-founder of Rhite, a legal & technical consultancy firm based in The Netherlands that specialises in Responsible AI and Privacy Engineering. She is a long-time advocate of privacy and security by design, and has always been passionate about the protection of human rights.
Related Session:
- The EU’s Game-Changing AI Act: What it means and where it’ll take us - Thursday 28 September, 11:00 – 12:00 (CEST) - Privacy & Security Hub
BOOK YOUR PLACE AT #RISK Amsterdam
We spoke with Isabel about her professional journey, and to get an introduction to the themes under the spotlight at her #RISK Amsterdam session.
Could you outline your career pathway so far?
It was a long time ago now, but I studied computational linguistics and automatic translation at university. It’s quite amusing, when I look at the developments in the field, and of course at all the innovation with Large Language Models.
After my studies, I got involved in software development and architecture design. I’ve always been an autodidact, creative and very curious about how things work, and I am also a passionate defender of human rights. That is probably why over the years I got attracted to the field of Cybersecurity, Privacy and Data Protection. A few years ago, I also decided to pursue a Master’s degree in law and digital technologies.
During my career, I have been involved in several high-impact AI projects. One of the things that worried me the most was the wide and varied number of things that could go wrong, and how this could have an adverse impact on individuals and society. This was when I started to research AI privacy threat modelling, which resulted, after three years of research, in my 2022 publication of PLOT4ai, an open-source AI privacy threat identification tool and methodology that is now internationally recognised.
I worked for a long time for IBM until I decided to start my own business in 2016. In the summer of 2022, I co-founded Rhite, a legal and technical consultancy firm specialising in Responsible & Trustworthy AI and in the field of Privacy Engineering.
Besides my work as a Privacy Engineer and AI advisor for different clients, I am a member of the EU Agency for Cybersecurity (ENISA) Ad Hoc Working Group on Data Protection Engineering, and a member of the European Data Protection Board (EDPB) pool of experts, where I work on an ad hoc basis as an advisor.
What are the key areas of AI technology that the EU’s AI Act seeks to address?
I would say that the two key areas are accountability and the protection of fundamental rights.
The AI Act, as product regulation, requires AI products to comply with certain rules in order to protect individuals’ fundamental rights and society. These rules establish accountability requirements, such as data governance and risk management, and compliance with the seven ethical principles for trustworthy AI.
The regulation also implements these requirements based on a risk classification of AI systems. I would say the AI Act tries to establish a baseline for the creation of trustworthy AI systems through a risk-based approach, setting specific requirements and raising awareness about the impact on fundamental human rights.
What are the primary challenges to the Act’s implementation?
One of the main challenges is the current state of the art of AI. We do not yet have approved standards for developing and auditing this technology; there is not enough expertise to verify compliance and safety, and some fields of AI are in constant development.
Industries in general, but especially start-ups, still have to increase their maturity level in matters such as data governance, risk management, privacy, security, and safety. Another challenge is regulatory uncertainty, especially when it comes to classifying systems according to their risk level and implementing some of the specific requirements.
Don’t miss Isabel Barberá exploring these issues in depth in the #RISK Amsterdam panel debate: The EU’s Game-Changing AI Act: What it means and where it’ll take us.
Recently passed by the European Parliament, the AI Act is designed to shape the future of artificial intelligence. It addresses critical aspects of AI technology, including ethics, safety, transparency, and accountability.
The session aims to shed light on the Act’s implications and the profound impact it will have on AI development and adoption within the EU and beyond.
Also on the panel
- Alex Gheorghe, Data Protection & Privacy Consultant and Cybersecurity Program Implementer, Inperspective Business
- Iris Kampers, Associate Director, ESG and Sustainability Lead, Savills
- Marcus Westberg, Postdoctoral Researcher, Delft University of Technology
Details
- Session: Day 2, The EU’s Game-Changing AI Act: What it means and where it’ll take us
- Theatre: Privacy & Security Hub
- Time: 11:00 – 12:00 (CEST)
- Date: Thursday 28 September 2023
#RISK Amsterdam is also available on-demand for global viewing.
Book Your Place at #RISK Amsterdam