We are very happy to announce that data protection expert Lesley Holmes will speak at #RISK London this October.
Taking place on October 18 and 19 at ExCeL London, #RISK London addresses the issues impacting organisational risk today, from Governance, Risk and Compliance (GRC), to Environmental, Social and Governance (ESG), organisational culture, and much more.
The event builds on the success of #RISK 2022, allowing organisations to examine the cumulative nature of risk, unite GRC specialities and share views with subject-matter experts.
Lesley Holmes is DPO at MHR Global and will be attending #RISK London to discuss Artificial Intelligence (AI) in the workplace.
Related Session:
- AI in the workplace – the DPO framework and roadmap to avoid chaos - Thursday 19th October, 15:00 - 16:00 - Data Protection & Privacy Theatre
BOOK YOUR PLACE AT #RISK LONDON
We spoke with Lesley to hear more about her professional journey so far and for insight into the themes under the spotlight in her #RISK London session.
Could you outline your career pathway to date?
I started my career in the late 70s and early 80s, working in local authority revenues and benefits. It was an interesting time because the Data Protection Act 1984 had just come in, which meant that we had to be extra careful with personal data. In my role, I was dealing with personal data all the time, whether it was someone’s income details or just their name and address.
I continued to work in revenues and benefits for quite a long time, but eventually, I moved into management consultancy, still mainly in the public sector. I was still dealing with personal data, but I also started to look at document process automation – implementing document management systems.
As I progressed, I moved into information management and governance, learning how to manage data effectively. And then I found my way into data protection. I became particularly interested in privacy and how it relates to data protection.
For the last ten years, I have been working purely in data protection, and I must say that everything I’ve done in the past has helped me to support what I do now. Data protection is not just about privacy; it’s much more than that. I find it fascinating, and I am grateful for the opportunity to work in this field. It has been a long and varied career journey.
In your sector, what are organisations’ chief concerns as they bid to balance increased adoption of AI with compliant data handling practices?
As an HR and payroll company, we analyse AI technology not only from a technological standpoint but also from an HR perspective. It is important for us to consider whether the HR sector actually wants AI solutions; there is no point in developing something HR professionals do not want.
One area of concern relates to using AI to generate resumes, CVs, and cover letters. It poses a risk because if everyone relies on AI for these tasks, it diminishes the ability to stand out. Moreover, AI has the tendency to “hallucinate” – to fabricate – information, making it somewhat unreliable.
Additionally, when AI is employed to generate job descriptions or similar content, the lack of innovation and creativity becomes apparent due to the nature of generative AI and large language models. This can lead to uninteresting and monotonous job postings, ultimately deterring suitable candidates and failing to convey the organisation’s unique impression.
Another issue is the presence of bias in AI training. When training an AI model, you have to ensure that biases are removed. This raises questions about the appropriate choice of training material and careful consideration of what is being used for training.
If the AI is trained on self-created dummy data, you need to acknowledge and address the biases that may be present in that data, since the creator’s biases can unintentionally influence the training process. Alternatively, organisations may use anonymised data from extensive datasets, whether obtained from open data sources or customer datasets.
However, when incorporating live data, preferably anonymised, it becomes necessary to handle biases effectively. Accessible data may be limited to specific types of organisations, such as those in the public or private sector, each with its own unique organisational culture.
Public sector data is often subject to government regulations and requirements, leading to significant datasets that focus on reporting aspects like gender equality, ethnicity, inclusion, diversity, and more. However, these specific reporting requirements may not apply uniformly to the private sector, especially among smaller organisations, which may possess different datasets.
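The kind of representation check described above can be run before any training takes place. The sketch below is illustrative only: the record fields ("gender") and the tolerance are hypothetical assumptions, not a prescribed methodology, and a real bias audit would cover far more than headline group shares.

```python
from collections import Counter

def representation_report(records, attribute, tolerance=0.2):
    """Report each group's share of the data for a protected attribute
    and flag groups that fall well below an even split.

    `records` is a list of dicts; `attribute` names a field such as
    "gender" (a hypothetical column name for this sketch).
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # share under a perfectly even split
    report = {}
    for group, n in counts.items():
        share = n / total
        # Flag groups under-represented by more than `tolerance`
        # relative to an even split across the observed groups.
        report[group] = {
            "share": share,
            "under_represented": share < expected * (1 - tolerance),
        }
    return report

# Dummy HR-style data, purely for illustration.
training_rows = (
    [{"gender": "female"}] * 20 +
    [{"gender": "male"}] * 70 +
    [{"gender": "other"}] * 10
)
print(representation_report(training_rows, "gender"))
```

A flagged group does not prove the model will be biased, but it tells you the training material cannot support balanced learning for that group without correction.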
A third concern: when using AI for decision-making processes, such as automated decision-making or profiling, it is crucial to possess the necessary skills to understand how the AI arrived at a particular decision. It is just as important to be able to explain the decision in language the intended audience can understand. You can’t just say, “computer says no” or “computer says yes”.
Data protection legislation imposes requirements that decisions made by AI should be explainable and subject to potential reconsideration. Therefore, understanding how the AI system reached its decision is vital. It is important to determine whether the decision was accurate or erroneous. In cases of incorrect decisions, it becomes necessary to retrain the AI system. Fully grasping this process can be challenging.
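One way to keep decisions explainable is to prefer transparent models whose reasoning can be read back factor by factor. The sketch below assumes a toy additive scoring model with made-up features and weights; it is not a recommendation for any real hiring or performance process, only an illustration of what "explainable" can look like in code.

```python
# Illustrative weights for a transparent additive scoring model.
# Feature names and values are hypothetical.
WEIGHTS = {"years_experience": 2.0, "relevant_skills": 3.0, "missed_deadlines": -4.0}
THRESHOLD = 10.0

def decide_and_explain(candidate):
    """Return a decision plus a per-feature breakdown of why it was made."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "yes" if score >= THRESHOLD else "no"
    # The explanation names each factor, rather than "computer says no".
    reasons = [f"{f} contributed {c:+.1f}" for f, c in contributions.items()]
    return decision, score, reasons

decision, score, reasons = decide_and_explain(
    {"years_experience": 4, "relevant_skills": 2, "missed_deadlines": 1})
print(decision, score)
for r in reasons:
    print(" -", r)
```

Because every contribution is inspectable, an erroneous decision can be traced to a specific factor and the weights (or training data, in a learned model) revisited, which is exactly the reconsideration that data protection legislation anticipates.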
An additional aspect to consider is how AI models impact performance management. For example, after conducting a performance interview, there might be an intention to analyse all performance interviews collectively to evaluate the overall performance within the organisation. The data collected or generated serves a purpose. It could provide insights on various aspects, such as identifying skill gaps, determining training needs, increasing productivity, or assessing areas where AI can be used to enhance performance.
Using AI to automate routine tasks can optimise productivity, potentially requiring fewer personnel to achieve the same output. This situation presents a parallel to what transpired in the automotive industry during the 1980s when robots were introduced to production lines. Although it reduces the need for mundane work, it also reduces the workforce required.
Another important aspect to consider is leveraging AI technology to enhance quality. For example, using AI for routine testing can be highly beneficial. You can identify the specific X, Y, and Z tests that need to be conducted consistently. When these tests need to be performed at scale, AI can be employed to handle them efficiently.
By offloading routine testing to AI, your skilled human testers can focus their expertise on tasks that require more unique and unconventional approaches. These tasks demand thoughtful analysis to generate the necessary test results for ensuring software quality. This way, AI can be used as a complementary tool in the testing process.
But there’s high risk involved. It is not appropriate to rely on AI for tasks that should not be delegated to it, such as conducting performance management for individuals. While generating a script for a performance management review may be a suitable application, it is crucial to evaluate whether it aligns with your organisation’s culture, goals, and ambitions, and to ensure it avoids any potential discrimination.
What compliance benefits will an “AI revolution” bring to data protection and security professionals?
AI technology is extensively used in the security industry, particularly for monitoring activities within firewalls and similar systems. Its primary function is to identify patterns that are considered normal and raise flags for any abnormal behaviour.
While AI can detect anomalies amidst a large volume of data, it can lack the ability to determine whether those anomalies are reasonable, risky, or otherwise. Human intervention remains necessary to assess and interpret the flagged anomalies effectively. Nevertheless, AI offers the advantage of reducing the need to filter through vast amounts of data, preventing individuals from becoming desensitised to deviations.
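The "normal pattern plus flagged deviation" approach described above can be sketched very simply, for instance as a z-score test against a learned baseline. This is a minimal illustration, not a production monitoring design; the traffic figures are invented, and the flagged items would still go to a human analyst for judgement.

```python
import statistics

def flag_anomalies(baseline, window, z_threshold=3.0):
    """Flag values in `window` that deviate from the `baseline` of
    normal traffic by more than `z_threshold` standard deviations.

    The function only flags deviations; deciding whether a flagged
    value is actually risky remains a human task.
    """
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [(i, v) for i, v in enumerate(window)
            if abs(v - mean) > z_threshold * stdev]

# Illustrative request counts per minute: a stable baseline, then a spike.
normal = [100, 104, 98, 101, 99, 102, 97, 103]
incoming = [101, 99, 500, 100]
print(flag_anomalies(normal, incoming))  # → [(2, 500)]
```

The analyst sees only the single flagged spike instead of every data point, which is the desensitisation problem this pattern is meant to avoid.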
The key benefits of AI in security lie in its speed and consistency. It can rapidly assess and bring attention to potential security issues, providing consistent actions based on its programmed instructions.
However, whether these actions are right or wrong is subject to debate. From a compliance perspective, the consistent and reliable nature of AI’s actions can contribute to advancing compliance objectives. Its ability to monitor and process millions of transactions across firewalls within seconds or milliseconds is truly remarkable. This capability proves especially beneficial for compliance-related tasks, although its value for creative endeavours may be limited.
AI can provide security professionals with valuable opportunities for research and insights into industry practices. However, it is important to note that AI cannot replace the creative aspects that marketing professionals bring to the table.
While AI can assist in gathering information available on the internet, it is necessary to consider intellectual property rights and related considerations. For instance, one can explore current trends in e-learning for topics like data protection.
However, the availability of such information on the internet may be limited due to intellectual property restrictions as organisations may sell their proprietary content. Therefore, the compliance benefits of AI lie more in areas such as research and processing large datasets, rather than in generating creative content.
To make topics like data protection and security engaging, you need a creative mindset. AI may excel in processing vast amounts of information, but it doesn’t make subject matter appealing at the moment. Ultimately, while AI can support research and data processing, the role of human creativity remains crucial in making compliance topics interesting and engaging.
Lesley Holmes joins fellow experts to explore these issues in depth at #RISK London, in the session: “AI in the workplace – the DPO framework and roadmap to avoid chaos.”
The session sits within a packed two-day agenda of insight and guidance at #RISK London, taking place on October 18 and 19 at ExCeL London.
#RISK London unites thought leaders and subject matter experts for a deep-dive into organisational approaches to handling risk. Content is delivered through keynotes, presentations and panel discussions.
Details
- Session: Day 2, AI in the workplace – the DPO framework and roadmap to avoid chaos
- Theatre: Data Protection & Privacy
- Time: 15:00 – 16:00 BST
- Date: Thursday 19 October 2023
#RISK London is also available on-demand for global viewing.
Book Your Place at #RISK London