The UK’s National Cyber Security Centre (NCSC) has issued a warning about the potential misuse of chatbots through “prompt injection” strikes, which could lead to scams and data thefts.
Do you know what data is being used to ‘train’ the AI in your organisation?
Do you have a process for managing ‘risk’ in the use of AI?
Are employees being trained in the use of AI?
Attend #RISK to learn & knowledge share:
Learn more about #RISK Amsterdam – 27th & 28th September 2023
LEARN MORE ABOUT #RISK LONDON – 18th & 19th October 2023
Chatbots, powered by artificial intelligence, are designed to respond to user prompts, and are often used in online banking and shopping. These chatbots operate based on large language models (LLMs) like ChatGPT and Google’s Bard, trained on data to generate responses that mimic human conversation.
Anxieties are now arising from the fact that chatbots interact with third-party applications and services, making them susceptible to malicious prompt injection. In these attacks, bad actors can input prompts to control and manipulate the chatbot’s behaviour, causing the tool to act in unintended ways. For instance, inputting unfamiliar statements or specific combinations of words can lead chatbots to generate inappropriate or harmful content, or even expose private information.
In one example highlighted by the NCSC, a student found a way to hack Microsoft’s Bing Chat by requesting it to “ignore previous instructions,” inadvertently revealing the chatbot’s entire prompt list. Similarly, security researcher Johann Rehberger found that ChatGPT could be prompted to respond to new prompts through third-party inputs, potentially accessing sensitive data, such as YouTube transcripts.
The NCSC has emphasised how prompt injection attacks can have consequences that very much hit home in the real world, if cyber defences and protection strategies are not as strong as they could be. The NCSC also stressed the importance of designing systems with IT security prioritised at every juncture, implementing rules-based systems as well as machine learning models to prevent damaging actions even when prompted.
Know the risks
As we move deeper into the era of AI, business leaders have to understand the risks that come with the application of evolving technologies and discover the best strategies to shore up digital infrastructures.
The issues take centre stage at #RISK Amsterdam this month, where experts discuss the threats, challenges and solutions defining the risk landscape of today.
Not to be missed…
Session: AI Unleashed - The Cyber Security Battle Against Weaponised AI
Date: Wednesday 27 September, 2023
Location: Privacy, Security & ESG Theatre
Time: 14:00pm – 15:00pm (CET)
Panellists will discuss the opportunities that AI presents for improving security and detecting threats, as well as the challenges that organisations face when implementing AI solutions.
Experts will debate the use of AI to protect against cyberattacks, discussing topics such as the potential of AI to automate security processes, the limitations of current AI technology, and the ethical considerations surrounding the use of AI in security.
Session: Streamlining Risk Management Across the Customer Journey with AI and process automation
Date: Wednesday 27 September, 2023
Location: GRC & Financial Risk Theatre
Time: 11:00pm – 11:30pm (CET)
Explore the latest strategies for fraud detection as we delve into the realm of AI-driven algorithms and machine learning. Discover a recently implemented loss prediction model and how machine learning technologies help thwart fraudulent activities across the entire customer journey, fortifying security while ensuring a seamless experience.
#RISK Amsterdam
With over 50 exhibitors, keynote presentations from over 100 experts and thought leaders, panel discussions, and breakout sessions, #Risk Amsterdam 2023 is the perfect place to learn about the present and future risk landscape.
Click here for the full #RISK Amsterdam agenda
Click here to register for #RISK Amsterdam – 27th & 28th September 2023
Related Events:
Do you know what data is being used to ‘train’ the AI in your organisation?
Do you have a process for managing ‘risk’ in the use of AI?
Are employees being trained in the use of AI?
Attend #RISK to learn & knowledge share:
No comments yet