We are delighted to confirm that CMO Janet Johnson will speak at #RISK A.I. Digital this month.
Livestreaming March 20 and 21, #RISK A.I. Digital examines how organisations can harness AI technologies responsibly, ethically and safely.
Across two days, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI.
Event speaker Janet Johnson is CMO and Founder at AI Governance Group. Today, she teaches, supports clients’ marketing and sales efforts, and focuses on the next tech transformation: AI.
Janet will be at #RISK A.I. Digital to discuss education, literacy and training in AI.
Below, she goes over her professional journey and takes a look at the key issues.
Related Session:
The need for increased AI literacy, education and understanding
- Time: 14:30 – 15:00 GMT
- Date: Wednesday 20th March (Day 1)
Could you outline your career pathway so far?
I became enamoured of technology when I took my first mobile phone call 51 years ago as an AT&T operator. In a summer job while in high school, I discovered a local gentleman who made phone calls (gasp!) from his car and needed help dialling numbers. Every time I saw that flickering light from the Kalama, WA central office, I grabbed it just to speak with him.
I began my career in tech in the Year of the Mac, 1984, and have never looked back. Over time, I worked for Apple, Enron (yes, I have stories), early Voice Over IP pioneer eFusion, Serena Software and more, as a business development leader, product marketing leader and digital marketing leader.
For the past few years, I’ve focused on education technology, serving as Chief Growth Officer for a financial services firm for charter schools, where we invested more than $2bn in schools around the US, safely managing risk and providing our schools additional services.
I built the Enrolment Marketing product line in 2019 that’s still thriving today. I’m an adjunct professor at Portland State University, teaching PR, social media and AI for the School of Business.
My journey to AI Governance began 20+ years ago when my company (Marqui) paid bloggers to blog about us, in the first paid influencer program. We caused quite a ruckus (before “going viral” was a thing), and I quickly understood the power of two-way conversations online.
I worked with national and international organizations (Hanna Andersson, Sur La Table, Hickory Farms, Inter-American Development Bank, K12.com and more) to help them understand how to safely enter the social media space as businesses, training their employees, providing guidelines and guardrails, and playbooks on social media. (And I did my best to protect clients while truth, trust and transparency disappeared along the way.)
When generative AI launched in November 2022, I realised we were in for another, larger tech transformation. It was going to be paramount for organisations to deeply understand how to leverage AI tools while protecting their employees, their IP, their brands and their reputations.
So, I gathered my favourite, trusted attorneys, anthropologists, risk management professionals, data scientists and brand and business builders (all C-Level folks) to form the AI Governance Group.
Could you describe the current risk landscape in terms of employees using generative AI responsibly? What are the primary problem areas?
Employees are using genAI tools in particular without training, without clear company policies, and without a deep understanding of how AI systems have been trained, and on whose (and what kind of) data.
A Salesforce study in November of 2023 of 14,000 employees revealed that, globally:
- 55% of employees have used unapproved AI tools at work
- 45% of workplace generative AI users have used banned AI tools at work
- 69% of whom have never had ANY training
- 71% have never had training on using AI ethically
Meanwhile, SaaS organisations are shoving AI into every technology; according to Zylo, the average organisation with up to 500 employees has a tech stack of more than 169 apps in use. The need for education, policies, guidelines and guardrails is clear.
The risks of misuse, misunderstandings and reputational damage are seen every day in news coverage. Employees are begging for training. Organizations are looking for guidance. Communities need governance around AI.
We believe in the bright future of AI-augmented work transforming global challenges. We are deeply aware of the potential for the pitfalls along the way. And we’re actively working with organisational leaders to build a risk-managed use of AI in their organisations.
What are the main steps organisations can take to ensure workers adhere to best practice when using emerging technologies?
This is a board-level conversation. But it’s not yet happening at a broad level, especially in SMB-sized orgs. Business leaders must:
- First, recognise that the AI horse is out of the stable. We’re not going to be able to ignore it any more.
- Understand the deep need for training and education.
- Give employees clear guidelines and policies to protect their work, their brand and their customers.
- Ensure the data that feeds these systems is well understood.
- Apply critical thinking all along the way.
Don’t miss Janet Johnson exploring these issues in depth at #RISK A.I. Digital in the session:
The need for increased AI literacy, education and understanding
Over half (55%) of employees globally have used unapproved generative AI tools in their work, and 40% have used banned generative AI tools, Salesforce says. Of those, 69% have never received or completed training, increasing risks dramatically.
There’s never been a greater need to protect your employees, your IP, your brand’s reputation and your organisation as a whole as we enter this newest technology transformation.
Exclusively at #RISK A.I. Digital, we explore the why and how of this crucial issue.
Details
The need for increased AI literacy, education and understanding
- Time: 14:30 – 15:00 GMT
- Date: Wednesday 20th March (Day 1)
The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming through March 20 and 21.
Discover more at #RISK A.I. Digital
AI is a game changer, but the risks of generative AI models are significant and consequential.
The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.
Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.