We are delighted to confirm that RegTech adviser Stan Yakoff will speak at #RISK A.I. Digital, opening tomorrow.
Livestreaming March 20 and 21, #RISK A.I. Digital examines how we can harness AI technologies responsibly, ethically and safely.
Across two days, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI.
Event speaker Stan Yakoff is a RegTech adviser and law professor at Fordham Law School.
Stan has spent over a decade at firms such as Citadel Securities, Marshall Wace, and Knight Capital Group.
Stan Yakoff will be at #RISK A.I. Digital to discuss AI in compliance and the business benefits that grow out of a proactive approach to regulatory alignment.
Below, Stan goes over his professional journey and introduces the key issues of his talk.
Related Session:
AI in Compliance. Real World Use Cases and Business Value
- Time: 19:45 – 20:15 GMT
- Date: Wednesday 20th March (Day 1)
Could you outline your career so far?
Over the past decade I have pioneered RegTech, surveillance, compliance, and analytics programs from the ground up, in addition to covering sophisticated trading and technology environments across a spectrum of financial products globally.
I serve as an Adviser to RegTech firms focusing on making legal and compliance more effective and efficient, in addition to teaching Trading, Risk Management, and Market Structure Regulation at Fordham University School of Law.
I hold a J.D. from Fordham University School of Law, where I was an Associate Editor on the Journal of Corporate & Financial Law. Additionally, I hold an M.Eng. in Engineering Management, an M.S. in Pharmaceutical Manufacturing Engineering, and an M.A. in Technology, Policy & Ethics from Stevens Institute of Technology. When not building or lecturing, you can find me on the tennis courts.
Could you describe the practical applications and subsequent value of introducing AI to compliance functions?
The value proposition comes from treating compliance as an engineering problem: an organization needs to know, and be able to demonstrate, that all of its operations are in accordance with applicable rules and regulations.
Daily organizational operations generate and consume data at significant scale across several dimensions: volume, velocity, variability, the number of producers and consumers of that data, and geography.
It is thus quite difficult, if not impossible, to manage this manually and in a consistent manner. This is precisely where technology and a data-driven approach to compliance offer significant benefits.
Some of the most practical applications of incorporating technology (inclusive of AI functionalities) to meet regulatory compliance needs include:
a. Transaction Monitoring Systems – being able to systematically review transactions (e.g., trading activities, cash movements such as in an AML context) for anomalous indicators suggestive of suspicious activity (a minimal illustrative sketch follows this list). For example, we recently saw a nearly $350M fine to a large bank for lapses in its trade monitoring program over a multi-year period.
b. Communication Monitoring and Archival Systems – being able to systematically archive a firm’s communications (e.g., its books and records) while simultaneously analyzing them for anomalous indicators suggestive of violating the firm’s policies and procedures and applicable regulatory requirements. For example, over the last year, we’ve seen the SEC impose over $2 billion in fines for utilizing off-channel communications in contravention of regulatory requirements.
c. Know Your Customer (KYC) Onboarding Systems – being able to seamlessly source KYC and AML-related information, centralize it, analyze it for possible red flags, continuously monitor client profiles (e.g., regulatory disciplinary history, adverse media) for changes to initial KYC ratings, and enable investigators to review alerts, manage cases, and report on findings, all in one platform.
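As a rough illustration of the kind of systematic review described in point (a), the sketch below flags transactions whose size deviates sharply from an account's historical baseline. The field names, the z-score rule, and the threshold are illustrative assumptions for this article, not a description of any particular production surveillance system.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    account: str
    amount: float

def flag_anomalies(history: list[float], new_txns: list[Transaction],
                   z_threshold: float = 3.0) -> list[Transaction]:
    """Flag transactions whose size deviates sharply from an account's historical baseline.

    A single z-score rule stands in for the far richer scenarios a production
    surveillance system would cover (structuring, wash trades, spoofing, etc.).
    """
    baseline_mean = mean(history)
    baseline_std = stdev(history) or 1.0   # guard against a flat (zero-variance) baseline
    flagged = []
    for txn in new_txns:
        z_score = abs(txn.amount - baseline_mean) / baseline_std
        if z_score > z_threshold:
            flagged.append(txn)
    return flagged

if __name__ == "__main__":
    prior_amounts = [1_000, 1_200, 950, 1_100, 1_050]   # an account's historical cash movements
    incoming = [Transaction("ACC-1", 1_150), Transaction("ACC-1", 48_000)]
    for txn in flag_anomalies(prior_amounts, incoming):
        print(f"Escalate for review: {txn.account} amount={txn.amount}")
```

In this toy run, the routine cash movement passes quietly while the outsized transfer is escalated for a human reviewer, which is the basic workflow a transaction monitoring system automates at scale.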
What are some of the key challenges that businesses face as they integrate AI with compliance functions?
Perhaps it’s most helpful to outline several of the mistakes I’ve seen firms make:
a. Trying to rush a ‘solution’ without clearly understanding and articulating the problem statement being solved. Counter to the hype seen today, there are plenty of problem statements for which ‘AI’ is not the most effective answer and a simpler, more cost-efficient, and timelier approach exists. For example, the solution may instead be: (1) process automation or (2) incorporating additional contextual data available internally or externally to aid decision-making.
b. Failing to understand and control data lineage from its generation to its ultimate consumption; if data necessary for accurate operations is not within your control or transparent oversight, you risk getting spurious outputs without even knowing it.
c. If you’re ‘buying’ a solution rather than ‘building,’ taking assumptions about AI availability and operations for granted (especially from a vendor) without verifying or validating them. Instead of assuming it works, assume it doesn’t, reverse engineer why that happened, and determine what controls you (or the vendor) would need to alert to and mitigate these outcomes. At the same time, it’s critical that products are thoroughly tested to ensure marketing meets reality before any actual purchases are made (a lightweight example of such pre-purchase validation follows this list).
d. Making decisions in one region, which carries its own regulatory requirements, without properly accounting for stakeholders in other regions that may have different regulatory requirements governing ‘AI’ (e.g., privacy) and different user requirements. Failing to take into account others’ user experiences and requirements often results in users working for the technology (thus actually creating inefficiencies) rather than the technology working for the users.
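On point (c), one lightweight way to avoid taking vendor claims on faith is to benchmark the product against a small hand-labeled sample before purchase. The sketch below is a hypothetical example of that ‘verify, don’t assume’ step; the interface, field names, and thresholds are assumptions, not any specific vendor’s API.

```python
from typing import Callable

def validate_vendor_alerts(
    alert_fn: Callable[[dict], bool],          # the vendor's "does this record alert?" interface (hypothetical)
    labeled_sample: list[tuple[dict, bool]],   # (record, should_alert) pairs labeled by compliance staff
    min_recall: float = 0.9,
    max_false_positive_rate: float = 0.2,
) -> dict:
    """Compare vendor alerts against a hand-labeled sample and report whether
    the tool meets the firm's minimum detection and noise thresholds."""
    tp = fn = fp = tn = 0
    for record, should_alert in labeled_sample:
        alerted = alert_fn(record)
        if should_alert and alerted:
            tp += 1
        elif should_alert:
            fn += 1
        elif alerted:
            fp += 1
        else:
            tn += 1
    recall = tp / max(tp + fn, 1)
    fp_rate = fp / max(fp + tn, 1)
    return {
        "recall": recall,
        "false_positive_rate": fp_rate,
        "meets_thresholds": recall >= min_recall and fp_rate <= max_false_positive_rate,
    }

if __name__ == "__main__":
    def vendor_alert(record: dict) -> bool:
        # Toy stand-in for a vendor model: alert on any transfer above 10,000.
        return record["amount"] > 10_000

    sample = [
        ({"amount": 50_000}, True),
        ({"amount": 2_000}, False),
        ({"amount": 15_000}, True),
        ({"amount": 9_500}, True),   # a known-suspicious transfer the toy rule misses
    ]
    print(validate_vendor_alerts(vendor_alert, sample))
```

In this toy run, the stand-in rule misses a labeled suspicious transfer and fails the recall threshold, which is exactly the kind of gap this check is meant to surface before a purchase decision.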
Don’t miss Stan Yakoff exploring these issues in depth at #RISK A.I. Digital in the session: AI in Compliance. Real World Use Cases and Business Value.
Organisations are turning to AI to streamline compliance processes, mitigate risks and unlock new opportunities for innovation.
This panel discussion explores the practical applications of AI in compliance functions and the tangible business value it delivers across diverse sectors.
The dialogue will delve into a range of real-world use cases where AI technologies are transforming traditional compliance practices, from regulatory monitoring and risk assessment to fraud detection.
Panellists will share insights from their experiences implementing AI-driven solutions, highlighting key challenges, success factors and lessons learned along the way.
Also on the panel:
- Grace Beason, Director of Governance, Risk and Compliance, Guidewire
- Supra Appikonda, COO, Co-Founder, 4CRisk.ai
- Meera Banerjee, Regulatory Compliance, Investigations & Forensics Partner, PwC
Details
AI in Compliance. Real World Use Cases and Business Value
- Time: 19:45 – 20:15 GMT
- Date: Wednesday 20th March (Day 1)
The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming March 20 and 21.
Discover more at #RISK A.I. Digital
AI is a game changer, but the risks of generative AI models are significant and consequential.
The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.
Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.