Seven key takeaways from the #RiskAI conference

RiskAI, organised by GRC World Forums, was an in-depth walk through the latest thinking in AI: the opportunities, the risks and the key considerations. The rapid adoption of AI technologies is creating new challenges for corporate governance and risk management. Here are my key insights from the conference:

1) Board-Level Concerns
Directors need to pay attention: there’s growing potential for personal liability related to AI misuse, exemplified by recent cases like the Dutch government benefits scandal [link]. Alarmingly, an estimated 70% of organisations are using AI without their Board’s knowledge, setting the stage for a potential reckoning as the technology moves beyond the initial hype cycle.

2) Regulation and guidance overload (and underload?)
The regulatory environment is evolving quickly. The EU AI Act leads the way as the most comprehensive framework, categorising AI systems into four risk levels, from prohibited through to minimal risk. Meanwhile, the US has introduced multiple pieces of legislation, from executive orders to state-level laws, while the UK is taking a more principles-based approach, relying on existing regulation. Add to that over a thousand pieces of guidance catalogued by the OECD, and this is a complex landscape to navigate.

3) Risk Management Challenges – and the potential for rebirth
Only 21% of organisations consider themselves leading edge in AI governance, and just 29% of CFOs and CROs believe their AI risks are being properly addressed. Key areas requiring attention include:

  • Data privacy and security (with models potentially leaking sensitive information)
  • Model governance and monitoring
  • Clear audit trails and validation processes (a logging sketch follows this list)
  • Post-market monitoring and crisis management protocols
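
To make the audit-trail point concrete, here is a minimal sketch of what logging an AI-assisted decision could look like. This is my own illustration rather than anything presented at the conference, and the field names and file path are hypothetical:

```python
import json
import hashlib
from datetime import datetime, timezone

def log_model_decision(model_id, model_version, inputs, output, decided_by,
                       log_path="ai_audit_log.jsonl"):
    """Append one tamper-evident record of an AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,          # redact or hash sensitive fields before logging
        "output": output,
        "decided_by": decided_by,  # the system, or a named human reviewer
    }
    # A hash of the record contents supports later integrity checks.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Even a simple append-only log like this gives you something to interrogate when a decision is challenged: who (or what) decided, with which model version, on what inputs.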

For a comprehensive inventory of AI-related risks, check out the excellent IBM AI Risk Atlas, with thanks to Ian Francis of IBM for highlighting this.

One of the more optimistic takes I heard was from Kyle Martin, who asked whether AI could be a way to invigorate risk management. AI’s capabilities could allow us to speed up analysis, deliver more timely insights and simplify the landscape – an alluring possibility.

4) Building Trust is crucial
Trust is everything in this game, and organisations need robust strategies to build it with stakeholders, including:

  • Clear validation processes for AI outputs
  • Early warning systems for model drift, bias, discrimination or errors (one drift measure is sketched after this list)
  • Comprehensive AI use case inventories
  • Strong human oversight in critical decisions
  • Regular testing and verification procedures
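
On the early-warning point, one widely used drift measure is the Population Stability Index (PSI), which compares the distribution a model was validated on with what it sees in production. A hedged sketch follows; the thresholds quoted are the common rule of thumb, not something prescribed at the conference:

```python
import numpy as np

def population_stability_index(baseline, current, n_bins=10):
    """PSI between a baseline sample (e.g. validation data) and live data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # A small floor avoids log-of-zero in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustration: a model input whose live distribution has clearly shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # distribution at validation time
live = rng.normal(1.0, 1.3, 2_000)       # distribution in production
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}")
if psi > 0.25:
    print("Drift alert: trigger a review of the affected model")
```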

Also, we need to be able to invoke our existing crisis management protocols when things go wrong – mobilising the right people, putting out communications, turning off the offending system and managing the fallout. These are well-honed skillsets for risk professionals.

Another emerging concept is an AI Kitemark, scoring AI models on their use, their training and their oversight – one to watch.

5) Diversity and inclusion are critical in AI development

Right now, AI models are being developed to:

  • Decide who gets access to funding, banking, investment
  • Assist policing and security (through e.g. facial recognition)
  • Filter CVs and put people forward for jobs

Panellists spoke convincingly about evidence of bias already in action. If misapplied, AI models will widen existing disparities, not solve them.

To read more on this topic, I highly recommend the Machine Race blog by Suzy Madigan, as well as a report co-developed by Care and Accenture on what can be done to involve the Global South in AI development.

6) Competitive Advantage

Edward Zyzskowski, who built some of the earliest neural networks and contributed to the development of IBM’s Watson, talked about how LLMs vacuum up the “Diamonds, dust and dirt” of the internet [Ed, if that isn’t your next book title, you’re missing a trick]. True competitive advantage comes from adding your own organisational knowledge.

Daniel Hulme of WPP reinforced this, saying that the real competitive edge in AI comes not from the technology itself, but from:

  • Proprietary data and differentiated insights
  • Specialised talent
  • Leadership that understands AI’s transformational potential, and
  • Effective deployment that empowers rather than replaces humans

7) Looking Forward
Oliver Patel (LSE, AstraZeneca) spoke convincingly about AI’s ability to speed up clinical trials and accelerate time to market in the pharma industry – hence the importance of setting the right guardrails so that innovation can still occur. As AI continues to evolve, organisations should focus on:

  • Taking a risk-based, 80/20 approach to governance
  • Developing proportionate human-in-the-loop practices (a routing sketch follows this list)
  • Maintaining human judgment and avoiding de-skilling
  • Ensuring diversity in AI development to prevent bias
  • Building robust frameworks for testing, evaluation, and validation
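
To illustrate the human-in-the-loop item above: a hypothetical routing rule in which routine, high-confidence decisions are automated, while anything uncertain or high-impact is escalated to a human reviewer. The threshold and field names are my own illustrative assumptions, not a standard:

```python
# Hypothetical confidence-threshold routing for one AI-assisted decision.
def route_decision(prediction, confidence, high_impact, threshold=0.9):
    """Decide whether the model's output stands or goes to human review."""
    if high_impact or confidence < threshold:
        return {"decision": None, "route": "human_review",
                "reason": "high impact" if high_impact else "low confidence"}
    return {"decision": prediction, "route": "automated", "reason": "routine"}

print(route_decision("approve", confidence=0.97, high_impact=False))  # automated
print(route_decision("approve", confidence=0.97, high_impact=True))   # human review
print(route_decision("reject", confidence=0.55, high_impact=False))   # human review
```

The point is proportionality: the threshold, and what counts as “high impact”, should be set per use case rather than applied uniformly.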

While AI offers tremendous opportunities, organisations must “know their AI” and manage the risks proactively before they become problems. Success requires striking a careful balance between innovation and control, while maintaining strong human oversight of critical decisions.

[Notes synthesised with the help of Claude AI]

For a bonus piece “Eight ways in which AI is altering corporate approaches to security”, take a look here.
