Africa Ahead

How AI risks in compliance spill over into broader insurance sustainability

The elephant in the room in the 21st-century risk landscape is not how insurers and reinsurers are deploying artificial intelligence (AI) across the insurance value chain, but the extent and speed at which the technology is being adopted by businesses and households. Insurers that fail to recognise and respond to this shift face emerging exposures that threaten their long-term sustainability.

The risks inherent in the integration of technology in everyday life are not lost on governments or regulators. In April 2026, South Africa’s minister of communications and digital technologies published a draft National Artificial Intelligence Policy for public comment. The document acknowledges the “immense range of applications” for AI, and declares a primary objective of identifying “core principles to guide sectoral approaches”.

Some four months earlier, the country’s Financial Sector Conduct Authority (FSCA) and Prudential Authority (PA) released an inaugural report on the use of AI in the financial sector. This report gives the first comprehensive overview of AI adoption, including machine learning (ML) and generative AI (Gen AI), within South Africa’s financial institutions.

The report’s key findings spanned explainability and transparency; ethical standards and oversight; efficient and effective disclosures; governance frameworks; and the need for a coordinated, forward-looking approach across all role players, including institutions and regulators. Failing to adapt these areas to AI-driven change will alter both the customer-facing and internal risk postures of financial services firms.

In this context, the inclusion of AI on the programme at the Joint IFCA and CISA (International Federation of Compliance Associations and Compliance Institute South Africa) Summit, held recently in Cape Town, was not surprising. The event organisers invited a panel of compliance experts to share their experiences of managing compliance and risk as their firms pushed into the digital world. Their insights translate readily into insurance and risk management across all sectors of the economy.

Samantha Padayachee, managing executive for group compliance at Vodacom, singled out change management and data quality as two major talking points. Under the former, she suggested making sure that all employees were ‘on board’ before undertaking an automation or digitalisation journey; under the latter, that firms realise there will always be ‘quality of data’ issues.

Rianne Potgieter, chief executive at IFCA and moderator of the discussion, asked Nevellan Moodley, a partner at BDO, how he approaches data and technology innovations at the firms he consults to. He said that decades ago, consultants would interact with distinct information technology (IT) teams; nowadays, tech innovation spans divisions and pay grades, introducing new risk.

“In this world of AI, we are starting to move back to somewhat of a shadow IT environment – everyone has an AI agent running,” Moodley said.

He encouraged firms to adapt their IT risk policies to enable quicker problem resolution, but warned of the potential for data compromises if haphazard, unchecked access was given to company servers.

Firms must also keep a close eye on the generalisation of IT skills. As recently as 2019, new hires in AI roles tended to be deeply technical individuals. From 2022, the release of ChatGPT and other large language models (LLMs) caused an inflection point that saw non-technical hires with AI competencies outpacing appointments among computer engineers and coders.

“You want to make sure that your compliance hires have an understanding of AI and a little bit of coding,” Padayachee said.

“Our new anti-money-laundering (AML) analysts have to complete a robotic process automation (RPA) course within three months of joining.” A basic understanding of AI and data science will become non-negotiable across sectors and roles in the future.

Potgieter introduced the concept of an adaptability quotient (AQ) to stand alongside the more conventional EQ and IQ measures when assessing new hires. Moodley agreed, given the rate of change in the AI and technology space.

“Success goes beyond knowing the technology and knowing how to use it, to finding the use cases and [figuring out] how you are going to add value,” he said. Tomorrow’s IT leaders may be graduating from the College of TikTok or YouTube rather than MIT.

The panel then reflected on the pervasiveness of Agentic AI or AI agents in future workplaces. “Agentic AI is a new buzzword,” said Bradley Elliott, CEO at Rely Comply. “We use it predominantly for onboarding, and deploy a number of AI agents that can help in our case management system.” In the latter use case, the AI agents help human analysts and compliance officers to work cases more effectively.

Elliott suggested that firms had some way to go to maximise the benefits from non-AI RPA and traditional AI before exploring Agentic AI as a next step.

“There are a ton of AI and Agentic AI proof-of-concepts, particularly in the financial services sector, but a lot of these are not being adopted because they are not creating the expected value,” he said.

The discussion turned to the mismatch between legacy product and solutions development timeframes and the pace of technology. Nowadays, by the time a large bank or insurer gets a tech-backed solution through legal and risk, the underpinning technology has moved on. The trick is for firms to stay in a state of readiness: always asking the right questions and prepared for rapid experimentation.

Moodley made a similar observation that will leave traditional risk managers cold, saying that AI models were far better than humans at building Agentic AI.

He offered an example from his pending shift into the fraud-tech world: within hours, his start-up was running natively on Claude’s agentic tooling. A more traditional organisation might take 18 months to make the same transition, he said, by which time another version of Claude will probably have arrived.

The new reality is simple: you cannot wait 22 months for risk and compliance to sign off on a solution that takes just three days to build.

Potgieter asked what organisations should avoid when implementing AI in compliance. The short answer: do not wait for AI solutions to be perfect.

“AI is scary and exciting, but it also represents a great opportunity for a country like South Africa, where we face significant unemployment and growth challenges,” Moodley said.

That question leads to another: how can compliance and risk professionals trust AI? According to Elliott, the starting point is to ensure the solution is both explainable and auditable. “You do not want to have a black box within your organisation … the models are prone to hallucination and will do strange things,” he said. The big challenge becomes ensuring consistent outcomes as AI models and data inputs evolve.

In her closing remarks, Padayachee called on the audience to go the extra mile to understand AI and other emerging technologies.

“If you have that understanding, you will be able to apply it in terms of the function that you are performing,” she said. “AI is here to stay. It is going to evolve very quickly. And we, as a cohort and as a community, need to make sure we keep up with the times.”

In the panel’s case, this entails incorporating AI knowledge in the compliance field. For general insurance professionals, it means applying that knowledge across the value chain to ensure sustainability. And for the rest of us, it means acknowledging the role of AI in ‘insert your core competency or job function here’.
