Certified as an AI builder, AI specialist and AI master, and founder of the SYSTINFO consultancy, Boukary Ouedraogo has built a career at the crossroads of information systems and artificial intelligence. In this interview, he discusses the issues and challenges of implementing artificial intelligence (AI) in the insurance sector in Burkina Faso and, more broadly, in the CIMA zone.
Is artificial intelligence (AI) an opportunity or a threat for the insurance sector?
Artificial intelligence is neither a threat nor an opportunity in itself; it is a tool whose impact depends entirely on how we use it. For the insurance sector, AI is above all a transformative opportunity, provided it is properly regulated.
Firstly, it enables better risk assessment: by analysing thousands of data points in real time, insurers can offer fairer, more personalised pricing, so that each customer pays a premium that truly reflects their risk profile.
Secondly, process automation frees teams from repetitive tasks. Underwriting, claims processing, document analysis – all of these can be significantly accelerated.
Thirdly, AI improves the customer experience. Intelligent chatbots can answer 80% of customer questions 24/7 without human intervention. This creates increased customer satisfaction.
Finally, AI is a powerful ally against fraud. This is perhaps the most critical opportunity. Detection systems can identify fraudulent patterns that the human eye would never see, improving fraud detection by a factor of roughly 3.5, if not more.
But there are also risks. The first is excessive reliance on automation. Entrusting everything to AI without safeguards risks losing human judgement, which is sometimes essential, particularly in complex or contentious cases.
The second risk is algorithmic bias. If our training data is not representative or contains prejudices, AI will reproduce and amplify those biases, which could lead to unintended discrimination in pricing or in the acceptance of applications. This is why data governance and regular auditing of models are essential.
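To illustrate what such a regular audit could look like in practice, here is a minimal sketch in Python. It assumes a hypothetical table of model decisions with made-up column names ("region", "approved") and simply compares approval rates across groups to flag potential disparate impact; it is an illustrative check of the kind a data-governance process might run, not a full fairness audit.

```python
# Minimal bias-audit sketch: compare approval rates across a sensitive
# attribute (hypothetical column names, toy data; illustrative only).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str = "region",
                           decision_col: str = "approved") -> pd.Series:
    """Approval rate per group divided by the highest group's rate.

    Values well below 1.0 (e.g. < 0.8) suggest the model or the data
    may disadvantage that group and warrant a closer human review.
    """
    rates = df.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

if __name__ == "__main__":
    # Toy data standing in for model decisions on insurance applications.
    decisions = pd.DataFrame({
        "region":   ["urban"] * 6 + ["rural"] * 6,
        "approved": [1, 1, 1, 0, 1, 1,   1, 0, 0, 1, 0, 0],
    })
    print(disparate_impact_ratio(decisions))
    # rural    0.4   -> flag for manual review
    # urban    1.0
```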
The third risk concerns data security and confidentiality. AI processes massive volumes of personal data. A security breach or mismanagement of this data could have serious consequences for customers and for the insurer’s reputation.
Finally, there is the risk of sophisticated fraud. Paradoxically, the same technologies that help detect fraud can be used by fraudsters to produce increasingly convincing fake documents, deepfakes, identity theft, synthetic voice fraud and co-ordinated disinformation. This is a valid concern, and we are at a critical tipping point. AI does not create insurance fraud, but it amplifies it and makes it exponentially more sophisticated.
According to data from 2024-2025, AI-related insurance fraud is growing at a staggering rate. Deepfakes alone are estimated to have increased by 400% between 2023 and 2024, now accounting for 7% of all fraud attempts.
Even more alarmingly, synthetic voice fraud attacks on insurance companies increased in 2024. And overall, AI-assisted fraud pushed fraud rates up by 19% in 2024.
The main categories of insurance fraud are claims fraud, underwriting fraud, internal fraud (unauthorised access to insurance systems through identity theft) and reinsurance fraud, in which data is manipulated to distort premium calculations and risk models. Globally, 92% of companies reported financial losses due to deepfake incidents in 2024, with 10% reporting damages exceeding US$1 million.
In Africa and Burkina Faso, the situation is more complex but equally concerning. Although specific data on AI-related fraud is limited, several factors make Africa vulnerable: rapid adoption of technology without regulatory safeguards, controls and digitisation that are still only partial, limited awareness, and so on.
Are there any insurance lines or products that are more exposed to AI-related fraud?
Among the sectors most exposed according to the literature, motor insurance is the most vulnerable. This is because motor claims are numerous, relatively easy to falsify, and the amounts involved are often moderate, making them attractive to fraudsters. Deepfakes can be used to manipulate photos of car damage by creating fictitious damage or exaggerating minor damage, falsifying accident videos, or impersonating the driver or vehicle owner.
Home insurance is also a prime target, with fraudsters using AI to create deepfake images of water damage, fires or burglaries, falsify property documents or contracts, manipulate surveillance videos, and more.
Health insurance is also highly exposed, with particular risks relating to the falsification of medical documents (diagnostic reports, prescriptions), the creation of false medical invoices, identity theft to access cover, and the manipulation of electronic medical records (EMRs).
This is particularly concerning in Africa, where we are currently working on implementing electronic medical records in Mauritania. If these systems are not secure from the outset, they will become prime targets for fraudsters.
Travel insurance has moderate to high exposure because checks are often less rigorous than for home or motor insurance. Insurers who invest now in fraud detection for these specific lines of business will be best positioned to control this risk in the years to come.
AI is also an effective way to combat insurance fraud…
Absolutely. While AI amplifies fraud on the one hand, it also creates extraordinary detection tools on the other. It’s a race, but insurers who adopt AI for detection can regain the upper hand.
The types of fraud that AI can detect include claims fraud (detection through image and video analysis), underwriting fraud (detection through behavioural and biometric analysis), internal fraud (detection through access and transaction analysis), synthetic voice fraud, document fraud (detection through OCR and NLP), and reinsurance fraud (detection through predictive analysis).
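To make the "detection through predictive analysis" idea more concrete, here is a minimal sketch in Python, assuming a simple tabular claims file with hypothetical features (claim amount, days since policy start, number of prior claims). It trains an unsupervised anomaly detector to flag unusual claims for human review; this is one common approach among those listed above, not the specific system any particular insurer uses.

```python
# Minimal sketch of claims-fraud screening via unsupervised anomaly
# detection (hypothetical features and toy data; illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy historical claims: [claim_amount_fcfa, days_since_policy_start,
# number_of_prior_claims]. Real systems would use many more features.
normal_claims = np.column_stack([
    rng.normal(300_000, 80_000, 500),   # typical claim amounts
    rng.integers(30, 700, 500),         # claims spread over the policy life
    rng.poisson(1, 500),                # few prior claims
])

model = IsolationForest(contamination=0.02, random_state=0)
model.fit(normal_claims)

# Two new claims: one ordinary, one very large, filed days after
# subscription, from a claimant with many prior claims.
new_claims = np.array([
    [320_000, 400, 1],
    [2_500_000, 3, 6],
])
flags = model.predict(new_claims)   # +1 = normal, -1 = flag for review
print(flags)                        # expected: [ 1 -1 ]
```

In a real deployment, flagged claims would not be rejected automatically but routed to a human investigator, which also addresses the earlier point about keeping human judgement in the loop.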
Is this potential of AI being fully exploited by insurers in Burkina Faso and even in the CIMA zone?
The potential of AI is not yet being fully exploited by insurers in Burkina Faso and the CIMA zone. We are at an early stage. The majority of insurers in the CIMA zone still operate with partially digitised systems. Only a few pioneering companies have begun to deploy AI solutions for fraud detection.
Several factors explain this limited level of exploitation. The first is high implementation costs, as sophisticated AI solutions require significant investment in infrastructure and expertise. The second is insufficient local expertise: there are very few AI experts in West Africa, especially in advanced automation, so companies have to hire external consultants, which increases costs and delays.
There is also a lack of sufficient, high-quality data for training models: many Burkinabe insurers have fragmented, unstructured or incomplete data, which makes it difficult to deploy AI effectively.
Finally, there is cultural resistance to change, as the transition to AI is perceived as a risk rather than an opportunity. Traditional teams see AI as a threat to their jobs, not as a tool to help them. But we are at a tipping point. Insurers who invest in AI now will be tomorrow’s leaders. Those who wait risk falling behind. The time to act is now.
Among the major obstacles to the implementation of AI by insurance companies in Burkina Faso, and even in the CIMA zone, cost is the primary barrier. A complete AI solution, including cloud infrastructure, software licences, system integration and training, can cost between 10 and 15 million CFA francs for an SME insurance company working with a firm like mine, not including annual maintenance costs.
This is a considerable investment for companies with limited margins. Furthermore, the impact is not immediate; it takes 18 to 24 months to see a return on investment.
Also, many insurers still operate with legacy systems (old IT systems) that are not compatible with modern AI solutions. Integration is complex and costly. Furthermore, local hosting infrastructure is not always available or reliable.
Finally, to my knowledge, there is still no clear regulatory framework for AI in the insurance sector in the CIMA zone. Regulatory authorities have not yet established standards for the use of AI in fraud detection, algorithm transparency, personal data protection, auditing and compliance of AI models, etc.
What solutions would enable better use of AI in the insurance sector in Burkina Faso and across Africa?
Firstly, technological solutions adapted to the African context. AI solutions should not be carbon copies of Western solutions; they should be modular and progressive, starting with pilot projects (such as fraud detection in motor insurance) before being rolled out on a larger scale.
For example, APSAB-72h was designed specifically for the Burkinabe context, with local data and use cases. Secondly, solutions for developing expertise: localised training programmes, mentoring in which external experts accompany local teams for 12 to 18 months, and recognised AI certifications for African insurers.
Thirdly, a clear regulatory framework must be established, requiring AI models to be auditable and their decisions explainable, strengthening personal data protection laws and their enforcement, and offering tax breaks or subsidies to companies that invest in AI.
Finally, partnership and ecosystem solutions that bring together governments, regulators, insurers and technology providers, since no insurance company can solve this problem alone. A concrete step would be to create secure data pools in which insurers share anonymised fraud data to train more robust AI models.
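As a rough illustration of what preparing records for such a shared pool could involve, here is a minimal Python sketch. The field names, the shared key and the record structure are all hypothetical: direct identifiers are replaced with keyed hashes so insurers could cross-match cases without exposing personal data. A real pool would of course add legal agreements, governance and stronger privacy techniques on top of this.

```python
# Minimal sketch of pseudonymising fraud records before sharing them in
# a common data pool (hypothetical field names; illustrative only).
import hashlib
import hmac

POOL_SECRET = b"replace-with-a-key-agreed-by-the-pool"  # placeholder key

def pseudonymise(identifier: str) -> str:
    """Keyed hash of an ID or policy number, stable across insurers."""
    return hmac.new(POOL_SECRET, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def to_pool_record(claim: dict) -> dict:
    """Drop personal fields, keep only what fraud models need."""
    return {
        "claimant_ref": pseudonymise(claim["national_id"]),
        "line": claim["line"],            # e.g. "motor", "health"
        "amount_fcfa": claim["amount_fcfa"],
        "fraud_confirmed": claim["fraud_confirmed"],
    }

print(to_pool_record({
    "national_id": "B123456789",
    "name": "Dropped before sharing",
    "line": "motor",
    "amount_fcfa": 1_450_000,
    "fraud_confirmed": True,
}))
```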

