
March of the machine

Source: Asia Insurance Review | Apr 2026

Since ChatGPT was first introduced to the world in 2022, AI has dominated the conversation. The details have changed over the past four years, from GenAI to agentic AI to edge AI, but the discussion points have remained the same: whether we will lose our jobs, whether AI is just a plagiarism machine, whether it is simply another tech bubble, and the ethics and risks of AI and the lack of governance surrounding it.
 
Talk to any (re)insurer and they will tell you that they have been exploring AI in their work since before OpenAI turned ChatGPT into a household name. Back then we called it automation, or machine learning, but the basics were the same: use a well-trained computer to handle the ‘grunt work’ within the industry and free up human processing power for more valuable and important tasks. AI chatbots were being rolled out as early as 2017, and US-based Lemonade made headlines (including in our own publication) for processing a claim in under five minutes.
 
To be fair to the insurance industry, there is indeed a lot of grunt work: mountains of historical data, structured and unstructured, to parse through; calculations for underwriting; risk assessment; claims processing. These are all tasks that AI can genuinely help with, and without committing too many mistakes.
 
In 2026, (re)insurers are moving from pilot projects to full-scale operational integration, although only a small percentage have embedded AI into their core financial systems. According to recent research by financial controls platform AutoRek, only 14% of the 250 insurers it surveyed across the US and UK have operationalised AI. In Asia the pace is slightly slower, but the industry here is also experimenting and innovating.
 
The other half of the AI story is the risks involved. While this should not affect the industry very much, public perception of AI-generated content (especially artwork) is negative, with many people calling tools such as Seedance, Midjourney and Nano Banana 2 (all AI models specialising in video and image generation) ‘plagiarism machines’ that use stolen art to churn out soulless images and misinformation.
 
Beyond that, AI still has a tendency to hallucinate and produce false information in order to please the user; the way AI is trained to be helpful has made it almost sycophantic, wanting to ‘help’ at the expense of informational accuracy. While engineers at OpenAI and Anthropic have been hard at work on the problem, AI still cannot be fully trusted.
 
At the same time, I do trust the (re)insurance industry to be mindful of these risks, given the nature of your work. You have been methodical and careful in your approach to AI, just as you have with every new piece of technology that has jumped onto the scene (remember blockchain?). If any industry is equipped to navigate the uncertainty of AI thoughtfully, it is one built entirely on the premise that the future is unpredictable, and that preparation is everything.
 
Ahmad Zaki
Editorial Director
Asia Insurance Review
 