Unlike traditional systemic risks, which can be mitigated through controls like geographic distribution and industry variation, risks associated with AI cannot be addressed in the same way. Much of this is due to the deployment of functionally identical AI infrastructure across organisations and industries, which may contain compromised training data or other unintended performance characteristics.
AI entered the top 10 global risks in Allianz's 2025 Risk Barometer for the first time, though approximately half of the 1,450 respondents believed that the technology brings more benefits than risks.
The report also noted that only 15% of respondents thought the opposite. The remaining 35% said that the impact AI was having on their industry “was neither positive nor negative”.
Moreover, the report also noted that “there are no signs that the AI frenzy will stop soon”, as most companies have only just started to integrate AI tools in their processes. This integration is “driving demand for all things related to AI”, the report added.
Regulatory risk
Notably, changes in legislation and regulation came fourth in Allianz's global list, with the risk linked to what the report called a "'regulatory Wild West' if grandiose announcements are followed by action, particularly (related to) AI".
Lockton Re and Armilla’s report, titled ‘Ready or Not: The Impact of Artificial Intelligence on Insurance Risks’, seems to echo this sentiment, as it stated, “There is a patchwork of regulatory regimes in place with a variety of evolving obligations relating to the development and deployment of AI models.
“Geographic, industry, and technology-based regulations have a range of implications, which could create risks in the event of non-compliance.”
Mitigating risk
According to Lockton Re and Armilla’s report, “Sound regulation evolves over time and follows the path of technology adoption”.
“But critically, effective regulation depends on laying out principles and providing guidance on how to achieve them,” the report stated.
“In this regard, the evolution of standards is equally important.”
The report also noted that effective standards establish clear expectations of what good regulation looks like, providing regulators and insurers with a common framework for measuring and mitigating risks.
“Standards enable underwriting and pricing at scale based on a common benchmark,” the report also said. Citing cyber insurance standards, such as ISO 27001, as an example, the report noted that these can “set clear expectations for security posture, allowing underwriters to assess risk consistently against independent criteria across a portfolio”.
However, the report stated that “both regulations and standards relating to AI are still evolving”.
AI’s systemic risk
“While many traditional policies address risks that typically manifest as individual losses, systemic AI vulnerabilities arise from structural characteristics that create inherent correlation across seemingly diverse portfolios,” Lockton Re and Armilla’s report showed.
But while traditional commercial insurance, such as property, can address systemic risk through controls like geographic distribution and industry variation, the risks associated with AI cannot be addressed as such, the report highlighted.
For instance, the report noted that if organisations across different sectors deploy functionally identical AI infrastructure, and those widely deployed models contain compromised training data or other unintended performance characteristics, "failures can propagate simultaneously across multiple organisations regardless of geography, industry, or individual risk management practices".
Additionally, the report also said risk may be exacerbated “if the same underlying foundational model is used across different modalities”.
“AI systemic risks often stem from architectural characteristics inherent to the technology itself,” the report stated.
Moreover, the report also pointed out that “AI systems evolve at speed via rapid model updates, architectural changes and deployment pattern shifts”.
AI and the insurance industry
“Effective underwriting of AI risk requires a fundamentally different approach compared to traditional commercial lines,” said Lockton Re and Armilla’s report.
"In addition to focussing on individual policyholder risk management practices, underwriters must evaluate portfolio-level exposure concentration through shared model dependencies, architectural vulnerability to coordinated attacks and the capacity to detect failures before substantial liability accrues."
As a result, the report pointed out that the challenge for the insurance industry was not whether AI may create systemic risk events, but when, and whether "underwriting practices can keep pace".
“AI risk concentration operates through different mechanisms than traditional commercial insurance. Geographic and industry diversification provide insufficient mitigation when multiple policyholders deploy functionally near-identical systems with common vulnerabilities,” stated the report.
"The tools that enable effective traditional commercial insurance portfolio management – performance metrics, risk maturity assessments and loss history analyses – become unreliable indicators when applied to AI systemic risk."
Despite this challenge, the developing market means that insurers need to balance the opportunity that covering AI presents "against fundamental uncertainty created by risk concentration mechanisms that differ structurally from other classes", the report showed.
“The question is not how traditional commercial insurance frameworks can be adapted for AI risk but whether entirely new approaches to portfolio management, loss correlation analysis and catastrophic exposure modelling can be developed,” said the report.
The future
According to Lockton Re and Armilla’s report, generative AI “is only the first wave of a broader AI era”.
Additionally, insurers focussed on addressing the risks the technology creates have many opportunities ahead, the report noted.
"Insuring these risks will be as important as the marine insurance offered to early exploratory sailing ships," the report stated, noting that when it comes to AI, clarity is sought by both policyholders and insurers.
Said the report, "The sooner clarity is established for AI risks, the better."