AND-E (Aioi Nissay Dowa Europe)

AND-E achieves 120% improvement in fraud detection with continuously learning AI model

  • 120% improvement in fraud detection
  • 500% increase in automated referrals retained by the fraud department
  • 50% reduction in false positives

The Challenge

Insurance fraud costs the UK industry an estimated £1.2 billion annually, and motor claims are among the most targeted product lines. AND-E (Aioi Nissay Dowa Europe), a European motor and property insurer, faced a growing gap between the sophistication of fraud attempts and the capability of its legacy detection infrastructure. Its rules-based system could flag and triage suspicious claims, but static rule sets required manual updating to stay effective, a process that could not keep pace with how quickly fraudsters adapted their tactics. The result was a dual problem: genuine fraud slipped through undetected, while a high rate of false positives consumed the fraud team's capacity on low-value cases.

The Solution

AND-E partnered with the Aioi R&D Lab–Oxford and specialist AI firm Mind Foundry to design and deploy a bespoke, continuously learning predictive ML model. The solution was trained on over 20 million unstructured documents — including handwritten notes, adjuster records, and historical claims data — giving it a feature-rich foundation that rule sets alone cannot replicate. At ingestion, the model assigns each new claim a real-time fraud score derived from features co-developed by AND-E's fraud specialists and Mind Foundry's data scientists, ensuring domain knowledge was embedded directly into the model architecture. Critically, the system learns autonomously from incoming data, eliminating the need for manual retraining. The solution was deployed in 2022 and has operated without human-initiated retraining for over two years, adapting continuously to emerging fraud patterns while maintaining explainability for human claims handlers.
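The two behaviours described above, scoring every claim in real time at ingestion and updating the model from confirmed outcomes without a manual retraining cycle, can be illustrated with a toy online learner. This is a minimal hypothetical sketch (a hand-rolled logistic regression with stochastic gradient updates), not AND-E's or Mind Foundry's actual architecture; the class and feature names are invented for illustration.

```python
import math

class OnlineFraudScorer:
    """Toy continuously learning scorer (hypothetical illustration).

    Each incoming claim gets a real-time fraud score in [0, 1]; each
    confirmed outcome triggers a single gradient update, so the model
    adapts on the fly instead of waiting for a batch retraining cycle.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per claim feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate for online updates

    def score(self, x):
        """Real-time fraud score for a claim's feature vector x."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))  # logistic squash to [0, 1]

    def learn(self, x, label):
        """One SGD step once the fraud team confirms label (1 = fraud)."""
        err = self.score(x) - label  # prediction error drives the update
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

In a pipeline shaped like the one described, `score` would run at claim ingestion to drive referral decisions, while `learn` would run whenever an investigation closes with a confirmed outcome; the production system would of course use far richer features derived from the unstructured document corpus.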

Results

Since deployment, AND-E has recorded substantial and sustained improvements across all key fraud metrics:

  • 120% improvement in overall fraud detection rate
  • 500% increase in automated referrals retained by the fraud department — meaning the model surfaces higher-quality cases rather than generating noise
  • 50% reduction in false positives, freeing the fraud team to close genuine cases faster
  • 2% saving on capped Third-Party indemnity spend in the past year

Beyond the numbers, the team works a smaller volume of higher-confidence cases, improving both speed and morale. The model has maintained its performance reliably across two years of live operation without manual intervention, validating the continuous learning architecture.

Key Takeaways

  • Static rule sets have a shelf life: fraudster behaviour evolves constantly; any detection model that cannot self-update will degrade in accuracy over time, making continuous learning a design requirement rather than a nice-to-have.
  • Training data quality and volume matter: access to 20+ million historical documents gave the model the breadth needed to surface subtle, multi-signal fraud patterns that rules alone would miss.
  • Domain expertise must be embedded at design time: joint feature engineering between fraud specialists and data scientists was central to the model's accuracy — neither group could have built this alone.
  • Explainability enables adoption: an AI that claims handlers can interrogate and understand drives collaboration rather than resistance.

Details

AI Technology: Predictive ML
Company Size: Enterprise
Quality: Verified
