
Aviva

Aviva deploys scalable MLOps platform on Amazon SageMaker, cutting infrastructure costs 90% and ML deployment time from months to weeks

Infrastructure Cost Reduction: 90%
ML Deployment Speed: Weeks instead of months
Data Scientist Time on Operational Tasks (before): Over 50% (since reduced)

The Challenge

Aviva, one of the world's oldest insurers with operations across 16 countries and over 33 million customers, faced a structural bottleneck in scaling ML across its business. Despite running more than 70 ML use cases, models were developed through a graphical UI-driven tool and deployed manually — a process ill-suited to the demands of a carrier processing approximately 400,000 claims annually and settling around £3 billion. Data scientists spent more than 50% of their time on operational overhead rather than model development. Monitoring model performance in production was inconsistent, and the absence of standardized pipelines meant each deployment was effectively a one-off effort, blocking the scale of automation Aviva needed.

The Solution

Aviva partnered with AWS Professional Services to replace its manual ML workflow with a fully serverless MLOps platform built on the AWS Enterprise MLOps Framework and Amazon SageMaker. The platform enforces a three-account structure — development, staging, and production — with CI/CD pipelines promoting models through each environment consistently. SageMaker Pipelines orchestrates data processing, training, Bayesian hyperparameter tuning across roughly 100 model variants, evaluation, and registration into the SageMaker Model Registry. Real-time inference endpoints connect to Aviva's internal claims management systems via API Gateway and AWS Step Functions. The Remedy use case — 14 predictive ML models classifying car insurance claims as total loss or repairable — was industrialized first, intentionally designed as a reusable blueprint to accelerate every subsequent deployment.
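
To make the pipeline structure concrete, here is a minimal sketch of how such a workflow could be expressed with the SageMaker Python SDK. The bucket names, preprocessing script, algorithm choice (built-in XGBoost), and parameter ranges are illustrative assumptions, not Aviva's actual configuration.

```python
# Illustrative SageMaker Pipelines definition: processing -> Bayesian tuning ->
# model registration. All S3 paths, scripts, and names are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingInput, ProcessingOutput, ScriptProcessor
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.steps import ProcessingStep, TuningStep

session = sagemaker.Session()
role = sagemaker.get_execution_role()
region = session.boto_region_name
input_data = ParameterString(name="InputDataUrl",
                             default_value="s3://example-bucket/claims/raw/")

# 1. Feature engineering for the claims dataset.
processor = ScriptProcessor(
    image_uri=sagemaker.image_uris.retrieve("sklearn", region, version="1.2-1",
                                            instance_type="ml.m5.xlarge"),
    command=["python3"],
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
)
process_step = ProcessingStep(
    name="PrepareClaimsFeatures",
    processor=processor,
    inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",  # hypothetical feature-engineering script
)

# 2. Training with Bayesian hyperparameter tuning over ~100 candidate variants.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", region, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    output_path="s3://example-bucket/claims/models/",
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",
    max_jobs=100,
    max_parallel_jobs=10,
)
tuning_step = TuningStep(
    name="TuneClaimClassifier",
    tuner=tuner,
    inputs={"train": TrainingInput(
        s3_data=process_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
        content_type="text/csv",
    )},
)

# 3. Register the best model so CI/CD can promote it to staging and production.
register_step = RegisterModel(
    name="RegisterClaimClassifier",
    estimator=estimator,
    model_data=tuning_step.get_top_model_s3_uri(top_k=0, s3_bucket="example-bucket"),
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.large"],
    transform_instances=["ml.m5.large"],
    model_package_group_name="remedy-claim-classifier",
)

pipeline = Pipeline(
    name="RemedyClaimsPipeline",
    parameters=[input_data],
    steps=[process_step, tuning_step, register_step],
)
pipeline.upsert(role_arn=role)  # create or update; executions are then triggered from CI/CD
```

In the three-account setup described above, a pipeline like this would run in the development account, with the registered model package then promoted through staging to production by the CI/CD tooling.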

Results

The platform delivered measurable improvement across cost, speed, and data scientist productivity:

  • 90% reduction in infrastructure costs by replacing on-premises ML infrastructure with a serverless, pay-as-you-go model
  • Deployment time cut from months to weeks, enabling Aviva to industrialize ML use cases at a pace that was previously out of reach
  • Data scientist time on operational tasks reduced by more than half, shifting capacity toward model innovation

Beyond the numbers, the Remedy use case validated the entire platform end-to-end — demonstrating that a complex, multi-model workflow (14 models, real-time inference, external data integration) could be industrialized with consistent, repeatable processes now available to every future use case.

Key Takeaways

  • Pilot with your most complex use case first. Aviva chose Remedy — a 14-model, real-time workflow — as the platform's proving ground, ensuring the blueprint could handle production complexity before broader rollout.
  • Serverless architecture changes the economics of ML at scale. Moving from on-premises to pay-as-you-go eliminated idle capacity costs and unlocked a 90% infrastructure cost reduction.
  • Standardized CI/CD pipelines are the multiplier. Once templates are in place, each new use case inherits the deployment, monitoring, and governance infrastructure automatically (see the sketch after this list).
  • Measure data scientist time on non-model work. If more than half their time is operational, the MLOps foundation — not the models — is the bottleneck to fix first.
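
As a purely conceptual illustration of that template effect (the names and structure here are hypothetical, not Aviva's framework), onboarding a new use case can reduce to supplying a small configuration object to a shared pipeline factory:

```python
# Hypothetical sketch of a shared pipeline "template": each use case supplies
# only its own data location, training image, and registry group, and inherits
# the same processing -> tuning -> registration layout defined once.
from dataclasses import dataclass

@dataclass
class UseCaseConfig:
    name: str                  # e.g. "remedy-total-loss"
    input_data_s3: str         # raw data location for this use case
    training_image: str        # container with the use case's algorithm
    model_package_group: str   # registry group that CI/CD promotes from

def build_pipeline_spec(cfg: UseCaseConfig) -> dict:
    """Assemble the standard step layout for one use case (structure only)."""
    return {
        "pipeline_name": f"{cfg.name}-pipeline",
        "steps": [
            {"type": "processing", "input": cfg.input_data_s3},
            {"type": "tuning", "image": cfg.training_image, "strategy": "Bayesian"},
            {"type": "register", "group": cfg.model_package_group},
        ],
    }

# A new use case becomes a configuration change rather than a bespoke build.
spec = build_pipeline_spec(UseCaseConfig(
    name="remedy-total-loss",
    input_data_s3="s3://example-bucket/claims/raw/",
    training_image="<xgboost-image-uri>",
    model_package_group="remedy-claim-classifier",
))
```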


Details

AI Technology: Predictive ML
Company Size: Enterprise
Company: Aviva
Quality: Verified
