Insurance Isn’t Ready for AI. Here’s Why That Matters.

Every conversation about insurance technology right now seems to lead to AI. Automate underwriting decisions. Personalize customer experiences at scale. Detect fraud before it happens. Use predictive analytics to optimize pricing. The promise is compelling, the use cases are legitimate, and the potential ROI looks significant on paper.

But here’s what I keep encountering across carriers: most insurance companies can’t confidently answer basic operational questions in real time. How many active policies do we have right now? What’s our current exposure by line of business? Which policies are coming up for renewal in the next 30 days? What’s the average time from quote to bind across different product lines?

These aren’t exotic queries. They’re fundamental business questions. And if your policy data lives in 4 different systems, is maintained by 5 different teams, and runs on 3 different definitions of “premium,” you’re not ready for AI. You’re not even ready for meaningful automation of existing processes.

The Foundation Problem Nobody Wants to Address

In my work with carriers over the past several years, I’ve found that the blockers to AI adoption are rarely budget constraints or a lack of executive interest. Leadership understands the competitive pressure. They see what’s happening with insurtech startups and digital-first carriers. They know they need to move.

The real blockers are infrastructure issues that have been accumulating for decades: legacy core systems that can’t be easily modified without risking operational stability, data governance frameworks that exist only in policy documents and not in practice, and delivery models that take 6 to 9 months to ship what should be routine changes.

I’ve watched underwriting teams try to apply machine learning models to flat files exported nightly from policy administration systems built in the 1990s. The data is extracted, passes through multiple transformations in various middleware layers, lands in a data warehouse where it’s transformed again, and only then becomes available for analytics. By the time the ML model sees it, the data is stale, the transformations have introduced inconsistencies, and nobody can fully trace the lineage back to the source.
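To make the lineage gap concrete, here’s a minimal sketch in Python of the provenance metadata these pipelines typically never carry: each stage stamps the record with what touched it and when. The stage names and fields are illustrative assumptions, not any particular carrier’s schema.

```python
from datetime import datetime, timezone

# Illustrative only: wrap each record in a lineage trail so any value can
# be traced back through every transformation to its source system.
def with_lineage(record: dict, source_system: str) -> dict:
    stamp = {"stage": source_system, "at": datetime.now(timezone.utc).isoformat()}
    return {"data": record, "lineage": [stamp]}

def transform(wrapped: dict, stage: str, fn) -> dict:
    # Apply one transformation and record which stage touched the data.
    stamp = {"stage": stage, "at": datetime.now(timezone.utc).isoformat()}
    return {"data": fn(wrapped["data"]), "lineage": wrapped["lineage"] + [stamp]}

# A record that has passed through the nightly export and two downstream
# hops arrives with its whole history, not as an anonymous row.
row = with_lineage({"policy_id": "P-1001", "premium": "1200.00"},
                   "policy_admin_nightly_export")
row = transform(row, "middleware_normalize",
                lambda d: {**d, "premium": float(d["premium"])})
row = transform(row, "warehouse_load", lambda d: d)
print([step["stage"] for step in row["lineage"]])
# ['policy_admin_nightly_export', 'middleware_normalize', 'warehouse_load']
```

With a trail like this, “where did this number come from?” becomes a lookup instead of an archaeology project.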

The metaphor is apt: you’re trying to run a high-performance vehicle on a gravel road. The technology might be sophisticated, the algorithms might be cutting-edge, but the foundation can’t support what you’re trying to do. Every pothole in your data infrastructure creates friction. Every inconsistency in your data definitions introduces an error. Every manual handoff in your process adds delay and risk.

Why This Pattern Keeps Repeating

The pattern I see repeatedly is that carriers treat AI as a technology problem to be solved by the innovation team or the data science group. They spin up a center of excellence, hire some talented ML engineers, pick a promising use case, and start building.

Six months in, they discover that the data they need doesn’t exist in the format they need. Or it exists but with significant quality issues that require manual cleanup. Or the quality is fine, but the data is trapped in a legacy system with no API access. Or they can access it, but the business definitions don’t align across systems, so they’re comparing apples to oranges without realizing it.

Then the project scope expands. Now they need to build data pipelines. They need to establish data quality rules. They need to get multiple business units to agree on standard definitions. They need to negotiate API access with the team that owns the legacy system, who are already underwater with other priorities.

What started as an AI initiative has become a data infrastructure project. And suddenly, the timeline has stretched from 6 months to 18 months, the budget has tripled, and leadership is questioning whether this was really the right place to start.

What Actually Needs to Happen

If insurers want to compete effectively over the next 5 years, they need to stop launching disconnected AI pilots and start systematically rebuilding their operational backbone. That doesn’t mean ripping everything out and starting over. It means creating an intentional modernization path that addresses foundational issues first.

That means establishing consistent data definitions across business units and enforcing them through technical controls, not just documentation. It means modernizing core systems or at least decoupling them enough to enable innovation at the edges without destabilizing the operational core. It means building delivery teams that understand both insurance operations and modern software practices, because you need both perspectives to navigate the complexity.
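What a technical control looks like in practice can be small. Here’s a minimal sketch, assuming a Python-based pipeline, of a shared contract for “premium” that rejects violations at the boundary rather than documenting the rule and hoping. The field names and rules are illustrative, not a real carrier’s data model.

```python
from dataclasses import dataclass
from decimal import Decimal

# Hypothetical shared contract: one definition of "premium" for every
# system that produces or consumes policy records.
@dataclass(frozen=True)
class PolicyPremium:
    policy_id: str
    written_premium: Decimal   # full-term premium at issuance
    earned_premium: Decimal    # portion earned to date
    currency: str = "USD"

    def __post_init__(self) -> None:
        # The technical control: violations are rejected at the boundary,
        # not noted in a governance document nobody reads.
        if self.written_premium < 0:
            raise ValueError(f"{self.policy_id}: written premium is negative")
        if self.earned_premium > self.written_premium:
            raise ValueError(f"{self.policy_id}: earned premium exceeds written premium")

# A valid record passes; a record that breaks the shared definition never
# makes it into the pipeline.
PolicyPremium("P-1001", Decimal("1200.00"), Decimal("300.00"))
```

The point isn’t these specific rules. It’s that the definition lives in one place, in code, and every producer and consumer goes through it.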

It also means rethinking data architecture from the ground up. Most carriers have grown their data landscape organically over decades, adding new systems and data stores as needed without a coherent overall strategy. The result is a tangled web of point-to-point integrations, duplicated data with no clear system of record, and transformation logic scattered across multiple layers that nobody fully understands.

Creating a modern data architecture means establishing clear patterns for how data flows through the organization. It means building observable pipelines where you can see what’s happening at every stage. It means implementing automated quality checks that catch issues before they propagate downstream. It means creating feedback loops so when problems are discovered, they can be traced back to the source and fixed permanently, not just patched in the moment.
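As one concrete illustration, here’s a minimal sketch of a stage gate that runs named quality checks against a batch, logs every outcome so the pipeline stays observable, and holds the batch rather than publish bad data downstream. The checks and names are hypothetical, chosen only to show the pattern.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

# Sketch of a stage gate: run named checks against a batch, log every
# result, and refuse to publish downstream if any check fails.
def gate(stage: str, rows: list[dict],
         checks: dict[str, Callable[[list[dict]], bool]]) -> list[dict]:
    failed = [name for name, check in checks.items() if not check(rows)]
    for name in failed:
        log.error("stage=%s check=%s status=failed rows=%d", stage, name, len(rows))
    if failed:
        raise RuntimeError(f"{stage}: {len(failed)} quality check(s) failed; batch held")
    log.info("stage=%s status=passed rows=%d", stage, len(rows))
    return rows

rows = [{"policy_id": "P-1001", "premium": 1200.0}]
gate("warehouse_load", rows, {
    "policy_id_present": lambda rs: all(r.get("policy_id") for r in rs),
    "premium_non_negative": lambda rs: all(r.get("premium", 0) >= 0 for r in rs),
})
```

Failing closed here is a deliberate design choice: a held batch is an inconvenience, while a bad batch that propagates becomes next quarter’s reconciliation project.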

The Competitive Stakes

The AI payoff is real. Carriers who get this right will be able to underwrite more accurately, price more competitively, detect fraud more effectively, and serve customers more efficiently than their competitors. The advantages compound over time.

But only if your infrastructure can actually support it. Sophisticated algorithms trained on unreliable data don’t produce insights. They produce expensive mistakes at scale. And in insurance, where you’re making binding commitments based on risk assessments, those mistakes have real financial consequences.

I’ve seen carriers spend millions on AI initiatives that never made it past the pilot stage because the foundational work wasn’t done first. That’s not a technology failure. It’s a strategy failure. It’s treating AI as something you can bolt onto an existing architecture instead of recognizing it as a capability that requires specific foundational investments.

Where to Start

This isn’t about choosing between innovation and stability. It’s about sequencing the work correctly. Before you launch another AI pilot, ask yourself:

Do we have reliable, consistent data definitions across our core business domains? Can we trace data lineage from source systems through every transformation to final consumption? Do we have automated quality checks that give us confidence in our data? Can our delivery teams make changes to data pipelines without the 3-month coordination overhead?

If the answer to any of those questions is no, that’s where you start. Not because it’s exciting. Not because it makes for good board presentations. But because it’s the foundation that makes everything else possible.

Fix the foundation first. Establish data discipline. Build the infrastructure that can support real-time decisioning and machine learning at scale. Then accelerate with AI.

The carriers who understand this and commit to doing the foundational work will have a material competitive advantage. Those who keep chasing AI without fixing the plumbing will burn budget on pilots that never ship and wonder why their transformation isn’t delivering results.