
Scaling AI in Healthcare: Escaping Pilot Purgatory

Dec 9, 2025 7:21:13 AM


In a recent guest article for Healthcare IT Today, Martín Lewit of Nisum explored how healthcare can borrow lessons from retail to harness AI with confidence. This follow-up article from Nisum takes the conversation one step further: not just whether to adopt AI, but how to scale it beyond pilots.

Across the United States, hospitals are investing heavily in AI, from triage and imaging to operations and revenue cycle. Yet many of these initiatives never grow beyond small pilots. Promising models are tested in one unit or service line, then stall. The result is pilot purgatory: rising costs and limited impact.

If AI is going to become a real lever for quality, access, and efficiency, organizations need to focus less on building more pilots and more on building the conditions for scale. Other industries, especially retail, have already walked that path. Healthcare can reuse much of what has been learned.

 

What is pilot purgatory in healthcare AI?

Pilot purgatory describes the situation where AI solutions are repeatedly tested in small, controlled environments but never adopted at scale across the organization. Teams run proofs of concept and the presentations look promising, but months later the model is still used in only one department, or not at all.

The symptoms are familiar:

  • Many parallel pilots, but few production deployments

  • High experimentation costs, low enterprise impact

  • Growing skepticism among clinicians and executives

Escaping pilot purgatory requires understanding why AI initiatives stall and what is needed to move from experiment to everyday practice.


Why do healthcare AI pilots fail to scale?

Behind almost every stalled AI pilot is a familiar set of blockers.

  1. Fragmented data and legacy systems
    Data is scattered across EMRs, lab systems, PACS, and departmental tools that do not talk to each other. Models are trained on partial or static data, and it is hard to operationalize them because the underlying IT landscape is old, highly customized, and difficult to integrate with new services.
  2. No shared definition of success
    Many pilots are approved because the use case sounds interesting, not because it is clearly tied to a strategic priority. One team measures model accuracy, another looks at anecdotal clinician feedback, while executives ask whether it reduced readmissions or length of stay. Without shared metrics, it is very difficult to justify the investment required to scale.
  3. Workflow misalignment
    In a pilot, it is still acceptable for a data scientist to manually push scores to a spreadsheet or for clinicians to log into a separate dashboard. At scale, that is impossible. If AI is not embedded into the tools and processes staff already use, it will simply be ignored once the novelty wears off.
  4. Limited clinician trust and buy-in
    AI is still perceived by many as a black box. If clinicians do not feel involved in shaping the solution, or believe it will slow them down or increase risk, they will resist broader rollout, quietly but effectively. Adoption is a human issue as much as a technical one.

None of these challenges is unique to healthcare. Retail went through very similar growing pains.

 

What can healthcare learn from retail’s AI journey?

Retailers also started with isolated pilots: a recommendation engine in one channel, a demand forecasting model in one category, a pricing tool in one region. The turning point came when AI stopped being treated as an experiment and started being treated as an enterprise capability.

Three lessons are particularly relevant for hospitals and health systems.

Unify the data, then scale the intelligence
Leading retailers invested early in customer and product data platforms that aggregated information from online, in-store, and mobile channels. That unified view is what made personalization and predictive inventory truly powerful.

Healthcare has an equivalent opportunity: integrate clinical, operational, and patient-generated data into a strong, governed data foundation. It is not necessary to rip and replace every system, but a coherent data architecture is required so that AI can reliably access and use information.
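To make "reliably access and use information" concrete, here is a minimal sketch of pulling one patient's demographics and recent lab results from a FHIR API into a single record. It is an illustration under assumptions, not a reference implementation: the base URL and patient ID are placeholders, and a production pipeline would add authentication, consent handling, and a governed landing zone.

```python
# Minimal sketch: assembling a unified patient view from a FHIR API.
# The base URL and patient ID are hypothetical; real deployments add
# authentication, consent checks, and a governed data store.
import requests

FHIR_BASE = "https://fhir.example-hospital.org"   # hypothetical endpoint
PATIENT_ID = "12345"                              # hypothetical patient

def fetch(resource: str, params: dict | None = None) -> dict:
    """GET a FHIR resource or search bundle as JSON."""
    resp = requests.get(f"{FHIR_BASE}/{resource}", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Demographics from the Patient resource
patient = fetch(f"Patient/{PATIENT_ID}")

# Recent lab results via a standard FHIR Observation search
labs = fetch("Observation", {
    "patient": PATIENT_ID,
    "category": "laboratory",
    "_sort": "-date",
    "_count": 20,
})

# Collapse both feeds into one record a model (or analyst) can consume
unified_record = {
    "patient_id": PATIENT_ID,
    "name": patient.get("name", [{}])[0].get("family"),
    "birth_date": patient.get("birthDate"),
    "recent_labs": [
        {
            "code": entry["resource"].get("code", {}).get("text"),
            "value": entry["resource"].get("valueQuantity", {}).get("value"),
            "unit": entry["resource"].get("valueQuantity", {}).get("unit"),
            "issued": entry["resource"].get("issued"),
        }
        for entry in labs.get("entry", [])
    ],
}
print(unified_record)
```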

Tie AI to outcomes that matter to the business


“When AI clearly helps the organization hit its goals, it stops being innovation and becomes part of how the organization operates.”

In retail, AI initiatives are framed in terms of conversion, margin, loyalty, and customer lifetime value. Everyone understands the scoreboard. The same thinking can apply to healthcare: pick a small set of enterprise metrics such as access, throughput, quality, cost to serve, or clinician time freed, and require every AI project to move at least one of them.

When AI clearly helps the organization hit its goals, it stops being innovation and becomes part of how the organization operates.

Embed AI into everyday work, not as a sidecar
Retail AI stopped being “a dashboard” and became part of the checkout flow, the replenishment engine, and the marketing campaign tools. Staff no longer had to remember to go use AI, because it was embedded inside the systems they already used.

For healthcare, that could mean:

  • Risk scores visible directly inside the EMR during rounds
  • Triage suggestions in the nurse intake screen
  • Predictive staffing alerts built into workforce management tools

The core idea is simple: if using AI requires extra clicks, extra screens, or extra coordination, it will not scale.
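One pattern for putting a score inside the chart rather than in a separate dashboard is a CDS Hooks-style service: when a clinician opens the patient record, the EHR calls the service and renders whatever cards it returns. The sketch below assumes that pattern using Flask; the endpoint name, scoring function, and thresholds are illustrative placeholders, not a specific vendor integration.

```python
# Minimal sketch of a CDS Hooks-style service that returns a risk score
# card for the EHR to render on patient-view. The scoring function and
# thresholds are placeholders for a real deployed model.
from flask import Flask, request, jsonify

app = Flask(__name__)

def readmission_risk(patient_id: str) -> float:
    """Placeholder: call the deployed model's scoring endpoint here."""
    return 0.42  # hypothetical score

@app.post("/cds-services/readmission-risk")
def patient_view_hook():
    req = request.get_json(force=True)
    patient_id = req.get("context", {}).get("patientId", "unknown")
    score = readmission_risk(patient_id)

    # CDS Hooks responses are a list of "cards"; the EHR decides how to show them.
    card = {
        "summary": f"30-day readmission risk: {score:.0%}",
        "indicator": "warning" if score >= 0.4 else "info",
        "detail": "Model-estimated risk based on recent encounters and labs.",
        "source": {"label": "Readmission risk model (illustrative)"},
    }
    return jsonify({"cards": [card]})

if __name__ == "__main__":
    app.run(port=8080)
```

The point of the pattern is that the clinician never leaves the chart: the score arrives in the workflow they already have open, with no extra logins or screens.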

 

How can healthcare leaders operationalize AI at scale?

Translating these lessons into action, healthcare leaders can use a simple framework to move from experiments to enterprise AI.

  1. Build a healthcare-grade data foundation
    Prioritize interoperability, data quality, and security. Define a clear strategy for how AI solutions will access the data they need, in near real time, and with proper governance. This work is not glamorous, but without it, every pilot will remain a one-off.
  2. Start from strategic problems, not from algorithms
    Begin with a question such as: “What are the top five problems that must be solved in the next 24 months?” Examples might include ED crowding, capacity management, chronic care outreach, revenue leakage, or clinician burnout. Select AI use cases that directly address those issues and define in advance how success will be measured.
  3. Design for workflow from day one
    When a new AI initiative is scoped, the key question is not only what the model will predict, but also who will act on that insight, at what point in the day it will appear, and what that person will stop doing because the AI now handles part of the work. Clinicians, nurses, and operational leaders need to participate in that design. This is the only way to ensure adoption later.
  4. Treat change management as a first-class workstream
    Communication, training, and feedback loops cannot be afterthoughts. Champions in each unit should be identified and equipped with clear messages and support. Mechanisms for staff to report issues or improvements need to be created. Both the product and the workflow should be iterated based on real usage, not only on lab testing.
  5. Put governance and ethics around the whole lifecycle
    As soon as an initiative moves beyond a pilot, questions about bias, safety, accountability, and regulatory compliance become central. Clear roles for validating, monitoring, and updating AI models are required, and clinicians should understand when and how they can question or override AI outputs.
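To make the monitoring part of that last point tangible, the sketch below compares a model's recent discrimination and score drift against agreed thresholds and flags it for governance review when either degrades. The metrics, thresholds, and example numbers are assumptions for illustration only.

```python
# Minimal sketch of post-deployment model monitoring: compare recent
# performance and score drift against agreed thresholds and flag the
# model for governance review. Thresholds and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class MonitoringThresholds:
    min_auc: float = 0.70          # minimum acceptable discrimination
    max_mean_shift: float = 0.10   # max allowed shift in mean predicted risk

def needs_review(recent_auc: float,
                 baseline_mean_score: float,
                 recent_mean_score: float,
                 t: MonitoringThresholds | None = None) -> list[str]:
    """Return the reasons this model should go back to the governance board."""
    t = t or MonitoringThresholds()
    reasons = []
    if recent_auc < t.min_auc:
        reasons.append(f"AUC dropped to {recent_auc:.2f} (< {t.min_auc:.2f})")
    if abs(recent_mean_score - baseline_mean_score) > t.max_mean_shift:
        reasons.append(
            f"Mean predicted risk shifted from {baseline_mean_score:.2f} "
            f"to {recent_mean_score:.2f}"
        )
    return reasons

# Example run with hypothetical numbers from last month's scored encounters
for reason in needs_review(recent_auc=0.66,
                           baseline_mean_score=0.18,
                           recent_mean_score=0.31):
    print("REVIEW NEEDED:", reason)
```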


“If using AI requires extra clicks, extra screens, or extra coordination, it will not scale.”

 

Quick answers: common questions about scaling AI in healthcare

Why do so many healthcare AI pilots fail to scale?
Most pilots fail to scale due to fragmented data, legacy systems, lack of shared success metrics, misaligned workflows, and limited clinician trust. If these foundations are not addressed, even strong models remain stuck in isolated experiments.

How can hospitals move from AI experiments to enterprise solutions?
Hospitals can move beyond experiments by starting from strategic business and clinical problems, building a robust data foundation, embedding AI into clinical and operational workflows, investing in change management, and establishing strong governance for safety and ethics.

What do healthcare organizations need to operationalize AI safely and effectively?
They require interoperable data and systems, clear and agreed-upon metrics, inclusive design involving clinical stakeholders, continuous monitoring of performance and bias, and well-defined processes for oversight and escalation when AI outputs are questioned.

What role can a partner like Nisum play in scaling AI?
Nisum brings cross-industry experience from complex environments such as retail, where it has helped organizations break down data silos, modernize legacy systems, and scale AI-driven capabilities from pilot to full operation. That experience can help healthcare organizations accelerate their own AI journey with the right balance of innovation, safety, and trust.

 

Where a partner like Nisum fits

Making all of this work requires more than a strong model. It requires aligning data, systems, workflows, and people across complex organizations.

This is the type of challenge that Nisum has tackled for years in other highly demanding industries, especially retail: breaking data silos, modernizing legacy environments, and scaling AI-driven capabilities from pilot to full operation, always with a human-centric lens.

For healthcare leaders, that cross-industry experience can be an advantage. It provides proven patterns for transitioning from “an algorithm was tested” to “this is how the system runs now,” striking the right balance of innovation, safety, and trust.

The opportunity is clear: hospitals that escape pilot purgatory and truly operationalize AI will not just run a few clever experiments. They will redesign how care is delivered, resourced, and experienced. The question is no longer whether AI will be part of healthcare’s future, but how quickly each organization can make it work at scale.

Nisum

Founded in California in 2000, Nisum is a digital commerce company focused on strategic IT initiatives using integrated solutions that deliver real and measurable growth.
