For most healthcare leaders I talk to today, AI isn’t a hypothetical anymore. It’s embedded in clinical tools, creeping into workflows, showing up in places we didn’t explicitly plan for. And almost every conversation starts the same way.
It’s not, “Should we use AI?” It’s, “How did AI get here so fast?”
I’ve been in this industry long enough to recognize when something feels different. We’ve lived through EHR rollouts, cloud migrations, cybersecurity wake-up calls, and waves of digital transformation that promised more than they delivered. But AI has a different kind of momentum. It doesn’t wait for steering committees, respect budget cycles, or slow down just because governance hasn’t caught up.
This is what healthcare CIOs are wrestling with in 2026.
The real tension isn’t innovation, it’s control
From the outside, it might look like healthcare is struggling to adopt AI. From the inside, it feels very different. AI is already being used: sometimes officially, sometimes quietly, sometimes in ways leaders didn’t intend. The tension isn’t whether AI can help. Most leaders I know believe it can. The tension is control:
Where is the data going?
Why does an AI-driven app work on one workstation but not another?
And how do we slow things down just enough to ensure safety without stopping innovation entirely?
Those questions sound familiar because they aren’t really AI questions. They’re governance questions. And they’re the same ones healthcare leaders have been asking for years, just with higher stakes now.
Why AI breaks traditional governance models
Healthcare governance was built for a different era. We designed it around systems that were relatively static—applications, users, devices, and networks that changed on predictable timelines.
AI doesn’t behave that way. It evolves quickly. It shows up inside other tools. It can act, not just advise. And when it isn’t well integrated into clinical workflows, it creates friction: extra logins, inconsistent performance, and workarounds that clinicians adopt just to get through the day.
When that happens, leaders don’t lose control because they ignored governance. They lose control because governance was never designed to account for something this fluid.
Treating AI as “special” is the fastest way to lose oversight
One of the biggest mistakes I see is treating AI as something separate: its own initiative, its own committee, its own approval process, a side program running parallel to everything else. That approach feels logical at first. AI is new. It feels risky. It deserves attention. But separating AI from existing governance structures often creates more risk, not less.
When AI lives outside the systems that already manage access, identity, and workflows, it becomes harder to monitor and explain. It encourages shadow usage, fragments accountability, and forces clinicians and IT teams to navigate yet another set of rules in an environment that’s already overloaded.
The reality is that AI is no longer a separate technology category. It’s part of the clinical environment just like identity, access, devices, and workflows.
One of the reasons I’m encouraged right now is that this conversation isn’t happening in isolation. The same questions CIOs are asking inside health systems are now being asked at a national level.
Earlier this year, Citrix submitted a response to the HHS Health Sector AI Request for Information. At its core, our position was straightforward: the biggest barrier to safe, scalable AI in healthcare isn’t the model – it’s the governance and delivery layers that surround it.
In our response, we emphasized that AI should be integrated within existing enterprise controls rather than managed as a separate technology with new policies. By embedding AI into frameworks like Zero Trust access, role-based governance, unified telemetry, and auditable workflows, healthcare organizations can advance innovation while safeguarding trust, accountability, and patient safety. This approach enables scalable progress without introducing unnecessary risk.
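To make the idea of embedding AI into existing role-based, auditable controls concrete, here is a minimal sketch of what role-scoped model access with an audit trail can look like. Everything in it is a hypothetical illustration under assumed names (the `ROLE_POLICIES` table, the model identifiers, the `authorize` helper), not a Citrix API; a real deployment would pull roles from the organization’s existing identity provider rather than a hard-coded dictionary.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical role-to-model policy table. In practice this would come
# from the existing identity and access management system, not code.
ROLE_POLICIES = {
    "clinician": {"clinical-summarizer"},
    "billing": {"coding-assistant"},
    "it_admin": {"clinical-summarizer", "coding-assistant"},
}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, model: str, allowed: bool) -> None:
        # Every decision is logged, so access is explainable after the fact.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user, "role": role, "model": model, "allowed": allowed,
        })

def authorize(user: str, role: str, model: str, audit: AuditLog) -> bool:
    """Allow a model call only if the user's role is scoped to that model."""
    allowed = model in ROLE_POLICIES.get(role, set())
    audit.record(user, role, model, allowed)
    return allowed

audit = AuditLog()
print(authorize("dr_lee", "clinician", "clinical-summarizer", audit))  # True
print(authorize("dr_lee", "clinician", "coding-assistant", audit))     # False
```

The point of the sketch is that nothing here is AI-specific: it is the same least-privilege, log-every-decision pattern organizations already apply to other enterprise systems, extended to model access.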
What effective AI governance really looks like
The healthcare leaders who seem most confident right now aren’t the ones moving fastest. They’re the ones integrating AI into what already works.
They focus on three things:
- Experience: If AI makes work harder, clinicians will avoid it. Governance has to ensure AI fits naturally into existing workflows, not alongside them.
- Security: AI needs the same managed access, least-privilege controls, and auditability as any other enterprise system – no exceptions.
- Operations: Governance only works if IT teams can actually enforce it. Automation, visibility, and consistency matter more than policy documents.
None of this requires reinventing governance. It requires extending it.
Where policy meets enforcement
A lot of health systems are still trying to govern AI through policy alone — acceptable use documents, vendor review checklists, steering committee approvals. Those things have their place, but policy without enforcement isn’t governance. It’s intention.
What’s actually making a difference in organizations that have moved beyond pilots is embedding control at the delivery layer, where AI traffic flows between clinicians, applications, and models. For Citrix customers, this is where NetScaler AI Gateway becomes relevant.
In a healthcare environment, that translates into practical control. Teams can be scoped to specific models based on role and need. Token-based rate limiting prevents a single department from driving uncontrolled cost or degrading performance for others. And sensitive data can be protected in real time — with LLM redaction automatically removing PHI from prompts before they reach a model, or from responses before they reach a user.
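The two mechanisms above, token-based rate limiting and prompt redaction, can be sketched in a few lines. This is an illustrative toy, not NetScaler AI Gateway’s implementation: the PHI regex patterns, the `TokenBucket` class, and the budget numbers are all assumptions for demonstration, and production redaction relies on far richer detection than two regular expressions.

```python
import re
import time

# Toy PHI patterns for illustration only; real redaction engines also
# detect names, dates of birth, addresses, account numbers, and more.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record number
]

def redact_phi(text: str) -> str:
    """Strip recognizable PHI from a prompt before it reaches the model."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

class TokenBucket:
    """Per-department token budget, refilled each minute; requests are
    denied once the budget is spent, so one team can't starve the others."""
    def __init__(self, tokens_per_minute: int):
        self.capacity = tokens_per_minute
        self.remaining = tokens_per_minute
        self.window_start = time.monotonic()

    def allow(self, tokens: int) -> bool:
        if time.monotonic() - self.window_start >= 60:
            self.remaining = self.capacity
            self.window_start = time.monotonic()
        if tokens <= self.remaining:
            self.remaining -= tokens
            return True
        return False

bucket = TokenBucket(tokens_per_minute=1000)
prompt = "Summarize the chart for MRN: 48812, SSN 123-45-6789"
print(redact_phi(prompt))            # PHI replaced before the model sees it
print(bucket.allow(400), bucket.allow(700))  # second call exceeds the budget
```

What matters is where these checks run: in the data path between clinician and model, so the policy is enforced on every request rather than relying on users to follow a document.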
These are just a few examples of what’s possible today. As the space evolves, this approach extends further through integrations with best-of-breed LLM security solutions, enabling a layered model for AI governance that adapts alongside both emerging threats and new capabilities.
This is what governance looks like when it actually works — enforced in the data path, not just written into policy.
Why this matters right now
Healthcare leaders are carrying more responsibility than ever. AI can help, but only if it’s deployed in a way that strengthens trust instead of eroding it. This moment isn’t about slowing innovation – it’s about anchoring it.
The organizations that succeed won’t be the ones with the most AI pilots. They’ll be the ones that made AI part of their operating model and aligned with the realities of care delivery. That’s the work ahead of us. And it’s work worth doing right.
from Citrix Blogs https://ift.tt/gBLZMDP