A senior emergency physician finishing a 12-hour shift used to sit in a side room for an hour afterwards, catching up on charting. The patients went home. The notes did not. Multiply that picture across every public ED in New Zealand, every shift, and the cost of clinical documentation reveals itself: hours of senior clinical capacity each day, traded against typing rather than care. The Heidi AI scribe rollout has redrawn that equation in a way few NZ public-sector technology projects have managed.
The numbers are striking. Health New Zealand reports clinical documentation time per patient falling from about 17 minutes to roughly 4 once Heidi is in use. After-shift administrative work has dropped by 81%. On average, a clinician on a Heidi-assisted shift now sees one additional patient. Those gains are why Health NZ has procured more than 1,000 additional licences specifically for mental health crisis teams, where the documentation burden is among the heaviest in the system. In the same period, Stats NZ has retired the traditional five-yearly census in favour of AI-augmented administrative data. Same tooling. Very different outcomes. The contrast is worth understanding.
What is the Heidi Effect, and how is it changing NZ healthcare?
The Heidi Effect describes the operational shift in NZ public healthcare since the AI scribe rolled out across every public emergency department. Documentation time per patient has fallen from around 17 minutes to roughly 4, after-shift admin work has dropped by 81%, and clinicians are on average seeing one additional patient per shift. The clinical work itself is unchanged. The typing has moved.
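The scale of those per-patient minutes is easier to see when rolled up to a shift. A minimal back-of-envelope sketch, using the reported per-patient figures; the patients-per-shift number is an illustrative assumption, not a reported figure:

```python
# Back-of-envelope calculation of documentation time returned per shift.
# MINUTES_BEFORE and MINUTES_AFTER are the reported per-patient figures;
# PATIENTS_PER_SHIFT is an assumption for illustration only.

MINUTES_BEFORE = 17      # reported documentation time per patient, pre-Heidi
MINUTES_AFTER = 4        # reported documentation time per patient, with Heidi
PATIENTS_PER_SHIFT = 10  # assumed workload, not a reported figure

saved_per_patient = MINUTES_BEFORE - MINUTES_AFTER        # 13 minutes
saved_per_shift = saved_per_patient * PATIENTS_PER_SHIFT  # 130 minutes

print(f"Time returned per shift: {saved_per_shift} min "
      f"(~{saved_per_shift / 60:.1f} hours)")
```

Even at a conservative assumed caseload, the arithmetic lands at roughly two hours of senior clinician time per shift, which is consistent with the capacity for one additional patient.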
What makes the Heidi rollout distinctive is not the technology, which sits in a now-crowded market of AI medical scribes globally. It is the way it has been deployed. Heidi sits inside a closed clinical context, processes consultations into structured drafts, and presents those drafts back to the clinician for review and sign-off. The clinician remains the author of the record. The AI does the secretarial work. That clean line between machine output and clinical responsibility is the reason regulators and college bodies have not pushed back hard against the deployment.
The downstream effect on the system is more interesting than the per-shift gain. NZ public healthcare has been chronically constrained on senior clinical capacity for years, with workforce shortages stacked on top of unrelenting demand growth. Adding capacity through hiring is slow, expensive, and limited by a pipeline that takes a decade to refill. Returning an hour or more of senior clinician time per shift, achieved through software rather than recruitment, is a structural lever in a way most policy interventions are not. It has not solved the workforce problem. It has bought the system room to breathe while the longer-term workforce work continues.
Why did Health NZ procure 1,000+ Heidi licences for mental health teams?
Health NZ extended Heidi from emergency departments into mental health crisis teams because mental health documentation is even heavier and more time-consuming than ED documentation. A mental health crisis team can spend a substantial proportion of a shift on note-writing, formulation, and risk documentation, and the same scribe-assisted workflow applies almost directly to that work.
The decision is also a quiet endorsement of the underlying governance work. Mental health records are among the most sensitive clinical documents the public system produces. Any AI deployed against those records has to clear a higher bar on privacy, retention, and access than a typical hospital tool, and it has to do so in a context where errors of omission can have serious consequences. Health NZ rolling Heidi forward into mental health is not just a productivity story. It is a signal that the controls put in place during the ED rollout were strong enough to support a more sensitive use case.
For other NZ healthcare organisations, including primary care, NGOs, and aged-care providers, the implication is direct. Heidi-style scribes are available to them today, and the operational case is compelling. The harder work is the governance scaffolding underneath: clear policies, defined sign-off chains, retention rules, and audit logs. Organisations that try to lift the productivity gain without doing that work will find regulators, professional bodies, and clinicians pushing back, often correctly. Our services around AI strategy and governance cover the scaffolding work that makes a deployment of this shape land cleanly.
What does AI deployment in NZ public health teach about governance?
The Heidi rollout teaches that AI in NZ public health works when the governance work happens before, not after, the deployment. Privacy and security reviews are completed ahead of go-live. Clinical sign-off chains are clearly drawn. Audit logs capture every AI suggestion and every clinician decision. Accuracy and bias are reviewed on a defined cadence rather than only when something goes wrong.
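The audit-log control in that list is worth making concrete. A minimal sketch of the kind of record a scribe deployment could keep for every AI suggestion and clinician decision; the schema, field names, and helper function here are hypothetical illustrations, not Heidi's actual implementation:

```python
# Hypothetical audit-record schema for an AI scribe deployment.
# Every AI draft and the clinician's decision on it gets one entry.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import hashlib

class Decision(Enum):
    ACCEPTED = "accepted"  # draft signed off unchanged
    EDITED = "edited"      # draft amended before sign-off
    REJECTED = "rejected"  # draft discarded, note written manually

@dataclass
class AuditEntry:
    clinician_id: str
    draft_sha256: str      # hash of the AI draft, not the draft itself
    decision: Decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(log: list, clinician_id: str, draft: str,
                 decision: Decision) -> AuditEntry:
    """Append a record of one AI draft and the clinician's decision on it."""
    entry = AuditEntry(
        clinician_id=clinician_id,
        draft_sha256=hashlib.sha256(draft.encode()).hexdigest(),
        decision=decision,
    )
    log.append(entry)
    return entry

# Usage: each draft is logged alongside the clinician's decision,
# so review cadence and incident response have a trail to work from.
audit_log: list = []
log_decision(audit_log, "dr-0042", "Sample draft note text", Decision.EDITED)
```

Storing a hash rather than the draft itself is one way to keep the audit trail useful without duplicating sensitive clinical content into a second store; the right choice depends on the organisation's retention and access policies.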

That order matters. Across the deployments we have audited, the difference between AI projects that survive their first incident and ones that do not lies almost entirely in whether the governance was retrofitted or designed in. Governance retrofitted under pressure tends to be performative; it satisfies a short-term audit but does not change behaviour. Governance designed in shapes how clinicians use the tool, where they draw the line between machine output and clinical responsibility, and how the organisation responds when the model is wrong. NZ public health, working in a system that already takes clinical record-keeping seriously, was a strong starting point for that design discipline. Other parts of the public sector have not all been in the same position.
Where is AI going wrong in the broader NZ public sector?
The clearest example of AI going wrong in the broader NZ public sector is the decision by Stats NZ to retire the traditional five-yearly census in favour of administrative data summarised by AI. The financial logic is real, with a projected $400 million saved. The methodological cost is also real, and economists have publicly warned that the substitution risks stripping confidence intervals and coverage caveats from national reporting.
The shape of the problem is structural, not technical. A traditional census sets out to measure a population with explicit, well-understood limits. An AI summary of administrative data describes whatever the administrative data contains, with limits that are far harder to communicate. When the output of both processes is presented to a minister, a journalist, or a planning team in similar tabular form, the quiet erosion of caveats is easy to miss. Forecasts, funding allocations, and electoral arrangements that rely on accurate population data are then made on a foundation that looks the same as before but is not.
The contrast with Heidi is instructive. Heidi was deployed into a clinical setting that already had a mature culture of documentation, sign-off, and review. The AI was inserted into existing safeguards. The Stats NZ change was made in a context where the methodological safeguards have to be invented at the same time as the AI is introduced, and those safeguards have not yet been clearly articulated to the public. Both decisions used AI. Only one of them put AI inside a system designed to catch its limits.
What does this mean for healthcare and public sector headcount?
In NZ public healthcare, AI is mitigating chronic workforce shortages by lifting capacity per clinician rather than reducing headcount. A clinician seeing one additional patient per shift is a capacity gain, not a redundancy signal. The deployment is being used to manage attrition in a constrained workforce, not to shrink it.
In the broader NZ public sector, the picture is more uneven. Where AI has been used to absorb administrative work inside existing teams, the staffing pattern looks similar to professional services: flat or slowly declining back-office headcount alongside steady or growing service volume. Where AI has been used to substitute for a more rigorous data-gathering exercise, the cost shows up not in headcount but in methodological quality, and the bill arrives downstream. The system gets cheaper to run today and harder to defend tomorrow.
How should NZ healthcare and public sector leaders move from here?
NZ healthcare and public sector leaders considering their next AI deployment should pair every productivity claim with a corresponding governance commitment. Time saved per clinician should sit alongside the privacy, retention, and audit controls that allow that time saving to be defended. Cost savings on a measurement programme should sit alongside the explicit communication of new uncertainties.
The leaders we work with on AI deployments in these settings are increasingly running an internal pre-mortem before procurement: assuming the deployment has gone wrong in 18 months and asking what the failure looked like. That exercise consistently surfaces governance gaps that would otherwise have been deferred. It also surfaces the specific roles, processes, and escalation paths needed to keep the deployment safe at scale, which is exactly the work that Heidi's success rests on. Our AI readiness audit service walks public-sector and healthcare teams through that exercise before vendor procurement begins.
This piece is part of a wider series on the state of AI in NZ business across 2025 and 2026. For organisations preparing for an AI deployment in clinical, regulated, or sensitive contexts, the AI readiness audit is the place we usually start.
Frequently asked questions
- What is the Heidi AI scribe, and is it used across all NZ public emergency departments?
Heidi is an AI clinical scribe that listens to a consultation and produces a structured medical record draft for the clinician to review and sign. After a successful pilot, Health NZ rolled Heidi out across every public emergency department in the country, with usage now extending into mental health crisis teams and primary care. It does not replace clinical judgment or the clinician's responsibility for the record. It removes the typing.
- Did the Heidi pilot really cut clinical documentation time from 17 minutes to 4?
Yes. Reported operational data from the Heidi rollout shows average clinical documentation time per patient falling from approximately 17 minutes to around 4 minutes, with after-shift administrative work dropping by 81% and clinicians on average seeing one additional patient per shift. The headline numbers are unusual in scale and have held up under wider deployment, which is why Health NZ procured more than 1,000 additional licences for mental health crisis teams.
- Is replacing the NZ census with AI-augmented administrative data a good idea?
Replacing the five-yearly NZ census with administrative data augmented by AI summaries saves a projected $400 million but introduces real risk. Census data carries explicit confidence intervals and rigorous coverage assumptions; AI summaries of administrative data can lose those caveats in transit, presenting estimates as facts. The economic decision is defensible. The methodological decision should be paired with strong public reporting of the uncertainties, which has not yet happened at the level the data deserves.
- How should NZ healthcare teams govern AI deployments in clinical settings?
NZ healthcare teams running AI in clinical settings should treat the deployment as a regulated change rather than a tool rollout. That means privacy and security reviews ahead of go-live, clear retention and access policies, sign-off chains for clinical record edits, audit logs of every AI suggestion and clinician decision, and structured ongoing review of accuracy and bias. Heidi's success rests on this scaffolding being in place, not on the model alone.
