In a recent article in the Health Service Journal, I set out my concern that, despite genuine progress, the NHS risks slipping backwards on patient safety. I wrote it because I am seeing a tension emerge between what we now know about safety and how systems behave when pressure mounts.
Over the last decade, there has been a real and welcome shift in how patient safety is understood. The language has changed. There is broader acceptance that harm is rarely the result of a single individual failure, and that learning requires curiosity, systems thinking, and psychological safety. In our meetings with NHS trusts, HSSIB has learned of inspiring work implementing frameworks such as the Patient Safety Incident Response Framework (PSIRF), which reflects real progress in how trusts are understanding the system risks within their services.
But progress in safety is not linear, and it is never guaranteed.
What investigations keep reminding us
One of the consistent lessons from safety investigations, in healthcare and in other high-risk sectors, is how easily systems revert to old patterns under stress. Not because people stop caring, but because familiar responses feel safer when uncertainty becomes uncomfortable.
When pressure rises, organisations often seek reassurance rather than insight. They count incidents instead of understanding risk. They look for closure rather than learning. They narrow their focus just when it most needs to widen.
From a safety perspective, this is where danger lies. Harm is rarely caused by dramatic failures. It emerges from everyday conditions: workload, trade-offs, assumptions, workarounds, all interacting in ways that only become visible after the event.
If we treat incidents as problems to be closed rather than signals to be understood, we lose the opportunity to see those conditions clearly.
Safety is not the same as performance
A persistent and serious source of confusion in healthcare is the relationship between safety and quality. The two are closely related, but they are not the same. Treating safety as merely one component of quality, or concluding that good quality outcomes indicate safety, risks obscuring systemic hazards and creating false reassurance.
Safety is not just one of the domains of quality. It is the foundation on which other aspects of quality are built. Good outcomes do not necessarily mean safe systems. Outcomes tell us what has already happened. Safety is about what could happen next.
In other industries, this distinction is fundamental. Aviation does not assume that because yesterday’s flights were safe, tomorrow’s will be too. It invests heavily in understanding weak signals, near misses, and system vulnerabilities, even when outcomes look good.
Healthcare often does the opposite. We reassure ourselves with performance data, while risks remain hidden beneath the surface. That false reassurance can be comforting, but it is also perilous.
Culture shows itself under pressure
Much has been written about the importance of just culture, restorative approaches, and psychological safety, and rightly so. But culture only truly reveals itself when things go wrong.
When organisations are under strain, do we lean into curiosity or retreat to certainty? Do we tolerate ambiguity, or demand simple explanations? Do we create space for honest accounts of fallibility, or quietly reintroduce blame through the language of assurance?
From what I have seen, the risk now is not that we reject modern safety thinking outright, but that we adopt it superficially, performing learning rather than practising it.
That kind of regression is subtle. It happens quietly, and generally with good intentions. But it undermines trust, discourages openness, and erodes the very conditions that safety depends upon.
Frameworks do not by themselves create safe systems; the right culture is essential.
Sustainable improvement in safety requires leaders who are willing to accept uncertainty, resist simplistic narratives, and model learning in public. It requires regulators and national bodies to be equally thoughtful about the signals they send, particularly under pressure.
Language matters. If we default to a focus on human fallibility, compliance, targets, and punitive assurance, we should not be surprised when organisations prioritise appearance over understanding.
Being explicit about what we will no longer tolerate (blame, superficial learning, and the misuse of performance data as a substitute for safety) is as important as articulating what we expect to see.
Why this matters now
The NHS has made substantial progress towards a more mature approach to patient safety. That progress should not be underestimated. But it is fragile.
When systems are stretched, safety is often the first thing quietly compromised, not deliberately, but incrementally. If that happens now, we will not simply pause improvement. We may well undo it.
From a safety investigation perspective, the lesson is clear: systems that cannot tolerate honest accounts of fallibility cannot learn; and systems that cannot learn will not be as safe as they should be.
When the system is under pressure, that is not the time to retreat. It is the time to hold our nerve.