That order went to a pharmacy robot, which diligently counted out 40 pills, put them in baggies, and sent them to the nurse. The nurse thought it was strange, but ultimately trusted the dispensing system that said everything was correct.
The medication error was noticed after the kiddo felt whole-body tingling. Poison control was called, but it didn't seem like they were able to give clear treatment advice. A "rapid response" was called; they came and evaluated the patient. He was left in a non-ICU room. Several hours later he had a seizure. He recovered from the seizure and was then monitored in the ICU.
The problem of having to deal with too many alarms is not unique to the hospital system mentioned in this article. I discovered the other day that at least one model of ventilator has an alarm that sounds when any object sits in front of the display screen. A stethoscope was dangling in front of a corner of that screen where nothing was displayed, and an alarm went off with approximately the same urgency as one that would sound if the ventilator were about to blow a patient's lungs out. The same alarm that goes off when a patient's heart rate jumps from 60 to 250 bpm also sounds when it drifts from 99 to 101 bpm.

The pharmacist who was supposed to be checking my orders for sanity once paged me out of a patient room because he couldn't find the URL for the hospital's policy on titrating a particular medication, a document issued by the pharmacy itself. Most people would agree that it's insane to text and drive on the highway, and yet this is essentially what's expected of every physician in every hospital while they're making major medical decisions.
Makes perfect sense to trust it at that point.
I am more concerned that any medical system allows such a shitty UI, or allows something that could be a fatal dose to be given through the system.
And to think that many are led to believe that the medical system is filled with "professionals" who know what they're doing.
I see enough people just blindly trusting the "experts", because they must be right, because they were trained by a university/college for many years.
Maybe I'm ranting, but I'm sick of blind faith in a system that has been demonstrated, more often than not, to be broken in various ways.
A week later he was mailed a $421,455 bill for the services rendered due to the overdose. /joking... unless they really do.
In real-life scenarios there must be more than just one robot deciding.
Not quite. The article suggested that, yes, they could have ordered it in just mg. But they needed to change a dropdown from "mg/kg" to "mg", which is easily overlooked.
How sad.
This does lead me to wonder if said nurse, and other nurses, are properly trained to think for themselves.
> Poison control was called, but it didn't seem like they were able to give clear treatment advice.
It gets worse... how come they weren't able to? Poor training? By-the-book training?
Shit like this doesn't inspire confidence in me that the medical system is even functioning properly half of the time.
I wouldn’t be surprised if that was the first time in their history.
- The system's dosage caps were disabled
- The dosage was never double-checked, since "technology is so accurate", ignoring that people still make mistakes when using it
- Vast swaths of false alerts are routinely dismissed rather than fixed, with the result that even the meaningful alerts get ignored
But more than anything already listed... the nurse didn't question giving someone 38 pills beyond "must be diluted", yet the article pushes the focus onto how technology led to the error? I just can't see how this title was chosen, beyond clickbait, considering the article itself includes statistics showing that this system has been more reliable than the classic approach.
I would also like to point out that simply fixing alerts isn't necessarily practical. When you're reliant on outside vendors in a highly regulated industry (requiring FDA approval of any and all changes), you can't exactly change on a dime. So while maybe we all in healthcare sneer at device manufacturers, there's not a ton we can do on the user side.
This isn't a consequence of now using technological systems for these steps. Unless you're arguing that the portion of the article covering how the same errors used to happen more often, before the system was put in place, is incorrect.
Fixing ALL false alerts is an impractical goal, but lowering them to something reasonable to work with day to day is not. Epic is a complex system, and a medical one, but that doesn't mean every change involves going to the government; a great deal of it can be customized by the local Epic-trained staff you were mandated to have when you bought the product. That doesn't make it easy, but the majority of the problem is within the organization, not the government or the vendor.
Anyway, my overall disagreement with the article isn't that I think the technology is perfect; rather, it's written as though things are worse than before because of the technology, even as it admits things are better than before because of the technology. It did little to convince me that technology led a hospital to do anything, versus the same problems the hospital has always had with its verification procedures.
The nurse did question it further. I feel as though the articles addressed this and your subsequent point pretty adequately.
> Since the Paleolithic Era, we humans have concocted explanations for stuff we don’t quite understand: tides, seasons, gravity, death. The idea that the Septra might have been diluted was the first of many rationalizations that Levitt would formulate to explain the unusual dose and to justify her decision to administer it. At first glance it might seem crazy for her to have done so, but the decisions she made that night were entirely consistent with patterns of error seen in medicine and other complex industries.
> What is new for medicine is the degree to which very expensive, state-of-the-art technology designed to prevent human mistakes not only helped give rise to the Septra error, but also failed to stop it, despite functioning exactly as it was programmed.
> The human lapses that occurred after the computerized ordering system and pill-dispensing robots did their jobs perfectly well is a textbook case of English psychologist James Reason’s “Swiss cheese model” of error. Reason’s model holds that all complex organizations harbor many “latent errors,” unsafe conditions that are, in essence, mistakes waiting to happen. They’re like a forest carpeted with dry underbrush, just waiting for a match or a lightning strike.
> Still, there are legions of errors every day in complex organizations that don’t lead to major accidents. Why? Reason found that these organizations have built-in protections that block glitches from causing nuclear meltdowns, or plane crashes, or train derailments. Unfortunately, all these protective layers have holes, which he likened to the holes in slices of Swiss cheese.
> On most days, errors are caught in time, much as you remember to grab your house keys right before you lock yourself out. Those errors that evade the first layer of protection are caught by the second. Or the third. When a terrible “organizational accident” occurs — say, a space shuttle crash or a September 11–like intelligence breakdown — post hoc analysis virtually always reveals that the root cause was the failure of multiple layers, a grim yet perfect alignment of the holes in the metaphorical slices of Swiss cheese. Reason’s model reminds us that most errors are caused by good, competent people who are trying to do the right thing, and that bolstering the system — shrinking the holes in the Swiss cheese or adding overlapping layers — is generally far more productive than trying to purge the system of human error, an impossibility.
Is any other line here actually relevant to technology, or just to how errors end up occurring even when you have resilient systems and checks built in? I.e., what does any of this have to do with technology itself?
Is that even possible? Most of the machines are run by robots these days. Even simple devices (e.g. an ECG monitor) have so much automation built into them.
There are some really scary stories in there, and I think it remains a timely exposé of what happens when you aren't careful, given the accelerating rate of change we're seeing in development and deployment, and the "hands-off" attitude we're taking broadly across so many different contexts.
Some of the most egregious examples include the use of thalidomide, which turned out to be teratogenic (babies born without arms and legs), along with the "cool tech!" of X-ray shoe-fitting machines that ended up causing hundreds of thousands of cases of cancer and other diseases, all in the name of advancement (read: profit) by the participants. What's really great is that the book was written before the advent of computers, so it provides a very useful perspective on matters that should be deeply concerning to those in positions that enable the rapid deployment of widespread technology.
I can't recommend it enough.
I don't have much experience with the EMR system we use, but we have two teams of about 5-6 people each dedicated to working specifically on the software, on the clinical and business sides of it. The clinical side, from what I understand, all have backgrounds in nursing, so at least when they're doing interface and other upgrades, they have that experience to draw on.
The article touches briefly on it, but the sheer amount of work the nurses and doctors do also has to be a factor: 12-hour or longer days, and very rushed.
Anyway, thanks for sharing this.
Nurses deal with pills in all kinds of concentrations, particularly in pediatrics, and while I acknowledge that 38.5 pills doesn't pass the sniff test, it isn't absurd in pediatric medicine to wind up giving multiple lower-dose pills in place of one full adult dose (particularly if the required dose sits just below the adult dose, so that a dose has to be "created" from multiple lower-dose infant pills).
I think the article is terribly written (all five parts of it), but I cannot see how anyone could have read it and come away with the conclusion that "the nurse did it." There's like a million degrees of nuance here.
The fact that 38 pills were ordered shows how crazy the problems can get, but a crazy number of pills like that should also be pretty trivial for a nurse to catch. How do you pour 38 pills down an 85-pound child's throat without considering that the consequence may be death?
Meanwhile, that same technology has replaced a reportedly convoluted and error-prone manual process that presumably worked perfectly and never harmed anyone; otherwise, surely the authors of the article would have gone into that.
> [...] But even in simplified form, you can see why the old system was hugely error-prone. A study from the pen-and-paper era showed that 1 in 15 hospitalized patients suffered from an adverse drug event, often due to medication errors. A 2010 study (using data collected during the pre-digital era) estimated the yearly cost of medication errors in U.S. hospitals at $21 billion.
> Those of us who worked in this Rube Goldberg system — and witnessed the harms it caused — anxiously awaited the arrival of computers to plug its leaks. [...]
It would seem to me much better to require the person to enter just the absolute dosage, and have the computer show both the nearest pill-rounded dosage and the effective per-weight dosage that results.
Additionally, instead of a routine alert, why not have a reactive red highlight/warning while the person is filling out the form? The person would actually see the issue while entering it, instead of just seeing an alert that they reflexively close out.
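That reactive check is only a few lines; here is a minimal TypeScript sketch, where the function name, the half-pill rounding, and the 5 mg/kg ceiling are illustrative assumptions rather than anything from the article (the example numbers roughly mirror the case: a ~38.6 kg patient and 160 mg tablets):

```typescript
interface DoseCheck {
  perKgMg: number;     // effective mg/kg for this patient
  pillCount: number;   // nearest half-pill count to display
  outOfRange: boolean; // drives the reactive red highlight
}

// Hypothetical check run on every keystroke in the dose field.
function checkDose(
  absoluteMg: number,     // what the clinician typed
  weightKg: number,       // from the patient's chart
  pillStrengthMg: number, // strength of the dispensed tablet
  maxPerKgMg: number      // drug-specific ceiling, e.g. from a formulary table
): DoseCheck {
  const perKgMg = absoluteMg / weightKg;
  const pillCount = Math.round((absoluteMg / pillStrengthMg) * 2) / 2; // half-pill granularity
  return { perKgMg, pillCount, outOfRange: perKgMg > maxPerKgMg };
}

// A 160 mg/kg mix-up on a 38.6 kg patient is flagged immediately:
console.log(checkDose(160 * 38.6, 38.6, 160, 5));
// { perKgMg: 160, pillCount: 38.5, outOfRange: true }
```

Showing "38.5 pills" next to the field as the number is typed would give the person ordering the same sanity check the nurse should have had.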
As more of the pipeline becomes automated and vulnerable to GIGO, the role of those physically administering the medication needs to adapt accordingly. The nurse should have been trained and encouraged to police such errors. If something looks abnormal, it probably is; push the button that summons the MD to verify the physical thing before administering it.
A hacker getting access to such systems would literally be able to control a patient's life --- or the end thereof.
I saw it right away. That said, I agree with the article that the UI is horrible. It (and many other elements of this story) reminds me of interacting with Enterprise software, although in that case people usually aren't in mortal danger because of it.
I probably noticed it more because I had seen the first screenshot, with a "5" in the same box, which made the "160" look surprising --- but even without that first screenshot I would probably have noticed: the "mg/kg of trimetho" with a search(?) button doesn't make sense. Why would you want to search for that phrase, and why is it cut off like that? Another sign of "Enterpriseness": the two buttons next to it, presumably for setting standard doses, have huge amounts of empty space surrounding them, while the inexplicable "search box" is too small to contain its expected contents.
> Of course, it could leave the unit [mg versus mg/kg] box blank [...] but few systems do that because of the large number of additional clicks it would generate
That makes no sense. As any science teacher (or at least the good ones) will make very, very clear, units are important! I can remember a few incidents[1][2] that occurred because of units confusion.
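This is exactly the kind of confusion a type system can be made to refuse to compile. A minimal sketch using the common TypeScript "branded type" trick (all names here are hypothetical), where mg and mg/kg are distinct types that can only be converted explicitly:

```typescript
// Branded types: both are plain numbers at runtime, but the compiler
// refuses to pass one where the other is expected.
type Mg = number & { readonly unit: "mg" };
type MgPerKg = number & { readonly unit: "mg/kg" };
type Kg = number & { readonly unit: "kg" };

const mg = (n: number) => n as Mg;
const mgPerKg = (n: number) => n as MgPerKg;
const kg = (n: number) => n as Kg;

// Conversion must be explicit, with the patient's weight in hand.
function toAbsolute(dose: MgPerKg, weight: Kg): Mg {
  return mg(dose * weight);
}

declare function placeOrder(dose: Mg): void;

placeOrder(toAbsolute(mgPerKg(5), kg(38.6))); // OK: 193 mg
// placeOrder(mgPerKg(160));  // compile error: MgPerKg is not assignable to Mg
```

The UI equivalent is making the unit an inseparable part of the value the clinician confirms, rather than a dropdown that silently defaults.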
> Had Lucca noticed it, she could have changed it to “mg” with two clicks
Two clicks? Just looking at it, it's not at all obvious to me how to even change the units in this UI. Do you type "mg of trimetho..." into the "search box"? After thinking about this for a bit, my proposal would be either radio buttons for each unit, with no default, or two text boxes, one labelled "mg" and the other "mg/kg", where editing one instantly updates the other (sketched below).
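The linked-boxes idea is a few lines of wiring; a minimal browser-side TypeScript sketch, assuming hypothetical input elements with ids absolute-mg and per-kg-mg and a patient weight already loaded from the chart:

```typescript
// Two-way binding: editing either box recomputes the other, so the
// clinician always sees both interpretations of the number they typed.
const weightKg = 38.6; // would come from the patient's chart

const absBox = document.getElementById("absolute-mg") as HTMLInputElement;
const perKgBox = document.getElementById("per-kg-mg") as HTMLInputElement;

absBox.addEventListener("input", () => {
  const abs = parseFloat(absBox.value);
  if (!Number.isNaN(abs)) perKgBox.value = (abs / weightKg).toFixed(2);
});

perKgBox.addEventListener("input", () => {
  const perKg = parseFloat(perKgBox.value);
  if (!Number.isNaN(perKg)) absBox.value = (perKg * weightKg).toFixed(1);
});
```

Typing "160" into the mg/kg box would instantly show "6176.0" in the mg box, which is far harder to overlook than a unit dropdown.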
> We also needed to address another problem that is not limited to healthcare: overtrust in the technology
Or, more generally, a lack of thinking; people who are taught to follow procedures or "best practices" for the "best results" are only going to follow them unquestioningly. It's unfortunate that a lot of the time the management above only points to a lack of procedure when it's actually this "overproceduring" that can cause such errors --- the incident where NASA's satellite fell over[3] is one example of this. No one thought to even take a look to see whether it had been secured; everyone was too busy executing lists of instructions. If they hadn't been, and had just been given a general description of what they needed to accomplish, I'd bet they would at least have checked the mounting before trying to turn it on its side.
[1] https://en.wikipedia.org/wiki/Gimli_Glider
[2] https://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_...
[3] https://en.wikipedia.org/wiki/NOAA-19#Damage_during_manufact...
I'm relaxed right now and it's the weekend, I'm reading an article in downtime, I have precisely 0 other things going on... and I've been reading for the past 20 minutes about a broken system. Of course I'm going to be primed to take the screenshots to bits, and have sufficient mental bandwidth to do so very effectively on the first try.
>> Had Lucca noticed it, she could have changed it to “mg” with two clicks
> Two clicks? Just looking at it, it's not at all obvious to me how to even change the units in this UI.
Me neither. Perhaps clicking in the search box also opens a dropdown with default values in it? That would account for the two clicks: one to open, one to choose.
Hospitals and practitioners are incentivized to never admit they made a mistake. This article is that writ long.
The screen said what would be ordered, and did exactly that. The ordering physician did not read the screen. Perhaps the screen could be better, but it's not bad. The dosage is bolded.
Someone else reviews it and also doesn't notice the error. I.e. they did a bad job of reviewing.
UCSF staff had turned off notifications and alerts in a very broad manner.
The robot pharmacist gets dragged into this even though it just followed orders, in an article titled "Beware of the Robot Pharmacist". Imagine how long this series would be if the robot had actually made decisions.
At one point a nurse asks the juvenile patient if he thinks 38 pills is too much. In movies, asking a child for advice is the comic low point where we are meant to realize the adult is incompetent. This nurse kept her job.
Hospitals are incredibly political environments, and this article goes out of its way to keep everyone's hands clean. But at the end of the day, multiple people made mistakes, and the author just decides to blame everyone's new favorite bogeyman, "technology".
Having seen hundreds of incident reports, I assure you that most hospital issues are caused by people making mistakes and/or not following procedures. And all signs point to that being the case here.
What would have been more helpful is if the author had followed his own conclusion, and centered and titled the articles around it:
"Safe organizations relentlessly promote a “stop the line” culture, in which every employee knows that she must speak up — not only when she’s sure that something is wrong, but also when she’s not sure it’s right. Organizations that create such a culture do so by focusing on it relentlessly and seeing it as a central job of leaders. No one should ever have to worry about looking dumb for speaking up, whether she’s questioning a directive from a senior surgeon or an order in the computer."
Perhaps if medical leaders stopped pretending that they always do everything perfectly - i.e. the opposite of what this literally CEO-approved article does - people further down the ladder would feel they could also be honest.
>> LET'S GET RID OF TECHNOLOGY!
----(joke end)---- Seriously: a bad title and a bad article. I was thinking the author might be a primitivist or some gradient of one. Look at the Medium article and you'll see the promoted book. His book, "The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age", I assume is him expressing skepticism about why he thinks teledoctors and telemedicine are bad, or something. Biased against technology much, though?