Risk and Balance

The “promise” of danger and salvation in risk assessment.
Copyright 2004.

The fundamentals of medical device risk reduction are local observation (interval-based and corrective maintenance, testing labs), technical and administrative assessment, and information feedback networks (including monitoring, problem notification, and product recalls). We have the opportunity to reduce risk when someone (a technician, patient, or clinician), in a hospital, company, or testing lab, gathers, assesses, and disseminates information about a problem.

Again, the central tenets of risk reduction are observation, assessment, and the distribution of information. Early on, when someone, somewhere, finds something suspect and the process of risk assessment begins, critical thinking discriminates by asking two questions: How might this suspected problem pose a danger to patients? And later, how relevant might this information be to patients in other institutions?

To further illustrate this process, consider two types of equipment failures: a loose knob on an ESU, and a batch of ESU ground return pads that exhibit poor adhesion. The loose knob (a moderate risk, fixed by tightening) is probably a familiar issue in many facilities. The defective ground pads (a potentially critical impact on patient treatment, resolved by not using the product) are an issue that other institutions and the manufacturer should be told about. This assessment needs to be distributed.

Currently, there is no comprehensive and unified risk oversight for medical device management as exists for the airline and nuclear industries.

The critical issues for risk management center not only on how Clinical Engineering departments manage equipment maintenance, but on how relevant information is assessed and distributed inside and outside the local hospital by such oversight groups as the EOC (Environment of Care Committee), FDA, JCAHO, and state agencies. Here, assessment refers not only to device failure, but to administrative function. Effective risk reduction involves problem identification, assessment, and reporting of risk-related issues, as well as administrative effectiveness in addressing these issues.

If relevant information is not identified, if no one outside the hospital assesses the effectiveness of the Clinical Engineering department, and if there is no effective mechanism to circulate risk issues, then risk is on its own recognizance. Or as I often find myself thinking, “It’s the missing link, stupid.”

The idea of bad things happening to good people has undergone a change in the past few decades. Risk is now accepted as a concept/tool, used to manage an array of environmental circumstances that can influence an outcome in an undesirable way. As Paul Slovic reports in The Perception of Risk (Earthscan Publications): “Risk does not exist ‘out there,’ independent of our minds and cultures. Instead, human beings have invented the concept of risk to help them understand and cope with the dangers and uncertainties of life. Although these dangers are real, there is no such thing as ‘real risk’ or ‘objective risk.’”

Our attitudes are largely biased. “Which weighs more, a pound of iron or a pound of feathers?” If you say, “iron,” you may be right.

Professional or not, what we consider dangerous, and the degree to which we think a danger is relevant to someone else, comes from a biased view as much as from a cool-headed, studied, objective perspective. Consider the following pairs: driving a car & flying a plane, owning a gun & leaving the front door unlocked, drinking tap water & drinking from a mountain stream, buying stock & buying a lottery ticket, going to the dentist & having a colonoscopy.

Each of these activities carries a generally understood degree of risk and an emotional association that can affect the activities we choose for ourselves, what we teach our children, and perhaps whom we vote for. We can categorize, quantify, and distribute risk analysis, but in so doing, it is likely that our own biased view of information is playing an active role.

Which is a greater tragedy, one person’s, or that of 10,000 people? What if that one person is you? Whom do you feel for more, the neighbor next door, or people halfway around the world? What if it’s your mother that’s halfway around the world?

Several years ago, I met a mechanic for one of the major airlines. At the time, I was considering a career in aviation technology, and posed the following question to him: “Do you think your attitude concerning the work you do is different from someone doing consumer appliance repair?” He almost shrugged the question off: “Not really,” he said. “It’s just a job.”

More recently, I was dropping someone off at Kennedy Airport in New York City. I parked my car, and as I walked to the terminal I noticed two pilots heading the same way; while I waited for the walk light, they sprinted between moving cars and crossed the street.

I may appreciate the concept of there being no absolutes when it comes to risk assessment, but something changes when I enter a hospital as a patient, or assess some other service that has a direct impact on my life. I expect to receive what I need, when I need it. It’s ironic, but holding my behavior to my standards often takes real work and determination, while holding someone else’s behavior to my standards is usually easy to do.

Standards or no standards, it’s easier to care more about my space than about others’: a piece of glass at my front door, I pick it up; a piece of glass at the curb when I’m walking down a busy street, I might not even notice it, let alone pick it up.

I also expect that more attention will be given to aircraft than to my bicycle at the bike shop, even though in reality, failures from either can kill me just as dead. The jet I fly is just as personal to my safety as my bicycle, and can be just as critical in terms of maintenance, yet one appears more significant than the other.

Applying risk assessment on a professional level means that I will attempt to set aside my emotional or personal reactions in order to “see objectively.” But is distancing myself necessarily a good idea? After all, which is the correct view: treat all situations as though my mother or child were the patient, or as though the patient is a stranger halfway around the world?

That’s a dilemma: dissociating myself from my emotional side can lead to some cold conclusions, while behaving as though everything impacts my mother or child can be irresponsible, especially when I’m not the one paying the bill for what may be an excessively cautious view of the hazards of life.

Of course I’m generalizing and dramatizing to help clarify a point. But whether as a technician on the front lines, a hospital administrator, or an agency bureaucrat, we’re all making decisions based not only on reason, but on personal bias. You could say that the balance we are continually forced to keep in our profession revolves around two things: the dollar cost of safety, and the difference between the person at risk being me (or someone close to me) and being someone else entirely. The real danger for the patient comes in not appreciating this.

According to the National Highway Traffic Safety Administration, there were 662 pedalcyclist deaths in the U.S. in 2002. There were also 4,743 pedestrians killed. Many more pedestrians were killed than cyclists, yet pedestrians are not asked to wear helmets.

According to the Center for Devices and Radiological Health (MedWatch), in 2002 there were 1,276 deaths, 18,103 injuries, and another 35,207 malfunctions that placed patients at risk, all attributed to medical devices. According to the Flight Safety Foundation, a total of 1,022 people were killed in 40 aircraft accidents worldwide in 2002. We track the causes of plane crashes with a fine-tooth comb, yet do not even standardize investigation and data collection procedures for medical device failures.

There are far fewer plane crashes than medical device failures, and usually, though not always, greater numbers of people are involved in each incident, so the FAA can justify sending inspectors to every crash. But there is another factor: the image of a downed plane is more dramatic than that of a free-flowing infusion pump.

Medical device technology is seen by many as covering too many types, models, and manufacturers, and as changing too rapidly, for a managed oversight network to be cost effective. Instead, medical device risk oversight relies on a patchwork of private and government groups collecting data supplied by hospitals, each using its own methods of collection and assessment, and conforming to submission standards mandated, for the most part, by the state, the FDA, and to a degree JCAHO, all of which can differ. We also have available a body of summary data, records of deaths, injuries, and malfunctions going back to 1991, collected from mandatory and voluntary reporting.

To illustrate how the current reporting system works: when a technician, clinician, or patient suspects an equipment malfunction, the hospital makes an assessment as to how the situation is to be handled, i.e., whether the EOC, manufacturer, state, ECRI, or FDA will be notified, and determines what information will be submitted. Each of these entities then processes the information using its own data format (device name, level of failure description detail) and makes its own decisions as to what pieces of information get reported to whom.
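To make the fragmentation concrete, here is a minimal sketch of the same suspected malfunction rendered into three different submission formats. The field names and converters are entirely hypothetical, invented for illustration only; they do not correspond to any real FDA, state, or ECRI schema.

```python
# One suspected malfunction, three hypothetical submission formats.
# All field names are invented for illustration; they do not correspond
# to any real FDA, state, or ECRI schema.
incident = {
    "device": "infusion pump",
    "problem": "free flow detected during setup",
    "harm": "none",
}

def to_fda_style(i):
    return {"brand_name": i["device"], "event_text": i["problem"], "patient_outcome": i["harm"]}

def to_state_style(i):
    return {"equipment": i["device"], "description": i["problem"]}  # outcome not captured at all

def to_ecri_style(i):
    return {"device_category": i["device"], "failure_mode": i["problem"], "severity": i["harm"]}

# Each recipient sees a different slice of the same event, and nothing
# guarantees the three records can later be matched back together.
for convert in (to_fda_style, to_state_style, to_ecri_style):
    print(convert(incident))
```

The point of the sketch is not the particular fields, but that every translation step discards or renames information, and no single party ever holds the complete picture.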

Airlines know exactly what information is required, to whom and when they are supposed to report it, and they know the FAA is watching. JCAHO mandates that institutions periodically select and describe equipment-related issues to be monitored for a period of time. The exercise is mandatory, but the choice of equipment and the kinds of data are not, allowing institutions to be highly selective. This may be an education in monitoring, but it is not enforcement.

Summary Points

There currently exists no unified national standard for data assessment, collection, dissemination, or bureaucratic oversight. This includes data types, device and problem terminology, assessment procedures, information routing, and enforcement procedures. Sometimes the relationship between an adverse event and a medical device goes unrecognized; one benefit of systems that collect large amounts of standardized data is that more subtle but significant issues may be uncovered. Currently it is very difficult, or impossible, to select out such detail as incubator motor failures, delays in cases due to defibrillator failure, the percentage of scales or manometers found inaccurate, or failures of devices detected during maintenance inspections rather than in use on a patient. Two programs of note: MedSun (www.medsun.net) now collects comprehensive data on medical device problems from about 200 hospitals that volunteer for the project, and ECRI will be developing an extensive information gathering and assessment system for the state of Pennsylvania.
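As a rough illustration of why a shared vocabulary matters, here is a minimal sketch assuming a hypothetical standardized incident record. The schema and sample values are invented; no such national format currently exists. With it, the "subtle but significant" questions above become one-line queries.

```python
from dataclasses import dataclass

# Hypothetical standardized incident record. Field names and values are
# illustrative only; no such national schema currently exists.
@dataclass
class DeviceIncident:
    device_type: str      # standardized device nomenclature
    component: str        # e.g. "motor", "return pad"
    problem_code: str     # standardized failure terminology
    detected_during: str  # "maintenance" or "patient use"
    outcome: str          # "malfunction", "injury", "death"

# With a shared vocabulary, subtle patterns become simple queries.
def incubator_motor_failures(records):
    return [r for r in records
            if r.device_type == "infant incubator" and r.component == "motor"]

def found_before_patient_use(records):
    return [r for r in records if r.detected_during == "maintenance"]

if __name__ == "__main__":
    sample = [
        DeviceIncident("infant incubator", "motor", "failure", "maintenance", "malfunction"),
        DeviceIncident("infusion pump", "valve", "free flow", "patient use", "injury"),
    ]
    print(len(incubator_motor_failures(sample)))   # 1
    print(len(found_before_patient_use(sample)))   # 1
```

Without agreed-upon terminology and fields, the equivalent questions require reading free-text narratives from dozens of incompatible databases.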

Data collection and assessment should be used to assess the data collectors as well as device failure. A risk management system should be able to detect a hospital that consistently reports extraordinarily low or high failure rates, and/or ineffective failure assessments.
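One way to picture such a check is a simple comparison of each hospital's reported failure rate against the group as a whole. The sketch below is a minimal illustration only; the hospital names, rates, and thresholds are hypothetical, and a real system would need to adjust for equipment mix and inventory size.

```python
import statistics

# Hypothetical reported failure rates (failures per 1,000 devices per year).
# Names, values, and thresholds are illustrative only.
reported_rates = {
    "Hospital A": 12.1,
    "Hospital B": 11.4,
    "Hospital C": 0.3,    # suspiciously low: under-reporting?
    "Hospital D": 13.0,
    "Hospital E": 44.7,   # suspiciously high: over-reporting, or a real problem?
    "Hospital F": 10.8,
}

def flag_outliers(rates, low_factor=3.0, high_factor=3.0):
    """Flag hospitals whose reported rate is far from the group median.

    A consistently extreme rate may say as much about the reporting
    process as about the devices themselves.
    """
    median = statistics.median(rates.values())
    return {
        name: rate
        for name, rate in rates.items()
        if rate < median / low_factor or rate > median * high_factor
    }

print(flag_outliers(reported_rates))
# {'Hospital C': 0.3, 'Hospital E': 44.7}
```

A flag like this is not proof of a problem; it is a prompt for someone to go look at how that hospital is collecting and assessing its data.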

In order to assess risk effectively and fairly, we must balance our personal response with a detached one. We should be aware that device risk assessment is not an objective science, but an inherently flawed process. That said, there are structural information issues that need to be managed by a single group representing the various bureaucratic entities now involved in medical error management.

In assessing risk management, we must keep in mind that any system fails if the data collected does not feed back relevant information to those in direct control of devices used on patients.
