January 15, 2020
Because we have big brains and have been to the moon and have invented remarkably complex stuff, the idea persists that human beings are inherently rational. This idea is both untrue and ironic, but not in a black-fly-in-the-chardonnay way.
Rational beings, by definition, would make decisions based on proven, reliable data and learned reason. Most of the time, human beings use impulses, moods, emotions and urges to make decisions, all the while believing they are the reincarnation of Socrates himself.
But is this a bad thing?
Many researchers argue that, no, it is not a bad thing to be what we are. Trying to think rationally and calculate the data at every turn would be time-consuming and constraining and, perhaps most importantly, would rob humans of the tremendous power of learned instinctual decisions.
Ah, but that “learned” decisions thing … that’s where we get into trouble.
Humans, as it turns out, generally double down when presented with information that contradicts what they believe.
Take health, for example. One study that surveyed 500 Americans about genetically modified foods found that more than 90 percent expressed some opposition to them. That perspective stands in stark contrast to the view of scientists who actually work with genetically modified organisms, about 90 percent of whom think GMOs are safe and potentially beneficial.
Another interesting result from the same study: the respondents most strongly opposed to GMOs also rated themselves as highly knowledgeable about the topic, yet scored worst on tests of actual scientific knowledge.
“In other words, those with the least understanding of science had the most science-opposed views, but thought they knew the most,” writes Aaron Carroll, MD, MS. “Lest anyone think this is only an American phenomenon, the study was also conducted in France and Germany, with similar results.”
When it comes to making decisions about health, these tendencies have a predictable, and sometimes harmful, impact. The current furor over childhood vaccinations is probably the most high-profile example, but it’s not the only one. Dietary supplements are a multibillion-dollar industry in the United States, yet little scientific support exists for their efficacy.
With regard to the patient/doctor relationship, this disconnect between available data and chosen treatment manifests as low-value care, which the American Board of Internal Medicine Foundation launched its Choosing Wisely campaign in 2012 to guard against. In a nutshell, a lot of the care Americans receive delivers little medical benefit, yet it adds more than $200 billion annually to healthcare costs, according to an Institute of Medicine study. That spending might be more acceptable if it benefited the patient or were at least benign, but often it is not.
“And so the harms from a cascade [of questionable tests and diagnostics] — such as cost, time, stress, pain from unnecessary biopsies, and overdiagnosis — can outweigh any benefits, especially when the cascade stems from an unexpected finding or when that initial test wasn’t needed in the first place,” says Ishani Ganguli, MD, an assistant professor of medicine at Harvard.
One potential check on this cascade of unhelpful care is the electronic health record (EHR). A study at Boston Medical Center evaluated a set of EHR interventions for six months after system implementation.
Specifically, the study found a reduction in unnecessary but arguably habitual behaviors on the part of providers. Pre-admission chest x-rays fell by 3.1 percent. Routine lab orders fell by 4 percent. Total use of the lab decreased by more than 1,000 orders a month. These and other interventions, accumulated over time, can make a significant financial difference.
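To put that accumulation in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The 1,000 avoided lab orders echo the study’s figure; the avoided x-ray volume and the per-order costs are hypothetical assumptions for illustration, not data from the study.

```python
# Rough estimate of monthly savings from EHR-driven order reductions.
# The lab-order figure echoes the Boston Medical Center result cited above;
# the x-ray volume and all unit costs are HYPOTHETICAL assumptions.

AVOIDED_ORDERS_PER_MONTH = {
    "routine lab test": 1000,  # "more than 1,000 orders a month" (from the study)
    "chest x-ray": 40,         # assumed absolute volume behind the 3.1 percent drop
}

ASSUMED_COST_PER_ORDER_USD = {
    "routine lab test": 25.0,  # assumed average cost per lab order
    "chest x-ray": 120.0,      # assumed average cost per chest x-ray
}

def monthly_savings(avoided: dict, costs: dict) -> float:
    """Sum avoided orders times their assumed unit cost."""
    return sum(avoided[k] * costs[k] for k in avoided)

savings = monthly_savings(AVOIDED_ORDERS_PER_MONTH, ASSUMED_COST_PER_ORDER_USD)
print(f"Estimated savings: ${savings:,.0f} per month, ${savings * 12:,.0f} per year")
```

Under these assumptions the total comes to about $29,800 a month, roughly $358,000 a year for a single hospital; the point is not the exact dollar figure but how quickly small, automated nudges compound.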
The reason EHRs can have this kind of impact is that doctors, like patients, are human; they fall into thought patterns they can’t break out of on their own. All those years of medical training don’t necessarily enable providers to overcome innate behavior. When doctors become patients, as one study found, they’re not much different from patients with no medical training.
So why aren’t humans responsive to dry statistics and data? As cognitive science has determined in recent decades, we’re just not rational creatures. At our core, we are emotional and communal beings, even if we regularly nurture fantasies about emotional independence and rugged individualism.
“Providing people with accurate information doesn’t seem to help; they simply discount it,” writes Elizabeth Kolbert. “Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science.”
Perhaps, but one could certainly argue that “promoting sound science” is not the same as reducing healthcare costs and convincing doctors and patients to make better decisions. In that regard, the Boston Medical Center study is encouraging and suggests EHRs and other forms of technology have dramatically more potential than they’ve shown thus far.
And didn’t we already know that? That little computer in your pocket that also makes calls has been repeatedly engineered to draw you in and alter your behavior. There’s no reason to assume other devices and programs can’t do the same. Indeed, put to positive use, technology becomes an aggregator of effective decisions, one that relieves us of the unnecessary burden of isolated decision making and demonstrates how connected we really are.
“Relying on our community of knowledge is absolutely critical to functioning,” says Philip Fernbach, a cognitive scientist at the University of Colorado. “We could not do anything alone. This is increasingly true. As technology gets more complex it is increasingly the case that no one individual is a master of all elements of it.”
So, when will the breakthrough in human self-understanding push us to embrace our true nature, make peace with our common humanity and realize that we don’t magically just know things we’ve never really studied? Probably never. In the meantime, the gradual integration of technological nudges into complex processes is steadily improving how humans make decisions.
Long live incrementalism.