Science isn’t the hard part…
What? Aren’t they all the same thing?
Not necessarily – especially when statistics are involved.
As we’ve already noted, the weekly news-and-commentary magazine The Week (https://theweek.com/) even has a regular column titled “Health Scare of the Week.”
Vitamin D cures cancer. Vitamin D statistically minimizes recurrence of certain types of cancer. Those are two very different statements, and the “statistically” part depends heavily on sample size and sample population. What population was checked? Healthy college students? Aging nursing-home residents? Australian lifeguards exposed to the Sun all day? Big difference. Measures of statistical significance (t-tests, etc.) must also be considered – if you can understand them, that is. More to the point, statistical significance tests are predicated on two fundamental and dangerous assumptions: (1) that the data represent something 100% true, and (2) that one need not understand the relationships between the data and reality.
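The sample-size point can be made concrete with a small sketch. The numbers below are hypothetical, and we compute Welch’s t-statistic by hand: the very same small difference between two groups can look meaningless or “highly significant” depending only on how many people were measured.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic for two independent samples
    (means m, standard deviations s, sample sizes n)."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Hypothetical 2-point difference in some blood marker,
# identical spread -- only the sample sizes differ.
small = welch_t(102, 15, 20, 100, 15, 20)      # 20 vs. 20 subjects
large = welch_t(102, 15, 2000, 100, 15, 2000)  # 2000 vs. 2000 subjects

print(round(small, 2))  # ~0.42 -- nowhere near "significant"
print(round(large, 2))  # ~4.22 -- highly "significant"
```

Same effect, same spread; only the headline changes. Which is exactly why “statistically significant” by itself tells you very little.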
How does one make sense of a statement like “…found that people with the most Mediterranean diet have up to a 40 percent lower risk of developing Alzheimer’s disease” (DeWeerdt, 2011)?
What could that possibly mean??!? For starters, what constitutes a “Mediterranean diet” anyway? How long do you have to “eat Mediterranean” to be “saved” from Alzheimer’s?
At the core of these problems is the distinction between Bayesian statistics and Fisherian statistics. For a blunt explanation of the profound difference between the two, the authors recommend Nate Silver’s book “The Signal and the Noise” (Silver, 2011). Fisherian statistics is a fading paradigm in much of the science world, but its long-term damage has already been profound. Its flawed basic assumptions – including that no prior knowledge can be incorporated into the calculations, and that the data represent only the true state of the world – have led to many of the problems in modern science referred to in previous chapters.
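Why does prior knowledge matter so much? A minimal sketch using Bayes’ rule, with hypothetical numbers for a medical screening test, shows the kind of reasoning the Fisherian framework leaves out: when a condition is rare, even a quite accurate test yields mostly false alarms.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: probability of actually having the condition,
    given a positive test result and the prior (base) rate."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical screening test: 99% sensitive, 5% false positives,
# applied to a condition with a 1% base rate in the population.
print(round(posterior(0.01, 0.99, 0.05), 3))  # ~0.167
```

Under these assumed numbers, only about one positive result in six reflects real disease – a conclusion you can only reach by incorporating the prior, which is precisely what Bayesian statistics does.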
Conflicting scientific papers are a real cause for concern. However, there is no more obvious red flag than a scientific paper with a conflict of interest buried somewhere inside. For example: one paper (Nemeroff et al., 2001, Am. J. Psychiatry 158, 906-912), cited in the scientific literature more than 250 times since 2001, said that the drug Paxil was a wonder antidepressant with minimal side effects. However, there have been accusations that the study’s academic authors were “hand-picked” by the drug company and engaged in gross scientific misconduct. Among other things, they allowed their names to be attached to the original manuscript, which was actually written by an unacknowledged contractor hired by GlaxoSmithKline. That smells like week-old fish to any ethical scientist. The contentious issue of drug-industry influence over medical research is bubbling just under the spill-all-over-the-stove-and-onto-the-floor point as we write this. Honesty and ethics (or the lack thereof) are important parts of this whole issue, and unfortunately the process is money-driven.
And it’s not just the ethics of scientists and the corporations funding them; there is also the powerful driving force of publishers who “need” to attract readers with catchy headlines designed to appeal to the uninformed.
How can you retain your sanity when faced with conflicting and sometimes even ridiculous scientific and health claims? We recommend the following:
- Never take something in the popular “news” media too seriously – it has usually been written by a so-called “science writer,” who may have little actual training in scientific research or in interpreting statistics. It is often some distance removed from the original data, and nearly always carries an incentive for a sensationalist slant.
- Never bet too much on a scientific study until there are repeats and duplicate verifications by independent groups. The supposed discovery of cold fusion is an example: When other scientists couldn’t replicate the results of University of Utah chemists Pons and Fleischmann, it became clear that their “research” and “evidence,” though probably not a deliberate hoax, were clumsily done and not carefully evaluated. A university fearful of losing lucrative patent and other benefits took their ball and ran with it, compounding the original error(s) (Voss, 1999; Ackerman, 2006).
- If something cannot be replicated by independent researchers, it’s probably not true. However, here is yet another confounding issue: replication studies are rarely done, because they are expensive and don’t earn researchers promotions or tenure. We thus have yet another Catch-22.
Progress in science has nevertheless been a growing, living thing. In the early 19th Century, common medical practice – as documented, for example, in the Lewis and Clark expedition (Ambrose, 1997) – consisted of giving mercury for syphilis and bleeding the patient for almost everything else. During childbirth, doctors and midwives regularly bled women into unconsciousness, mainly to relieve their own anxiety at hearing the screams and moans accompanying most births. Medicine at that time generally did more harm than good for humanity, or at least for individual humans.
Deduction and experimentation were in their relative infancy, and steps to improve medicine were slow and halting. Edward Jenner (1749-1823) was observant enough to recognize that milkmaids who had been infected with cowpox did NOT contract smallpox, a lethal killer at the time, and he developed the first vaccine. Louis Pasteur (1822-1895) later figured out fermentation and the process named for him: pasteurization. In 1854 John Snow looked at a map of London and marked the houses where people – mostly children – had died of cholera. He saw that deaths clustered around a single community water pump on Broad Street. He arranged for the handle of this pump to be removed, and voilà! – end of epidemic. Progress, in other words, measured in the lives of children saved (Johnson, 2006).
Perhaps the largest elephant in the room of modern science is the uneven distribution of resources. Is it more useful to society to expend tens of billions of dollars on the Large Hadron Collider to find the Higgs boson? Or does it make better sense to spend 10% of that money on a malaria vaccine to save the million children who die every year from that hideous disease? This resource-allocation issue is always present, at all levels, in medicine and publicly funded science. The media influence the process by the attention they pay to some issues but not to others. Do you spend stretched public resources on something like solving tinnitus (the screeching that about 30 percent of adults are hearing every day and every night of their lives), or finding an AIDS vaccine (a disease that affects roughly 0.3 percent of the US population, and rarely kills anymore) – or do you spend your resources on cholera (which still kills tens of thousands of people every year, mostly children)? Which of these options have YOU noticed most in the news?
Doctors depend on medical research, which itself depends on statistics – to correctly compare the efficacy of one drug or treatment to another. The larger the sample size in a research study, the more reliable the statistics – and therefore the usable results – will be. For example, the results compiled from the 238,000 people in the ongoing Harvard Nurses’ Health Study (http://www.channing.harvard.edu/nhs/) will be more reliable than those of a study with 45 men taking a drug and 52 men taking a control placebo to evaluate a possible treatment for tinnitus (https://www.ohsu.edu/xd/about/news_events/news/2004/07-30-tinnitus-patients-need-n.cfm). Every additional person in a clinical trial, however, costs the researchers additional money. NSF and NIH grants being limited, sample research populations also necessarily tend to be limited.
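A rough sketch shows why the larger study wins. Assume a hypothetical 30 percent response rate and use the usual 95 percent margin-of-error approximation for a proportion: the uncertainty shrinks with the square root of the sample size.

```python
import math

def margin_of_error(p, n):
    """Approximate 95% margin of error for an observed proportion p
    in a simple random sample of n people."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The same hypothetical 30% rate, measured in two very different samples:
print(round(margin_of_error(0.30, 45), 3))       # ~0.134: +/- 13 points
print(round(margin_of_error(0.30, 238_000), 4))  # ~0.0018: +/- 0.2 points
```

With 45 subjects, a “30 percent” finding could plausibly be anywhere from about 17 to 43 percent; with 238,000 subjects, it is pinned down to a fraction of a point. That difference is what the extra money buys.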
We said “tend to be” … read on.
Keep in mind that there are no black-and-white medical fixes or solutions – not even cigarettes vs. no cigarettes. One of our uncles died of dementia at 96 after chain-smoking for 82 years, while a devout Church member friend who had never smoked died of lung cancer at age 45. This helps us understand another problem of medical research: there are so many variables that it’s hard to test for them separately. Thus, for instance, when researchers think they have found a genetic effect – such as a link between obesity and some forms of diabetes – they may just as likely be seeing results influenced by the eating habits of a family’s grandparents, or by its current economic status: poor people cannot afford healthy food. Even with tobacco, which is probably as close as we’ll ever get to a statistical slam-dunk for a known and avoidable bad outcome, there are always glaring “great uncle” exceptions to the rule.
If you want the largest statistical sample possible, to get the most reliable numbers, you have to go to the state level: millions of people. Death from all cancers is lowest in the State of Utah (Lyon et al., 1994). It doesn’t take a rocket scientist to figure out a correlation here.
Joseph Smith gave that one to us in 1835. It’s called the Word of Wisdom, and advises avoidance of tobacco, coffee and tea, and also avoidance of heavy use of meat in our diets except in times of necessity like winter. It also encourages consumption of vegetables, fruit, and grains.