Every day you make decisions about which products to use and which not to purchase. Would you rather base those decisions on what someone says on a blog, or on the scientific evidence about a product's safety?
Most of us would answer "evidence." But what kind of evidence? And how much is enough? For some materials, like cigarettes or lead, we have direct human evidence of the dangers of long-term exposure, drawn from the medical histories of consumers and workers. And we certainly know which substances can sicken or even kill quickly; we call those poisons. But what about the other substances, both natural and man-made, that surround us every day and find their way into our bodies in trace amounts? How can we determine whether we should be worried?
Society and science have no perfect answer. Direct testing of chemicals on humans (for example, clinical trials of pharmaceuticals) is done only under the most controlled conditions, at doses low enough not to produce adverse effects, and only after some idea of toxicity has been obtained beforehand. The two most common ways science can assess the possible effects of human exposure to substances in the environment are therefore indirect. One is to conduct controlled tests on animals, primarily rodents, in the laboratory. The other is to compare exposures, behaviors, and other factors between populations of workers or others, to identify factors that may be associated with differences in health outcomes. Both approaches can be inconclusive, but they are currently the best tools the scientific community has. Here is a brief look at each:
- One way of studying possible risks to humans is to investigate cause-effect linkages through animal studies. Typically, researchers expose groups of laboratory animals, usually rodents, to a range of doses of the material in question (e.g., a chemical, electromagnetic radiation). The higher doses are usually far above anything humans might experience in daily life. A high enough dose of just about anything will increase the number of rodents affected, compared with a control group that receives no dose at all. Researchers then look for the highest dose that has no effect on the animals over periods that can range from 60 days to two years, two years being the average life span of a rodent. The "no observed adverse effect level" (NOAEL) or "lowest observed adverse effect level" (LOAEL) then becomes the basis for regulators to set the maximum permissible level of human exposure. It is divided by a factor generally ranging from 100 to 1,000 to account for differences between humans and animals and to build in an added margin of safety for vulnerable groups such as infants and children. These levels are not proof that a substance is harmful above them; rather, they give regulators a cautious basis for protecting public health with a wide margin of safety.
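The arithmetic behind that safety margin is simple. The sketch below uses hypothetical numbers (a NOAEL of 50 milligrams per kilogram of body weight per day and a safety factor of 100) purely for illustration; they are not data from any real study.

```python
# Hypothetical example: deriving a human exposure limit from an
# animal-study NOAEL. All numbers are illustrative, not real data.

noael_mg_per_kg_day = 50.0  # highest dose with no observed adverse effect (rodents)
safety_factor = 100         # e.g., 10x for animal-to-human differences
                            # times 10x for variability among humans

# Regulators divide the NOAEL by the safety factor to get a
# maximum permissible human exposure with a wide margin of safety.
permissible_dose = noael_mg_per_kg_day / safety_factor
print(permissible_dose)     # 0.5 mg per kg of body weight per day
```

A larger factor (up to 1,000) simply widens the margin further when the animal data are sparse or the exposed population includes especially vulnerable groups.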
- Another way to study risks to humans is through epidemiology. Epidemiologists are medical detectives, usually known for their efforts to track down the source and spread of a disease, and they can take a number of approaches. One observational tool is the "cohort study," which tracks a group of people and monitors the influences on them: diet, exposure to pollutants, personal habits, and the like. If over time they develop a disease or a medical condition (say, lung cancer), the incidence of the disease may link up with a certain environmental influence (say, smoking). Statisticians call that a correlation, or an association. Another observational tool, the case-control study, begins with a number of people suffering from the condition whose cause you are trying to identify. They are compared to similar people without the condition, and the epidemiologist looks for differences in lifestyle and environmental exposures (i.e., associations) that might explain the cause of the disease or condition. Both methods look for an association or correlation that relates exposure and outcome.
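To make the idea of an "association" concrete, here is a minimal sketch of how a case-control study is often summarized: the exposure counts among cases and controls are compared with an odds ratio. The counts are invented for illustration.

```python
# Hypothetical case-control data: how many people were exposed to a
# suspected factor among cases (have the condition) and among
# similar controls (do not). These counts are made up for illustration.
exposed_cases, unexposed_cases = 40, 60
exposed_controls, unexposed_controls = 20, 80

# Odds ratio: the odds of exposure among cases divided by the odds
# of exposure among controls. A value near 1 suggests no association;
# a larger value suggests the exposure is associated with the
# condition. It is an association, not proof of causation.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(round(odds_ratio, 2))  # 2.67
```

In this toy example, cases were exposed at higher odds than controls, which is exactly the kind of signal an epidemiologist would then try to confirm or explain away.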
Now for the hard part. Does a statistical correlation prove anything? Statisticians point out that "correlation is not causation." In epidemiology, the link to a disease becomes convincing only when the association is strong and repeatedly found in many studies. There are very few sure bets in epidemiology. "Probable" is not proof. There are many cases where an association that was "statistically significant" (that is, unlikely, by the conventional 5 percent standard, to have arisen by chance if there were no real effect) proved to be nothing but a coincidence.
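The point about coincidences can be demonstrated with a short simulation. If we test many exposures that in truth have no effect at all, roughly 5 percent of them will still come out "statistically significant" at the conventional threshold. The setup below is a toy illustration, not a real study.

```python
import math
import random

random.seed(1)

def two_sample_p(x, y):
    # Approximate two-sided p-value from a two-sample z-test
    # (adequate for large samples; used here only for illustration).
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate 1,000 studies in which the "exposure" truly does nothing:
# both groups are drawn from the same distribution.
false_positives = 0
for _ in range(1000):
    exposed = [random.gauss(0, 1) for _ in range(50)]
    unexposed = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(exposed, unexposed) < 0.05:
        false_positives += 1

print(false_positives)  # roughly 50 of 1,000, i.e., about 5 percent, by chance alone
```

This is why replication matters: any single "significant" finding may be one of those chance hits, while a real effect keeps showing up study after study.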
Epidemiology and animal testing rarely offer proof. They offer evidence, and it is an examination of the weight of all the available evidence that leads to decisions on how to safeguard public health. A responsible and rational regulatory framework in government is based on science and evidence, not on public or political opinion. There is a need to replicate findings in several different situations, rather than to attempt to reach conclusions on the basis of the results from just one study. The scare stories, fad diets, and miracle cures we hear about every day do not serve public health and safety; in fact, they may actually do the opposite. The tools available to public health officials to separate actual risk from needless fear are not perfect, but they produce evidence, and it is the weight of that evidence which helps science to separate the real from the imaginary.