I’ve told you before that it’s okay to do your own research, but you have to know what you’re looking at. What kind of research would someone who wants to believe that vaccines are the source of all evil do? Well, here you go. It’s the kind of “research” that professors like to tear into because it’s cherry-picking at its finest and, worse, it draws the wrong conclusions from what the studies are telling you. So, shall we begin?
They keep asking for it. Antivaccine activists keep asking for a vaccinated vs. unvaccinated study. But they don’t want just any study. They want a study that has as much validity as the most rigorous epidemiological studies out there: Randomized Clinical Trials (RCTs). RCTs are incredibly difficult to set up.
To set up a proper RCT you have to do several things. (I know because I had to help set one up for my master’s degree.) You’re looking to compare intervention A (e.g. a drug, a vaccine, or something like that) to intervention B. You get a group of people and you randomly assign them to group A or group B. Here’s the thing: you don’t tell them which group they’re in. This is called “blinding.” Otherwise, the people who are getting A and believe that it works might feel better or do things to make it work. It’s happened in real life. A group of people who had diabetes knew that the drug they were being given was not the best drug. So they started watching their carb intake, or continued to use their other medication, or exercised. Needless to say, that skewed things a little. That is called “bias.”
But it’s not just the participants that you keep in the dark. You also have to keep the researchers conducting the study in the dark about which group is which. Say, for example, that you have been paid mad money to conduct the study for drug B. If it fails, you’re out of a job. You’re human, have a family to feed, and really don’t want to go back to waiting tables for a living. If you see that the people on B are not doing too well, you might be inclined to either fudge the results a little bit or put them on an additional drug to make them feel better.
Randomization and “blinding” keep bias at bay.
The rest of the study is pretty standard. You check for predetermined outcomes like a change in blood pressure values, fasting blood glucose, days of school or work missed… Things like that. Finally, you compare the two groups for outcomes. Some number crunching later, you determine which of group A or group B is more likely to have the outcome you were measuring, and whether this difference in likelihood was statistically significant.
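As a sketch of that final number-crunching step, here is roughly what comparing the two groups might look like in code. The counts are made up (12 of 100 in group A had the outcome vs. 25 of 100 in group B), and I’m using a plain two-proportion z-test as a stand-in for whatever analysis a real trial would pre-register:

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Compare the outcome rate in group A vs. group B.

    Returns the z statistic and a two-sided p-value: the chance of
    seeing a difference at least this large if the two groups were
    actually the same.
    """
    p_a = events_a / n_a
    p_b = events_b / n_b
    # Pooled rate under the assumption of "no real difference"
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability from the normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical trial: 12/100 outcomes in group A, 25/100 in group B
z, p_value = two_proportion_z(12, 100, 25, 100)
```

With these invented numbers the p-value lands below the conventional 0.05 cutoff, so we would call the difference statistically significant; with smaller groups or a smaller gap, the very same percentages could easily fail that test.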
So why can’t we do this with vaccines?
First of all, we’ve done this with vaccines. Here is the data on what was done to get the HPV vaccine approved. It has been done with other vaccines as well, but you know how anti-vaccine activists are. They don’t want facts to get in the way of their ideas.
Second, we can’t keep vaccines known to work away from children. Why? Because we’d be leaving those in the control group (the unvaccinated group) unprotected from diseases… From some very serious diseases.
Third, anti-vaccine parents will never go for this because there will be a 50/50 chance that their children are immunized. Remember, we randomize the groups into the vaccine or no vaccine groups. We have no control over it, and neither do they. Further, should the anti-vaccine parent find out that their child was vaccinated, do you think that they won’t exaggerate any ailment as being caused by the vaccine or minimize any good outcome — like not missing school due to the chickenpox — and not report it? Bias is a hell of a drug.
Of course, anti-vaccine advocates have come back and said that there are thousands of children who are not vaccinated. Why can’t we go back and look at their medical records and compare those records to vaccinated children?
We can, but what we get from it would be hard to interpret. It would be hard to interpret because these children will have gone to different providers in different healthcare systems at different times and in different locations. Lots of bias there. If we look at only one location, the sample size becomes too small to detect any “signals” in all the “noise.” And then there’s the very real possibility that anti-vaccine parents will tend to be anti-medicine and thus not take their children to licensed healthcare providers, opting instead for “alternative” providers like chiropractors or homeopaths (who might as well call themselves wizards).
So how do we know that vaccines are safe, then?
We know because vaccines have to go through a stringent approval process that includes the agreement of continuing surveillance for adverse events even after the vaccine is “released into the wild.” The vaccines go through several phases of evaluation before they’re licensed to be used. And then, once they’re licensed, healthcare providers, epidemiologists, and scientists are all on the lookout for adverse events. And, contrary to what the anti-vaccine forces will tell you, the adverse events are few and far between, and they are very rarely catastrophic or even deadly.
To quote a friend, “Anti-vaxxers must think that they win the Power Ball lottery every time they play” because of the way they exaggerate the odds of rare events.
Then again, if you still want to do this study, make sure that you are well-funded. (I’d ask the companies selling alternative medicine supplements and products for the cash.) You’re going to need it if any kids in your control group catch a deadly vaccine-preventable disease and you knowingly kept the vaccine away from them.
“My Own Country” (1998) is a movie based on the book of the same name by Dr. Abraham Verghese. It tells the story of Dr. Verghese’s experiences in the South in the early days of the HIV/AIDS epidemic. The movie, like the book, is not for people who are still, to this day, closed-minded about the origins of the epidemic. They should read the book and watch the movie, yes, but it is presented with such brutal honesty that it will only make them revolt against it even more. People who see this movie and are inspired to see human beings as the frail and fallible beings that we are will also come to see people as capable of unconditional love… Something reserved in literature and history only for deities of the highest order.
Dr. Verghese was an outsider in the town of Johnson City, Tennessee. Ethiopian by birth and Indian by heritage, he was, as the movie makes clear, accepted in the town only because of his education. But race is not the issue in this town, not the way the movie is framed. The issue is this new epidemic that has arrived in the form of young, gay men with AIDS. Men who were otherwise healthy and full of life begin to lose weight at a phenomenal rate, become too weak to go on, and eventually succumb to the disease.
The people around these young men are scared to death of what is going on. If you are too young to remember those days — and I’m not — you will see how people truly reacted to HIV and AIDS. They would not touch a person who was infected. They would not hug, kiss, or want to be around an infected person. Even Dr. Verghese’s wife asks him once when he gets home, “Did you wash your hands?” The stigmas and stereotyping are all there, and they are presented without judgment, more as the natural response of society to something that is scaring them to death — sometimes literally.
But it’s not just homosexuals that are seen to be affected in the movie. A heterosexual couple become infected when the husband has sex with men. He is dragged to the hospital by his wife and children and sheepishly admits to having sex with men and women. “I like sex,” he admits. Later, when the wife is told that both she and her sister are also infected, both from the husband, she is seen contemplating suicide. That is what I meant by being scared to death.
Dr. Verghese continues musing about homosexuality and what he is seeing all around him. It is touching because he seems to be trying to rationalize what is going on around him. We all do this. We see such horrors and unspeakable things through the news or in person and we try to tell ourselves that we, humans, are not really that evil. We can’t be. If we were, we would have never progressed as much as we have in this world.
In a post-HIPAA society, it is shocking to see how news of people’s diagnoses spreads through town. People are said to stand up in church and “out” their relatives with AIDS. Employees of the hospital are rumored to be spreading diagnoses to people in the community. When you realize that people who were diagnosed with HIV infection, or AIDS, were fired from their jobs, shunned by their families, or worse, you come to understand why it became necessary to have stronger privacy laws.
Somewhat humorous is a scene where a young man we meet earlier in the film has passed away. His sister comes to make sure that his body looks presentable for the funeral. The mortician is asked to put socks on the body and returns with a silly-looking pair of rubber gloves more fitting for an electrician working on a high-tension wire. The sister remarks that the body is “pickled” and that there “is no bug in the world that’s going to survive that”.
We also see something that is still going on to this day: A family overriding the wishes of their dying relative while the relative’s helpless partner looks on. “We have legal authority,” they claim while the partner is brought to tears at the prospect of extending his beloved’s suffering. Without preaching, just by presenting the facts, we see how this is not the best thing for the patient, only for the family.
Threaded throughout the movie are scenes where the audience gets to see that unconditional love I wrote about above. When a gay man embraces his partner, both crying over the diagnosis, a nurse states that she wishes a man loved her like that. That embrace is powerful because people with AIDS at that time were shunned to the point that people did not want to be in the same room with them at times. Handshakes were questioned, and hugs were forbidden. Ignorance and fear, the most virulent contagions, guided people’s responses. Science and reason, the antidotes to these things, were set aside back then as they continue to be ignored today.
Yet there is hope, there is always hope. We see the hope in this young infectious disease doctor who is doing his best to inform the public on what HIV and AIDS are and what they are not. We see the hope in his staff who work with him and start to understand what is going on and what the best course of action is. And we see hope in the family members of those who are stricken with the disease and come to accept their relatives, love them, take care of them until their dying day, and become advocates in the community for those who are shunned and too weak to defend themselves.
If you are an advocate for public health, for social justice, for equality, then this is a great movie for you to see. The book goes into even more detail, of course, but the movie is powerful enough. When you see that the issues of those days are still here today, you can’t help but to want to rise up and fight it, do something about it. And we must.
Gary Schwitzer has an awesome post on the “Three common errors in medical reporting”. I suggest everyone read it to better understand how medical reporters (or reporters in general) can get things wrong even when they’re trying to get it right. My favorite error is the absolute vs. relative risk fallacy. As Mr. Schwitzer describes it:
Many stories use relative risk reduction or benefit estimates without providing the absolute data. So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).
This is something that people on both sides of a scientific debate are guilty of doing. Some overzealous public health people may say that an intervention is the best thing ever because it “cut in half” the number of new cases of some disease or condition. On the flip side, pseudoscientific zealots may say that some “natural” remedy is “the shit” because it “more than doubled” the life expectancy of a person with cancer… Or something like that.
In both instances, it is absolutely critical that the person making the assertion report on the actual numbers, not just how much – or how little – impact the intervention had. So trust your sources, but always verify. Just because something is “statistically significant” doesn’t mean that it’s “impressive”.
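To make the hip-fracture example from the quote concrete, here is a quick sketch of how those numbers relate to each other. (The function name and structure are my own, not from any statistics library.)

```python
def risk_summary(events_treated, n_treated, events_control, n_control):
    """Absolute vs. relative risk reduction, plus number needed to treat."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    arr = risk_control - risk_treated  # absolute risk reduction
    rrr = arr / risk_control           # relative risk reduction
    nnt = 1 / arr                      # number treated for one person to benefit
    return arr, rrr, nnt

# The quoted example: 2 fractures per 100 untreated women vs. 1 per 100 treated
arr, rrr, nnt = risk_summary(1, 100, 2, 100)
# arr ≈ 0.01 → the 1% absolute risk reduction
# rrr ≈ 0.5  → the headline "50% reduction"
# nnt ≈ 100  → treat 100 women to prevent one fracture
```

Same data, three very different-sounding numbers, which is exactly why a story that reports only the relative figure is telling you half the truth.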
I was reading through the reviews of a restaurant the other day when I came across comments by several people who swore that food from that restaurant had made them sick. One commenter stated that they had become “gravely ill” soon after leaving the restaurant. Another commenter agreed, saying that they had become ill “about a half hour” after eating at the same establishment. Soon after that, others piled on. Watching the ratings site over the following hours, I was upset to see the thread turn into a comedy of stupidity.
Judging by the comments, the incubation time for their disease was between 30 minutes and TWO WEEKS. Not only that, but their onsets were days and weeks apart from each other. This leads me to one of two possible conclusions: 1) The restaurant has an enormous hygiene problem, to the point that it has been making people sick continuously over a span of weeks. Or 2) the commenters were exhibiting – at the very least – recall bias and/or – at the very worst – a mob mentality.
Then again, they could all have been the same person with some sort of vendetta. (I’m not linking or publishing the exact quotes because the restaurant already has enough issues.)
It is very natural for us to associate our illness with the very last thing we ate before we got sick, especially if we are not familiar with things like “incubation times” or the ways the viruses and bacteria that we eat can make us sick. For example, it takes just a few viral particles of Norovirus to make a person sick. The incubation time – the time from infection to symptoms – ranges from 24 to 48 hours with Norovirus, certainly not 30 minutes. That is, you’re completely symptom-free for about a day before you get really sick from Norovirus.
Salmonella and E. coli make you sick through the cunning use of toxins. Alright, alright… They don’t do it on purpose. It’s just that some of their metabolic byproducts, or components of their own cell membranes, may act as toxins once in our gut. Their incubation times? 12 to 72 hours for Salmonella and 3 to 4 days for E. coli. Again, nowhere near the 30-minute mark. And certainly not two weeks later.
What could cause disease in 30 minutes or less, or your money back?
Staphylococcus aureus or Bacillus cereus can make you sick within 30 minutes of ingesting their toxins… But it’s a stretch in this case, especially in light of others reporting such disparate incubation times.
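Just as an illustration, you can line the reported onset times up against those incubation windows. The hour ranges below are the rough figures discussed in this post, not a clinical reference, and the lookup is a toy of my own making:

```python
# Approximate incubation windows in hours, per the figures discussed above
INCUBATION_HOURS = {
    "Norovirus": (24, 48),
    "Salmonella": (12, 72),
    "E. coli": (72, 96),                  # roughly 3 to 4 days
    "S. aureus / B. cereus toxin": (0.5, 6),
}

def plausible_culprits(onset_hours):
    """List the agents whose incubation window contains the reported onset."""
    return [agent for agent, (lo, hi) in INCUBATION_HOURS.items()
            if lo <= onset_hours <= hi]

thirty_minutes = plausible_culprits(0.5)  # only the preformed toxins fit
two_weeks = plausible_culprits(14 * 24)   # nothing on this short list fits
```

A single outbreak whose onsets span 30 minutes to two weeks simply has no one culprit on a list like this, which is the point: the commenters could not all have been poisoned by the same meal.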
This is why health departments and health care providers need to educate the public on the nature and behavior of gastrointestinal disease – and other diseases as well. That lack of understanding can not only lead to a restaurant or other food business being wrongly accused of making people sick – which can sink it financially – it can also muddy up investigations of serious foodborne outbreaks.
Have you ever noticed that reports of case counts from public health sources usually have the word “reported” included in them? You have, haven’t you? Well, have you ever wondered why that is so?
The reason is the inherent nature of epidemiological surveillance and the barriers to getting an exact case count for every single disease or condition out there. Some of these issues with surveillance lead to an overestimation of the number of cases. Others lead to an underestimation. In all cases, it is highly unlikely that you are seeing the true number of cases in any report from public health.
Does that make these reports not useful or even – as some will claim – “manipulated” in any way? Not necessarily, and let me tell you why…
The first thing you need to understand in analyzing descriptive data presented to you from public health sources is the case definition being used in counting cases. A case definition is usually presented in terms of person, place, and time. For example, a case of Salmonella food poisoning may be defined as “anyone with a stool culture positive for Salmonella who ate avocados in Pittsburgh in the week of December 8 to 15”. That’s pretty specific, right?
Case definitions can also be very broad, like saying that a case of Salmonella food poisoning is “anyone with gastrointestinal disease with an onset of December 10 to 17”. This definition would surely bring up many more cases than the previous, more stringent one. So you can see why you need to know exactly what defines a case.
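As a toy sketch of how much the definition matters, here are both definitions applied to the same hypothetical line list. The records, and the year on the dates, are invented purely for illustration:

```python
from datetime import date

# Hypothetical line list of illness reports (invented for this example)
cases = [
    {"culture_positive": True,  "ate_avocados": True,
     "city": "Pittsburgh", "onset": date(2013, 12, 11)},
    {"culture_positive": False, "ate_avocados": True,
     "city": "Pittsburgh", "onset": date(2013, 12, 12)},
    {"culture_positive": True,  "ate_avocados": False,
     "city": "Cleveland",  "onset": date(2013, 12, 13)},
]

def strict_definition(c):
    """Culture-confirmed, ate avocados, in Pittsburgh, onset Dec 8-15."""
    return (c["culture_positive"] and c["ate_avocados"]
            and c["city"] == "Pittsburgh"
            and date(2013, 12, 8) <= c["onset"] <= date(2013, 12, 15))

def broad_definition(c):
    """Any gastrointestinal illness with onset Dec 10-17."""
    return date(2013, 12, 10) <= c["onset"] <= date(2013, 12, 17)

strict_count = sum(strict_definition(c) for c in cases)
broad_count = sum(broad_definition(c) for c in cases)
```

Same three records, very different counts, which is exactly why a reported case count means little until you know the definition behind it.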
Likewise, you need to know what diagnostic tools are being used to define a case. In our example above, we used a stool culture to define the specific case definition and a clinical description of “gastrointestinal disease” to define the second. When being presented with data, make sure that you know what diagnostic tool – or tools – was (were) used. It makes a big difference.
For example, in the late 1970s and early 1980s, we had very little technology with which to isolate the Human Immunodeficiency Virus (HIV). So an HIV infection had to progress to Acquired Immune Deficiency Syndrome (AIDS) – a collection of signs and symptoms of the deterioration of the immune system – in order to count as a case of HIV infection. The AIDS definition itself was very broad at first and was later refined. As more and more diagnostic tools have become available, the case definitions of HIV and AIDS have changed. Where the presence of an opportunistic infection was once enough to diagnose a person with AIDS, there are now lab tests that look at white blood cell counts and diagnose earlier, in order to intervene and treat earlier.
The example with HIV/AIDS above is true of autism as well. It used to be that there was no uniform diagnosis for autism – or any of the conditions that fall within the autism spectrum. Children were either “hyper”, or “retarded”, or “slow”, or had some other condition. As medical science began to understand what it meant to be on the autism spectrum, the definition of someone with autism changed, leading to better recognition of cases and a subsequent rise in the prevalence – the underlying rate of disease in a population – that we see now.
Incidentally, the case definition for autism became more sensitive and specific – and thus more accurate – around the same time that vaccines became more abundant and more widely recommended. This led to the misperception that it was vaccines, rather than the better diagnostic tools, that raised the rates of autism. But that is a whole other discussion.
It goes without saying that an improvement in surveillance methods also changes the number of cases observed and counted. For example, infant mortality reporting has improved as more and more health care providers in the United States become able to report infant deaths electronically. Health departments at all levels of government are more active in their surveillance, surveying hospitals, clinics, and even midwives on infant survival numbers. So you can see how this extra effort to count deaths that previously went unreported has led to the belief that the infant mortality rate in the country has increased.
Other countries don’t have the same systems we do in the United States. As a result, their infant mortality rates come out different – even lower – than those observed here. Is it true, then, that the US is failing to control infant mortality compared to countries with fewer resources? Nope. It’s all in how the numbers are counted. Apples to apples, the rates are much better in the United States, where expectant mothers have better access to prenatal care and children are – for the most part – born in medical facilities capable of caring for them if they are in trouble.
So here is what you do when you compare two rates of a single disease either across time, across location, or even across populations of people. You need to make sure that the case definitions of both datasets are comparable and as close to matching as possible. Otherwise, you really are comparing apples to oranges. You also need to look at the diagnostic methods used for each dataset. There is no use in comparing one dataset whose cases were diagnosed based on symptoms – a subjective way of diagnosing – and another dataset whose cases were diagnosed by a lab – an objective way of diagnosing. Finally, you need to look at the surveillance system that collected these data and make sure that the systems for both sets of data are – yet again – comparable. If one relied on providers reporting cases while the other went out and looked for cases, then – yet again – you will find yourself comparing apples to oranges.