There have been many times when I've stood before master of public health (MPH) students, giving a presentation on epidemiology, and wondered how any of them can even put on their shoes in the morning. Don't get me wrong; they're bright students. Many of them graduated from college with impressive grades and great projects. They wouldn't be at these top-notch universities if they were not bright. (Or if their benefactors didn't see brightness in them.) Still, I'm not surprised when I see many of those kids get an MPH in epidemiology and never become epidemiologists.
Being an epidemiologist is tough. It requires you to think critically and analyze a problem from different sides and different points of view. Most public health problems requiring epidemiological analysis are huge puzzles with many moving parts. Just being book smart is not enough. Being street smart is not enough. Being charismatic is not enough. Having an MPH, in epidemiology or otherwise, is not enough.
Like many people, I have trouble completely comprehending biostatistical analyses. Biostats is tough; few people who get through it go on to take more classes in it. In fact, I look at the biostats crew at my job and shake my head in amazement. They can slice and dice data in ways I can't even dream of. So I go to them with questions about biostats. It was one of them, a PhD-level young lady, who explained to me why the paper by Dr. BS Hooker was full of, well, BS.
I have never claimed to be all-knowledgeable on things like epidemiology and biostats. I just know what I know, and I know when to ask for help. I don’t like to pound my chest and say that I’m the best epidemiologist out there. I’m not.
So who wasted their time in a biostats class? Who else? The kid. What leads me to that opinion? First, some background.
In epidemiological studies, there is a hierarchy of how much evidence each study design contributes. At the very bottom is professional opinion. Surely, you would not guide public health policy based on what I or any other person writes on a blog, in an op-ed, or in a letter to the editor. Right above professional opinion are **cross-sectional studies**. Cross-sectional studies are basically surveys: you survey the population to get an idea of what is going on before you move on to bigger, better-designed studies, like case-control and cohort studies. After case-control and cohort studies come randomized trials, where issues of confounding and bias are better addressed and the results carry a lot more weight in deciding how to go about solving a public health problem. At the very top of the hierarchy are meta-analyses and systematic reviews, where you take the data from different studies and weigh all the evidence to separate the wheat from the chaff.
Did you notice how I bolded where cross-sectional studies lie on the hierarchy? Why would I do that? Again, some more background.
The paper by BS Hooker took the data from the DeStefano study and treated those data as a cohort study. That right there was one of many flaws in the BS Hooker paper. You don’t take case-control data (which was how the DeStefano study was conducted) and treat it as cohort data. You just don’t.
When the kid tried to defend the findings of the BS Hooker paper as if his life depended on it, armed with nothing more than a screenshot from a video published by Andrew Jeremy Wakefield, someone pointed out to him (again) the flaws in the BS Hooker approach:
“Hooker didn’t crunch the data as a case-control; he crunched it as a cohort study and without knowing temporality of MMR vaccination with ASD diagnosis, it’s dead in the water.”
The kid took exception to this and made what I believe to be the epidemiological and biostatistical mistake of the year:
“No, he crunched it as cross-sectional.”
I spat my coffee all over my desk when I read this. Not only did BS Hooker torture the data; his protégé is now saying that BS Hooker downgraded the way he treated them. Remember where cross-sectional studies rank in the hierarchy? I mean, holy sh!t. I knew the kid wasn't that good at epidemiology, but this confirms how bad he is with biostats.
The same commenter tried to correct the kid (again):
“Anyone looking at how he modelled the data can see that it was a cohort design and if that wasn’t enough, Hooker explicitely states that, “In this paper, we present the results of a cohort study using the same data from the Destefano et al. [14] analysis.” Taking a tumble down the hierarchy of study-design strength, particularly when the dataset available to him was sufficient to conduct a case-control is a bizarre strategy to salvage Hooker’s miscalculated results.”
But the kid can’t be wrong, not even in this case:
“He should have said cross-sectional in that sentence, but it doesn’t change the validity of his results. Relative risk would be more meaningful to the average person than odds ratios and this is an issue which effects (sic) everybody, so I would imagine that is why Brian Hooker conducted it that way.”
HOLY SH!T. He thinks that cross-sectional analyses are better than cohort, and better than case-control as well!!! Even worse, he thinks people are effected, not affected. So call the grammar police!
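For readers who want to see why the sampling design matters so much here, a toy calculation makes it concrete. (These numbers are made up for illustration; they are not the DeStefano or Hooker data.) In a case-control study, the investigator decides how many controls to recruit, so only the odds ratio is a stable measure of association; a "relative risk" computed from the same table shifts with that arbitrary recruitment choice:

```python
# Toy 2x2 table from a hypothetical case-control sample
# (made-up numbers, NOT the DeStefano/Hooker data):
#
#                 cases   controls
#   exposed         40        30
#   unexposed       60        70

def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c); valid under case-control sampling."""
    return (a * d) / (b * c)

def naive_risk_ratio(a, b, c, d):
    """[a/(a+b)] / [c/(c+d)]; only meaningful when the rows come from a cohort."""
    return (a / (a + b)) / (c / (c + d))

# a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls
print(odds_ratio(40, 30, 60, 70))        # ~1.56
print(naive_risk_ratio(40, 30, 60, 70))  # ~1.24

# Now pretend the investigators had simply recruited twice as many controls --
# an arbitrary design choice, not a fact about the population:
print(odds_ratio(40, 60, 60, 140))       # still ~1.56: the OR is unaffected
print(naive_risk_ratio(40, 60, 60, 140)) # ~1.33: the "risk ratio" drifted
```

That is why "relative risk would be more meaningful to the average person" is no defense: you can't get a valid relative risk out of case-control data just because it sounds friendlier.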
Of course, to the uninitiated, none of this matters. To the true-believer antivaxxer, the kid is an authority on epidemiology and biostatistics. God help anyone who places their faith in him for analysis of scientific evidence. But thank God that, although the kid isn't letting my comments through, he allows comments from other people who can see through his, well, BS.
If you have some minutes to waste and want a good laugh, go read the comments section of the kid's blog post. It's comedy gold. If you know epidemiology and biostatistics, you'll marvel at the errors in logic and reasoning that are pervasive throughout his commentary and his readers' comments.
This was the eighth post that, for the most part, has nothing to do with vaccines.