Someone wasted their time in biostats class

There have been many times when I’m standing before Master of Public Health (MPH) students, giving a presentation on epidemiology, and I wonder how any of them can even put on their shoes in the morning. Don’t get me wrong; they’re bright students. Many of them graduated from college with impressive grades and great projects. They wouldn’t be at these top-notch universities if they were not bright. (Or if their benefactors didn’t see brightness in them.) Still, I’m not surprised when I see many of those kids get an MPH in epidemiology and never become epidemiologists.

Being an epidemiologist is tough. It requires you to think critically and analyze a problem from different sides and different points of view. Most public health problems requiring epidemiological analysis are huge puzzles with many moving parts. Just being book smart is not enough. Being street smart is not enough. Being charismatic is not enough. Having an MPH, in epidemiology or otherwise, is not enough.

Like many people, I have issues completely comprehending biostatistical analyses. Biostats is tough. Few people get through it and continue to take classes in it. In fact, I look at the biostats crew at my job and shake my head in amazement. They can slice and dice data in ways I can’t even dream of. So I go to them with questions about biostats. It was one of them, a PhD-level young lady, who explained to me why the paper by Dr. BS Hooker was full of, well, BS.

I have never claimed to be all-knowing about things like epidemiology and biostats. I just know what I know, and I know when to ask for help. I don’t like to pound my chest and say that I’m the best epidemiologist out there. I’m not.

So who wasted their time in a biostats class? Who else but the kid. What leads me to that opinion? First, some background.

In epidemiological studies, there is a hierarchy of how much evidence different study designs contribute. At the very bottom is professional opinion. Surely, you would not guide public health policy based on what I or any other person writes on a blog, in an op-ed, or in a letter to the editor. Right above professional opinion, still near the very bottom, are cross-sectional studies. Cross-sectional studies are basically surveys: you survey the population to get an idea of what is going on before you move on to bigger, better-designed studies, like case-control and cohort studies. After case-control and cohort studies come randomized trials, where confounding and bias are better addressed and the results carry a lot more weight in deciding how to solve a public health problem. At the very top of the hierarchy are meta-analyses and systematic reviews, where you take the data from all the different studies and weigh the evidence to separate the wheat from the chaff.

Did you notice where cross-sectional studies lie on that hierarchy? Near the bottom. Why do I point that out? Again, some more background.

The paper by BS Hooker took the data from the DeStefano study and treated those data as a cohort study. That right there was one of many flaws in the BS Hooker paper. You don’t take case-control data (which was how the DeStefano study was conducted) and treat it as cohort data. You just don’t.

When the kid tried to defend the findings of the BS Hooker paper as if his life depended on it, using only a screenshot from a video published by Andrew Jeremy Wakefield (and nothing more), someone pointed out to him (again) the flaws in the BS Hooker approach:

“Hooker didn’t crunch the data as a case-control; he crunched it as a cohort study and without knowing temporality of MMR vaccination with ASD diagnosis, it’s dead in the water.”

The kid took exception to this and made what I believe to be the epidemiological and biostatistical mistake of the year:

“No, he crunched it as cross-sectional.”

I spat my coffee all over my desk when I read this. Not only did BS Hooker torture the data; his protégé is now saying that BS Hooker downgraded the way he treated those data. Remember where cross-sectional studies rank in the hierarchy? I mean, holy sh!t. I knew the kid wasn’t that good at epidemiology, but this confirms how bad he is with biostats.

The same commenter tried to correct the kid (again):

“Anyone looking at how he modelled the data can see that it was a cohort design and if that wasn’t enough, Hooker explicitely states that, “In this paper, we present the results of a cohort study using the same data from the Destefano et al. [14] analysis.” Taking a tumble down the hierarchy of study-design strength, particularly when the dataset available to him was sufficient to conduct a case-control is a bizarre strategy to salvage Hooker’s miscalculated results.”

But the kid can’t be wrong, not even in this case:

“He should have said cross-sectional in that sentence, but it doesn’t change the validity of his results. Relative risk would be more meaningful to the average person than odds ratios and this is an issue which effects (sic) everybody, so I would imagine that is why Brian Hooker conducted it that way.”

HOLY SH!T. He thinks that cross-sectional analyses are better than cohort, and better than case-control as well!!! Even worse, he thinks people are effected, not affected. So call the grammar police!

Of course, to the uninitiated, this doesn’t matter. To the true-believer antivaxxer, the kid is an authority on epidemiology and biostatistics. God help anyone who places their faith in him for analysis of scientific evidence. But thank God that, although the kid is not allowing my comments through, he allows comments from other people who can see through his, well, BS.

If you have some minutes to waste, and you want to have a good laugh, go read the comments section of the kid’s blog post. It’s comedy gold. If you know epidemiology and biostatistics, you’ll have a good laugh at the errors in logic and reasoning that are pervasive throughout his commentary and his readers’ comments.

This was the eighth post that, for the most part, has nothing to do with vaccines.

Spitting on the graves of children lost to influenza

A friend of mine who has worked in influenza surveillance for years sent me this blog post from the Huffington Post. It’s written by Lawrence Solomon, who, by all accounts, has zero experience in infectious diseases or epidemiology. Still, that doesn’t stop him from attempting to write about influenza deaths in an authoritative way, quoting, what else, anti-vaccine and anti-science material. In fact, I need not go further than his first sentence to know what he’s all about in this post:

“Flu results in “about 250,000 to 500,000 yearly deaths” worldwide, Wikipedia tells us. “The typical estimate is 36,000 [deaths] a year in the United States,” reports NBC, citing the Centers for Disease Control. “Somewhere between 4,000 and 8,000 Canadians a year die of influenza and its related complications, according to the Public Health Agency of Canada,” the Globe and Mail says, adding that “Those numbers are controversial because they are estimates.””

Why are these numbers estimates? It’s simple. We can’t possibly count each and every case of influenza, or every influenza-related death, in the world. What we can do is use the tools of science and mathematics to come up with a best estimate. If you read further into Lawrence Solomon’s piece in the Huffington Post, you’d think that we epidemiologists come up with these numbers at random, or, if we do use science and math, that we adjust those numbers to fit some sort of agenda. To make his point, Lawrence Solomon turns to the latest go-to guy, Peter Doshi, PhD (who is not an epidemiologist of any sort but still wants to be some kind of authority on influenza and influenza vaccine science):

“Peer reviewed publications accept Dr. Doshi’s vaccine research, even if he doesn’t meet your standards. But are you saying that you would accept the views of epidemiologists who turned thumbs down on vaccines? It would be my pleasure to present some to you, if that is your test.”


Mental exercises for a better brain

There’s this discussion going on over at Respectful Insolence between an anti-vaccine activist and an epidemiologist like me. The anti-vaccine activist (who I thought was banned from there, oops) is known to be quite “dense” when it comes to epidemiology and biostatistics. I don’t blame him, much. His highest degree in science is in Fire Science. I don’t know where this guy went to school, but most programs I’ve found, like this one, don’t have biostatistics or statistical reasoning in their curricula. That would explain the activist’s misunderstanding of a case-control study. Like the PhD in Biochemistry being discussed by Orac in that post, the activist thinks that matching cases and controls in a study somehow precludes examining their vaccine status and its relationship to autism. They think that cases (autistic children) should be selected to have a different vaccine status than controls (neurotypical children), and only then could we see whether they differ in vaccine exposure.

Can you see the logical fallacy in that?

When statistically significant is insignificant

I love Twitter. I got a hold of this little bit of anti-vax nonsense and just had to bring it to everyone’s attention. Check this out:

[Image: snapshot of a chart from the Verstraeten study; source linked in the original post.]

The original caption is what caught my eye. It reads: “Snapshot of the Verstraeten study dated 02/29/00 showing a statistically significant relationship between mercury exposure and autism.” The words “statistically significant” are the problem, because the image shows no such thing. It shows a statistically insignificant relationship between mercury exposure and autism.

However, I realize that some of these terms might as well be in Chinese to some of you, unless you speak Chinese. So let’s break it down piece by piece.

Relative risk (RR) is the ratio of the risk of developing autism in the intervention group to the risk in the control group. That’s the left-hand axis. The control group doesn’t get thimerosal. The intervention group does.

For example, if the RR is 10, then those exposed to thimerosal have ten times the risk of developing autism of those who were not exposed. An RR of 1 means that there is no difference in the risks; both exposed and unexposed have equal risks of developing autism. In other words, an RR of 1 means no association at all.

Statistical significance means that the results you observe are unlikely to be due to random chance alone. That’s the 95% confidence interval (CI) part. Roughly speaking, if you repeated the same experiment 100 times, the CI would capture the true RR in about 95 of those repetitions. The CI in this chart is represented by the error bars on each value.
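
To make this concrete, here is a minimal sketch of how an RR and its 95% CI can be computed from a two-by-two table. The counts are made up for illustration; they are not from the Verstraeten data.

```python
import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Return the relative risk and a Wald-style 95% confidence interval."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR); the CI is built on the log scale, then exponentiated.
    se_log_rr = math.sqrt(
        1 / exposed_cases - 1 / exposed_total
        + 1 / unexposed_cases - 1 / unexposed_total
    )
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lower, upper)

# Made-up counts: 12 autism cases among 1,000 exposed, 10 among 1,000 unexposed.
rr, (lower, upper) = relative_risk(12, 1000, 10, 1000)
print(f"RR = {rr:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
# Prints RR = 1.20 with a CI that includes 1, i.e., not statistically significant.
```

If the CI that comes out of a calculation like this includes 1, the association is not statistically significant. Keep that test in mind for each value below.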

At <37.5 micrograms, there was no difference between the two groups. The RR was 1. Note the lack of error bars for that value because of the low number of study subjects (n=5).

At 37.5 micrograms, the RR is still 1. Again, no difference.

At 50 micrograms, the RR is 0.93. This means that the control group was about 7% more likely to develop autism than the thimerosal group. BUT the CI includes 1, so the true RR could very well be 1. As a result, this finding is not statistically significant. I certainly would not go out into the streets and proclaim that thimerosal protects against autism.

At 62.5 micrograms, the RR is 1.26, meaning that the group receiving thimerosal is 26% more likely to get autism than the control group. BUT look at the CI again! It still includes 1. As before, this result is statistically insignificant.

At over 62.5 micrograms, the RR rises to 2.48. The CI still includes 1. This result is statistically insignificant.

Wait! Doesn’t this show a trend whereby, if the exposure is high enough, the association gets stronger? Nope. It doesn’t. Look at the error bars: you could draw a flat line at RR = 1.0 through every one of them. Heck, by the logic shown in that article, I could make a case that thimerosal is protective against autism at certain levels.
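
The same test applies at every dose level. Here is a toy version of that check. The RR point estimates come from the chart as described above, but the CI bounds are hypothetical placeholders (the exact bounds aren’t given here); they exist only to illustrate the “does the CI include 1?” question.

```python
# RRs are from the chart described above; the CI bounds are hypothetical
# placeholders chosen only to illustrate the significance check.
doses = {
    "50 ug":    (0.93, 0.60, 1.40),
    "62.5 ug":  (1.26, 0.80, 2.00),
    ">62.5 ug": (2.48, 0.90, 6.80),
}
for dose, (rr, lower, upper) in doses.items():
    significant = not (lower <= 1.0 <= upper)
    print(f"{dose}: RR = {rr}, 95% CI = ({lower}, {upper}), significant: {significant}")
```

Every line prints significant: False, which is the whole point: a rising point estimate means nothing if the CI never excludes 1.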

It’s nonsense (to not use a harsher word).

But anti-vaccine advocates are not known for letting facts get in the way. The author of that piece of nonsense continues with quotes taken out of context from some meeting long used by anti-vaxxers as evidence of a plot... Blah! Blah! Blah!

If you don’t know what is statistically significant and what is not, then that pretty much destroys your entire argument from the get-go. If you try to come off as a researcher, when you’re obviously not, then you lose the argument even worse.

But what about that study? Well, read all about it here, here, here, and here, and see how it has been misused to further the anti-vaccine agenda. Too bad they don’t know the difference between significant and insignificant, or they would not have used this study (or this graph).

Prevalence, Prevalence, Prevalence, Prevalence!

If you have an anti-vaccine agenda, and you want to scare people off vaccines by telling them that vaccines cause autism, and you want to scare them about autism, then all you have to do is get the definition of prevalence wrong. Then, take a national emergency like Hurricane Sandy and write some half-assed blog post about how autism is some sort of a national emergency that needs to be addressed immediately but is being hidden from the public by special interests.

How something that is emergent like that can be hidden remains a mystery to me, but — as always — facts don’t ever get in the way of a good anti-vaccine, anti-government, big conspiracy nut’s blog post. Like this one here. If you can stomach it, go read it, then come back for today’s breakdown of the [redacted] spewed there.

Let us begin with two quick definitions. “Incidence” is the number of new cases of a disease or condition over a given period of time, divided by the number of people at risk. For example, the incidence of cervical cancer would be the number of new cases divided by the number of women with cervices. Note that we don’t include men in that rate/proportion because men don’t have uteri or cervices.

“Prevalence” is the number of existing cases of a disease or condition divided by the total population. For example, the prevalence of diabetes is the total number of diabetes cases in a community divided by all of the people in that community. These two numbers, incidence and prevalence, tell you very different things epidemiologically. Only incidence can tell you whether you have an outbreak, or a national emergency, on your hands.
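
A quick sketch with made-up numbers (not real surveillance data) shows the difference:

```python
# Made-up numbers to illustrate the two definitions; not real surveillance data.
population_at_risk = 100_000   # people who could newly develop the condition this year
new_cases_this_year = 250      # diagnosed for the first time this year
existing_cases = 4_000         # everyone currently living with the condition
total_population = 120_000

incidence = new_cases_this_year / population_at_risk    # new cases per person at risk per year
prevalence = existing_cases / total_population          # proportion living with it right now

print(f"Incidence:  {incidence * 100_000:.0f} per 100,000 per year")
print(f"Prevalence: {prevalence:.2%}")
```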

For a condition such as autism, where the person who has it rarely, if ever, dies from it and can lead a long, productive life, the prevalence will continue to climb and climb as more people are diagnosed and more of them live longer. Even if the incidence (new cases) drops precipitously, the fact that there are any new cases at all means that prevalence will continue to rise. I’ve explained this to you before, haven’t I?

I have.
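
If that seems counterintuitive, here is a toy simulation (every number in it is invented) in which incidence falls by 10% each year, yet prevalence climbs the whole time:

```python
# Toy simulation with invented numbers: falling incidence, rising prevalence.
population = 1_000_000
prevalent_cases = 0
incidence_rate = 0.0010  # hypothetical: new diagnoses per person per year

for year in range(2000, 2011):
    new_cases = int(population * incidence_rate)
    prevalent_cases += new_cases      # nearly everyone diagnosed is still living
    incidence_rate *= 0.9             # incidence DROPS 10% every year
    print(f"{year}: {new_cases} new cases, "
          f"prevalence = {prevalent_cases / population:.3%}")
```

Run it and watch: new cases shrink every year, and prevalence rises every single year anyway.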

I really wish the author of that post had an epidemiologist whom she could ask about these things before making herself look foolish. All she has is an even more hardcore anti-vaxxer who is trying to become an epidemiologist. But that’s a whole other story.

Anyway, back to the post in question. In it, the author states the following:

“Starting in the 1980s the autism rate began an ever-ascending climb. 

1995 1:500
2001 1:250
2004 1:166
2007 1:150
2009 1:110
2012 1:88″

She quickly acknowledges having been told the reason for this climb in prevalence, but she immediately rejects it:

“For years the medical community has been credited with “better diagnosing” of a disability that’s always been around. In other words, we’ve always had people like this in society– we just didn’t call it autism… The trouble is, no one has ever had to prove the claim of “no real increasing—better diagnosing.””

Allow me to highlight the troubling part of her statement:

“…no one has ever had to prove the claim of “no real increasing —  better diagnosing”

No one? Really? What about this, this, this (.pdf), and this? Those don’t count because of [insert conspiracy theory here]? Oh, well, I tried.

And then she gets all conspiracist about it:

“That hasn’t stopped authorities from claiming that they’re out there somewhere, undiagnosed or misdiagnosed. It would be especially interesting to see the 40, 60, and 80 year olds with classic autism, whose symptoms are evident to all. It would be of real significance to find middle aged and elderly people whose health history also included normal development until about age two when they suddenly and inexplicably lost learned skills and regressed into autism.”

In other words, because the author doesn’t see them, they must not exist.

Tell me something. Do you “see” people with schizophrenia everywhere? Well, you should. You should see them, because about 1.1% of the world’s population suffers from it. As it turns out, 1.1% works out to roughly 1 in 91, essentially the same as autism’s 1 in 88.

Let that settle in for a little bit. Maybe get up and stretch and whatnot.

Based on prevalence, there are just about as many people with schizophrenia as there are people with autism. For both conditions, the prevalence will continue to increase, not because there is some “tidal wave”, “hurricane”, or “emergency” of incident cases. Nope. The prevalence will continue to increase because people with these conditions are being treated and accepted, diagnosed and given interventions, and allowed to be part of society. No longer are they institutionalized in the same manner or proportion as they were in the past.

But we don’t “see” them everywhere because these kinds of conditions manifest themselves A) at a certain age and B) as a spectrum. You don’t see kids with schizophrenia because it manifests in young adulthood. You don’t see a lot of schizophrenic adults because they are either being treated for their condition and leading “normal” lives or are institutionalized (e.g., in sanatoria or even jail). Likewise, you don’t “see” autistic children everywhere because, well, seriously, how many of us wander around elementary schools? And the 1 in 88 adults? I’ll get to that in a second.

By the way, I have several friends with mental health issues, including schizophrenia, and central nervous systems that are not typical, and I love them to death. But I digress…

The author of the misinformed, misconstrued blog post then demands to see the following:

“The problem is no one has ever been able to show us the one in 88 adults with autism.”

The author wants to believe, or make her readers believe, that 1 in 88 adults has autism. I hope it’s an oversight on her part, because that prevalence estimate for autism is for children. Here, I’ll show you:

“About 1 in 88 children has been identified with an autism spectrum disorder (ASD) according to estimates from CDC’s Autism and Developmental Disabilities Monitoring (ADDM) Network.”

It’s children. There are fewer children than adults in the United States, so you can’t extrapolate that number willy-nilly to adults without some biostatistics. Again, if only she had a [expletive] epidemiologist to help her sort these things out and not come across as so idiotic.

Finally, do me a favor and don’t even mention the author’s name in the comments. She’s been known to go all “decepticon” and have her bots fill comments sections with what can best be described as manure.