Tonight’s ramble is going to be my take on the MAHA movement. My view is
heavily influenced by the years (in the early 1970s) when I studied
biochemistry through my junior year of college. I imagine some people
will throw that out as being too “establishment,” but I think there are good
things to talk about and ideas to spread around down this road.
The first thing I stumbled across that made me pay attention to RFK Jr. was
him saying that in the past, autism struck something like 1 in 10,000 kids
while today it’s 1 in 34. Put another way, it has gone from a proportion of
0.0001 (0.01% of kids) to 1/34, or about 0.0294 (2.94%). At almost 300 times
the previous rate, that’s a monstrous increase and it really needs to be
investigated.
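The arithmetic behind that claim is easy to check. This little sketch just redoes the numbers quoted above (the 1-in-10,000 and 1-in-34 figures are the ones from the text, not independent data):

```python
# Comparing the two reported autism rates quoted in the text.
old_rate = 1 / 10_000   # 0.0001, i.e. 0.01%
new_rate = 1 / 34       # about 0.0294, i.e. 2.94%

increase = new_rate / old_rate
print(f"old rate: {old_rate:.4%}")
print(f"new rate: {new_rate:.4%}")
print(f"increase: {increase:.0f}x")  # about 294x, "almost 300 times"
```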
The problem is that we don’t know, as proven by any real science, why this has
happened. Some people will say vaccinations, but we have just as much
proof of that as we have that it was chemtrails (or that chemtrails are simply
jet engine exhaust), or anything else. So how do we establish a cause
with as little doubt as we can? How do we prove if one specific thing
causes an effect?
As I quipped the other day, junk science is a favorite topic of mine, but we
have enough now. We don't need to add volumes more junk in the effort to
improve the many widely quoted statistics.
The gold standard way to really prove causation is double-blinded, randomized,
controlled trials (I’ll just call them RCTs because that seems to be common) –
and potentially a LOT of those RCTs. The golden rule here is: the bigger
the population being experimented on, the better. That makes these sorts
of studies hard to do; they take a long time and burn dumpsters full of money.
So what are RCTs? A controlled trial is an experiment with two groups:
the experimental group that gets the thing being tested, and a second group,
called the control, that gets something expected to have no effect at all,
usually called a placebo. (While many people envision something like a
sugar pill, sugar clearly has effects on some things, so the placebo has to
be carefully chosen – a placebo for an injection might be “normal saline,” or
saltwater.) Randomized means that the pool of subjects chosen for the study
is as similar as possible, and exactly which group a subject
goes into (experimental or control) is decided randomly. Blinding a study
means that either the subject or the experimenter who gives the treatment
doesn’t know which group the subject is in; double-blinding means that neither
the subject getting the treatment nor the person giving it can know whether
it’s the real treatment or the placebo.
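The randomization step described above can be sketched in a few lines. This is a toy illustration only – the subject labels and group sizes are made up, and a real trial has far more machinery around it:

```python
import random

# Toy sketch of randomized assignment: shuffle the subject pool,
# then split it in half into treatment and control groups.
subjects = [f"subject_{i:03d}" for i in range(1, 201)]  # 200 hypothetical subjects

rng = random.Random(42)  # fixed seed just so the example is repeatable
rng.shuffle(subjects)

treatment = subjects[:100]   # gets the real treatment
control = subjects[100:]     # gets the placebo

# Double-blinding in practice: neither the subject nor the clinician sees
# these lists; a third party holds the assignment key until the study ends.
print(len(treatment), len(control))  # 100 100
```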
I hope you’re seeing a big problem here. Let’s say we want to find out
whether giving a particular vaccination causes autism. We need two big groups –
the bigger the better – to experiment on. Then we have to monitor them
for however long we think it takes to be able to say “if they haven’t
developed autism by now, they’re not going to.” How long? Here the
answer might not be as long as it would be for other questions. Maybe
there’s evidence that if they don’t start showing signs in the first couple
of months they never will; maybe it’s more like five years; and maybe it’s
10 years or well into adulthood.
Now it gets harder to run the tests. Nobody gets just one vaccination;
today’s kids get far more than kids did even 30 years ago. In an RCT, we
can test whether getting two specific vaccines, staggered in time however the
protocols assign them, causes autism. The subjects can’t get any other
vaccines or anything the control group doesn’t get. We need more huge
groups to experiment on.
And it gets even harder – astronomically harder. Probability and statistics
classes cover how to compute how many possible combinations there are. The
real situation is worse than this, but let’s assume kids get 15 vaccines and
we want to test every combination of two out of the 15 in an RCT. How many
RCTs does it take? The combinations formula C(n, k) = n! / (k! (n − k)!)
gives C(15, 2) = 105.
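The count can be checked with Python’s built-in `math.comb`, which computes exactly this “n choose k” figure:

```python
import math

# Number of 2-vaccine combinations out of a schedule of n vaccines,
# i.e. the number of pairwise-interaction RCTs discussed above.
for n in (15, 30):
    pairs = math.comb(n, 2)  # n! / (2! * (n - 2)!)
    print(f"{n} vaccines -> {pairs} pairwise RCTs")
# 15 vaccines -> 105 pairwise RCTs
# 30 vaccines -> 435 pairwise RCTs
```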
The answer: testing 15 vaccines 2 at a time takes 105 RCTs – 1 vs 2, 1 vs 3,
up to 1 vs 15, then 2 vs every other, 3 vs every other, and so on. If it’s
30 vaccines – doubling the n in the calculation – that 105 jumps to 435. The
last time I did any research on this question, the result was that there has
never been even one such test of interactions between combinations of
vaccines, but it has been some years since I looked.
The sheer number and cost of those tests could be one of the reasons this has
never been done; however, just vaccinating everyone instead of testing
rigorously and carefully should not be the way to approach this. This
is one of the reasons why science is in such deep trouble these days.
Now think of something harder to do an RCT on: dietary guidance. An
example some people might be interested in would be something along the lines
of “if I eat something they say is bad for me once a week, let’s say bacon, is
that going to shorten my life compared to never eating it?” To do a
rigorous RCT, you’d need a couple of large groups of people who are
genetically similar (to rule out effects from that) and study them from
childhood throughout their entire lives. The two groups would need to
eat exactly the same thing as each other at every meal for their entire lives
before a conclusion could be reached. How could anyone be sure it was that
one food unless the number of deaths in the test group (the one that ate the
bacon) was statistically higher than the number in the control group?
This experiment is unethical, to say the least. The experimenters would
have to commit a group of children to being experimented on for their entire
lives – long before they could make that decision. Whoever is paying for
the test would have to pay for every single meal for both groups for up to a
hundred years. Kids growing up in either group would have to be
isolated. No going out to meals with friends, no just going out for a
late night pizza or any sort of “social eating.” Not to mention not
having a conclusion until long past everyone associated with starting the
experiment has passed away.
So what are the alternatives to doing a
lifelong RCT? The usual approach appears to be to study some number of people
who get the treatment and then see whether their rate of early death (or
whatever outcome is of interest) is close to the general population’s.
This is relatively easy; the numbers of people are smaller, they
aren’t really subjected to getting a test substance, and they don’t need to be
housed separately or cared for differently. In the case of eating the
bacon, we give a group of people forms to record what they eat and
when. The typical way of doing this is a questionnaire that’s filled out in
retrospect, called a Food-Frequency Questionnaire or FFQ. It’s not
quite the same as someone asking, “what did you have for lunch on March 10,
2023?” two years after the fact, but it’s close. In processing data from
the FFQs, the software could separate out those who claimed to have eaten
bacon from those who didn’t claim to and see if their rates of death
differed from the general population’s.
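That separate-and-compare step might look something like this. Everything here is invented for illustration – the records, the field names, and the idea that one pass over the data settles anything (a real analysis would adjust for age, sex, and dozens of confounders):

```python
# Toy FFQ processing: split respondents by self-reported bacon eating
# and compare each subgroup's early-death rate. All records are made up.
records = [
    {"id": 1, "ate_bacon": True,  "died_early": False},
    {"id": 2, "ate_bacon": True,  "died_early": True},
    {"id": 3, "ate_bacon": False, "died_early": False},
    {"id": 4, "ate_bacon": False, "died_early": False},
    {"id": 5, "ate_bacon": True,  "died_early": False},
    {"id": 6, "ate_bacon": False, "died_early": False},
]

def death_rate(group):
    """Fraction of a group flagged as having died early."""
    return sum(r["died_early"] for r in group) / len(group)

eaters = [r for r in records if r["ate_bacon"]]
abstainers = [r for r in records if not r["ate_bacon"]]

print(f"bacon eaters: {death_rate(eaters):.2f}")      # 0.33
print(f"non-eaters:   {death_rate(abstainers):.2f}")  # 0.00
```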
As I’ve said over and over, it then becomes a matter of correlations, what I
call “he-who” studies: he-who eats 3 ounces of bacon per day correlates with
the group that lived the expected lifespan, or lived longer or shorter. To
stretch the example to absurdity, let’s say that in the past 50 years, life
expectancy in the US has gone up. Anything else that has also increased, or
has become more common, in the same time range can be correlated; we could say
that since global temperature has gone up, global warming is extending
lifespans. The standard method of testing whether a correlation is
good enough to claim possible causation is a significance test: if the
probability of seeing a correlation that strong by pure chance is below a
threshold (typically 5% – the familiar “p < 0.05”), the correlation is
considered unlikely to be random agreement. It’s simply not robust enough, IMO.
Do you see the immediate problem here? If autism increased
dramatically at the same time that the number of vaccines increased
dramatically – as it did – that’s automatically a correlation, one which
could mean exactly nothing.
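The point can be made concrete: any two quantities that merely trend upward over the same period will show a strong correlation coefficient. A toy demonstration with invented numbers – a stand-in for life expectancy against a stand-in for any unrelated rising quantity:

```python
# Two made-up, unrelated upward trends over 10 "years".
years = list(range(10))
life_expectancy = [70 + 0.2 * y for y in years]                 # invented
unrelated_trend = [15.0 + 0.03 * y + (y % 3) * 0.01 for y in years]  # invented

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(life_expectancy, unrelated_trend)
print(f"r = {r:.3f}")  # very close to 1.0, despite no causal link at all
```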
I've mentioned John P. A. Ioannidis on my pages many times before. He's the
author of what’s widely quoted as one of the most downloaded papers in
history, “Why Most Published Research Findings Are False,” in which he argues
that as much as 70% of published science may be wrong. One of the features of
that paper is a list of corollaries – rules of thumb for what makes findings
more or less likely to be true. Allow me to post two of them here; I think
they're relevant:
Corollary 5: The greater the financial and other interests and prejudices
in a scientific field, the less likely the research findings are to be
true.
Corollary 6: The hotter a scientific field (with more scientific teams
involved), the less likely the research findings are to be true.
Both of those seem to explain a lot of "newspaper-reported science" perfectly.