Take all of this blog with the proverbial grain of salt.

Randomized controlled trials (the “gold standard”) are not applicable to all interventions.
Expanding definitions of disease are worrisome.

Claims in the realm of psychology are especially hard to reproduce.

Are psychiatric diagnoses even useful?

Psychiatry and psychology journals can be guilty of spin.

Brain scans can lead to false conclusions.

Empirically supported psychological treatments may also be doubtful.

Are researchers freaked out by those on the autistic spectrum?

How could any credible scientist fall for the Stanford Prison Experiment?

Negative results, often hard to publish and harder still to find, are useful.

Reproducibility in cancer research is also problematic.

But reproducibility may not be indicative of truth.
Contaminated cell lines are a concern.
Peer review does not guarantee quality.

How to spot bogus health claims. Perhaps most claims are bogus!

How studies are skewed: follow the money?
Funding sources, and the resulting conflicts of interest, are often not disclosed.

Why there is so much commercial corruption in nutrition.
Industry involvement in medical trials is not always reported.

Journalists are frequently misinformed.

Scientific information and misinformation are amplified through social media.

Transparent reporting standards help, and hype hurts.

Researchers sometimes fail to appreciate the effect of exercise.
Right-handed people are almost exclusively studied.

Sick, cold, overfed, and tired male lab rats may skew results.

Mice do, though, enjoy running wheels.

Mice are often a poor model of human disease except, maybe, cancer.
The “forced swim test” for anti-depressive effects is bogus.

Studies on dissimilar people can limit applicability.

Poor methodology does not help.
Maybe start with correctly identifying cells used in research?

Food frequency questionnaires (the basis of much here) are fraught with peril.

Such questionnaires are one example of observational studies.

The flawed peer review process, along with misuse of p-values (it’s not hard!).

P-values are not as useful as strong descriptive statistics, including effect sizes.
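A minimal sketch of why this matters: with a large enough sample, even a trivially small difference between groups produces a "significant" p-value, while the effect size (here Cohen's d, computed from scratch with only the standard library) stays negligible. The numbers below are invented for illustration, not from any real study.

```python
import math
import random

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled_sd = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                          / (len(a) + len(b) - 2))
    return (mb - ma) / pooled_sd

def z_test_p(a, b):
    """Two-sided p-value from a large-sample z-test of the difference in means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    z = (mb - ma) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
n = 100_000  # a very large (hypothetical) sample
control = [random.gauss(0.00, 1.0) for _ in range(n)]
treated = [random.gauss(0.03, 1.0) for _ in range(n)]  # tiny true effect

print(f"p-value:   {z_test_p(control, treated):.4g}")  # small p, looks impressive
print(f"Cohen's d: {cohens_d(control, treated):.3f}")  # but the effect is trivial
```

The p-value only says the difference is unlikely to be exactly zero; the effect size says whether anyone should care.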

Outcome switching and statistical fishing – oh, my!
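To see why fishing through many outcomes is so dangerous, here is a toy simulation. It assumes only the textbook fact that a well-calibrated p-value is uniform between 0 and 1 when there is no real effect; everything else (20 outcomes, 10,000 simulated studies) is made up for illustration.

```python
import random

random.seed(1)

ALPHA = 0.05
N_OUTCOMES = 20   # outcomes measured per study, none with any real effect
N_SIMS = 10_000   # simulated studies

def significant_hits():
    """Under the null, a calibrated p-value is Uniform(0, 1), so each of the
    20 outcomes independently comes up 'significant' with probability 0.05."""
    return sum(random.random() < ALPHA for _ in range(N_OUTCOMES))

at_least_one = sum(significant_hits() > 0 for _ in range(N_SIMS))
print(f"Studies with at least one false positive: {at_least_one / N_SIMS:.2f}")
# analytic answer: 1 - (1 - 0.05)**20, roughly 0.64
```

So a study that measures 20 outcomes and reports only the "winner" has about a two-in-three chance of reporting pure noise, which is why pre-registered outcomes matter.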

At least be skeptical: be wary of advice.

It can be tricky, especially for the well educated.
Even personal anecdotes (some are in this blog) can’t be trusted.

Bogus conclusions can be coaxed from good data.
And “good” conclusions can be had by manipulating data.
Some studies here are preliminary, and many are wrong.
Single studies are especially suspect.
Self-reported questionnaires (vs. diagnostic interviews) are suspect.

Scientific taboos, like research on poop.

Risk assessment

Though I cite WebMD often, beware – it is funded by big pharma.
Likewise many “patient advocacy” groups.

Searching for any health information is perilous, unless one first installs Disconnect.

Beware of self-proclaimed health gurus (why amateur is in the title here).

Finally, I am not a licensed medical practitioner. I have no medical training or background, so I am ill-equipped to spot weak studies. If I could find someone more qualified than I to take over this blog (and clean it up), I would!

"The greatest of follies is to sacrifice health for any other kind of happiness." – Arthur Schopenhauer