Which studies are reliable? Be mindful of confirmation bias…


– Given the stakes, the reliability of studies and research is more important than ever.
– So learning whether to trust a study matters, and there are ways to assess it. For starters: is it peer reviewed?

Over on my mindfulness blog – RealGoodFresh.com – I’ve blogged about being aware of confirmation bias. To be precise, it’s about “being aware” of that bias – simply acknowledging that it exists rather than being wary or cautious of it. The distinction comes down to trying to make sense of your bias. It’s inevitable that you will have some.

Anyway, I was really digging Harald Walkate’s note on LinkedIn that said:

It’s hard work keeping up with Alex Edmans’ steady flow of LinkedIn posts, but it’s really worth it, at least if you’re interested in evidence-based ESG approaches. And this post provides some handy rules of thumb on how to assess the equally steady flow of papers coming out that ‘demonstrate’ that ESG funds outperform.

Let’s all be mindful of our confirmation bias here – like Alex I’m a strong supporter of SRI, but we ESG proponents should be careful not to fall in the trap of believing things are true because we would *like* them to be true. Wanting to save the planet does not provide an excuse to not understand how financial markets, investments, public policy, economics and academic research work. If you want to make a difference through sustainable finance, you’ve got to study up folks!

Harald is referring to this note from Alex Edmans:

My LinkedIn feed is erupting with excitement about a new meta-analysis apparently proving that SRI always pays off. It’s being lauded as “infallible facts” and “proof”. As a strong supporter of SRI I’m tempted to believe the results, but we have to be careful of confirmation bias. The meta-analysis includes some very flimsy papers, either published in low-ranked journals or not even peer-reviewed at all – so there may be basic methodological mistakes.

A meta-analysis finding “60% of papers show X” places the same weight on a sloppy study as a rigorous one. Averaging is worse for SRI since you can’t lump a diversity study with a Catholic values study – some aspects may pay off, others may not. If the analysis found the opposite, we wouldn’t take it at face value – so we should apply the same scrutiny even if we like the results.

How does a time-pressed person learn what the evidence actually says? One way is to follow this simple user’s guide to discerning which studies are reliable. Another is to read a “survey paper,” which summarizes the best academic evidence, explaining what it does and doesn’t show. Pedro Matos’s survey is excellent.

And here’s a note from Tom Gosling:

I led my first academic seminar in 25 years today, presenting initial findings from a survey of investors and board directors on CEO pay at a London Business School finance seminar. Fortunately my faculty colleagues were kind to me, and my co-authors Alex Edmans and Dirk Jenter were there to answer the tough questions.

It brought home to me the rigorous and extensive process academics go through, with multiple rounds of testing, challenge, and review, before papers are even submitted for publication to a peer-reviewed journal. It can be a slow process, but done properly it ensures rigor and helps filter out sloppy analysis. This is why high-quality academic research is often more reliable than ‘studies’ produced by consultancies, think tanks, or special interest groups.

Of course, academic research can also be bad, and the experience (plus some recent exposure to particularly bad such ‘studies’) brought to mind the excellent guide to evaluating evidence produced by Alex Edmans.

Our findings will be stronger for it and we’re looking forward to writing up the results first for participants in the study later in the spring and then for academic journals and the wider market.

Finally, you might want to check out this article from “Boston Review” entitled “The Quest to Tell Science from Pseudoscience.”