At ScienceForWork we communicate the science of human behavior at work: specialized knowledge about the conditions under which workplace practices do good or harm, why, and what we can do about it. Since not all scientific evidence is created equal, we apply critical thinking to evaluate the relevance and trustworthiness of scientific studies, so we can answer the questions that arise in managerial practice.


What questions do managers ask?

Most of the burning questions we ask as managers are about cause and effect. These may concern:

  • effectiveness: does using structured interviews improve the accuracy of our hiring compared to unstructured ones?
  • safety: will identifying so-called ‘high potentials’ do more good than harm?
  • cost-effectiveness: does flexible work reduce costs compared to regular work arrangements?

We can also ask questions about:

  • process: how does the practice of performance appraisal work?
  • acceptability: will line managers accept the intervention?
  • satisfaction: are line managers satisfied with the new method of working?

Depending on the question we ask, certain types of scientific studies provide the most appropriate answer.


What research design is most appropriate to answer cause and effect questions?

When we critically appraise a study’s trustworthiness for making causal claims, its methodological appropriateness sets the starting level of trustworthiness. Methodological appropriateness is the degree to which a study can answer a practical question based on its design. The design is the ‘blueprint’ of a study that describes the steps, methods and techniques used to collect, measure and analyze data. We examine a study’s methodological appropriateness by referring to the pyramid of evidence (see image below). For example, to understand the effects of performance appraisal on workplace performance, a meta-analysis of randomized controlled studies has very high methodological appropriateness for demonstrating the underlying causal effects. By contrast, a cross-sectional study has low methodological appropriateness for this question. Still, depending on the question we ask, a cross-sectional study can be highly informative, for example to tell us how satisfied people are with the performance appraisal practice we have in place.

Not all scientific evidence is created equal: Why does methodological quality matter?

Not all that glitters is gold: even a methodologically appropriate study can be untrustworthy! Trustworthiness is also affected by a study’s methodological quality, that is, the way the study was conducted. The trustworthiness of a study with very high methodological appropriateness can drop dramatically when the study is poorly conducted and, as a result, contains several weaknesses. In fact, if a meta-analysis of randomized controlled studies contains too many serious flaws, its trustworthiness can be downgraded from 95% down to 55%, only slightly better than chance.
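To make the downgrading idea concrete, here is a minimal sketch in Python. The starting levels per research design and the ten-point penalty per serious flaw are our own illustrative assumptions, anchored only to the 95%-to-55% example above; this is not CEBMa’s actual appraisal rubric.

```python
# Illustrative sketch only: the starting levels and per-flaw penalty are
# assumptions, loosely anchored to the 95% -> 55% example in the text.

STARTING_LEVEL = {
    "meta-analysis of randomized controlled studies": 95,
    "randomized controlled study": 85,
    "controlled before-after study": 75,
    "cross-sectional study": 60,
}

def trustworthiness(design: str, serious_flaws: int, penalty: int = 10) -> int:
    """Start from the design's level and downgrade for each serious flaw,
    never dropping below 50% (chance level)."""
    start = STARTING_LEVEL[design]
    return max(50, start - penalty * serious_flaws)

# Four serious flaws downgrade a meta-analysis from 95% to 55%,
# only slightly better than chance, as in the example above.
print(trustworthiness("meta-analysis of randomized controlled studies", 4))  # 55
```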


What is the best available scientific evidence?

The pyramid of evidence shows that scientific evidence can come from different research designs. At ScienceForWork we aim to communicate information with high generalizability and reliability, that is, the best available evidence at any time. We mainly choose meta-analyses and systematic reviews: by combining results from many studies to answer a specific question, they provide a more accurate estimate of reality than a single study in a single situation. Building on the painstaking, incremental work of the teams of scientists who produce single studies, meta-analyses help us recognize the big insights about human behavior at work with the highest degree of confidence.

The philosophy of meta-analyses and systematic reviews echoes the old saying that

a dwarf standing on the shoulders of a giant may see farther than the giant himself


In the end, what is our takeaway? The Trustworthiness Score

The methodological appropriateness and methodological quality that result from the appraisal are then combined into a measure of trustworthiness: the chance that the study’s outcome was caused by the intervention or variable(s) studied, after controlling for biases, statistical noise, and other confounding factors. It is visually represented as a Trustworthiness Score, where red marks the answers giving the lowest degree of confidence and green gives the go-ahead for applying the insights.
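As a purely hypothetical illustration of how a percentage could map onto the red-to-green scale, consider the sketch below. The cut-off values are assumptions we chose to match the “act on it / consider it / find better information” philosophy described at the end of this article; they are not ScienceForWork’s published thresholds.

```python
def score_color(trustworthiness: int) -> str:
    """Map a trustworthiness percentage to a traffic-light color.
    The cut-offs are illustrative assumptions, not official thresholds."""
    if trustworthiness >= 80:   # strong evidence: act on it
        return "green"
    if trustworthiness >= 60:   # suggestive evidence: consider it
        return "amber"
    return "red"                # weak evidence: look for better information

print(score_color(90))  # green
print(score_color(55))  # red
```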

We critically evaluated the trustworthiness of the study we used to inform this article. We found that it has a moderately high (90%) trustworthiness level.

This means there is only a 10% chance that alternative explanations, including random effects, account for these results.


Ultimately, our philosophy is that where evidence is strong, we should act on it. Where evidence is suggestive, we should consider it. Where evidence is weak, we should find reliable information and build the knowledge to support better decisions in the future.



You can learn how to assess scientific claims by taking the free e-learning course on Evidence-Based Practice in Management and Consulting offered by Carnegie Mellon University here.

References

Barends, E., Poolman, R., Ubbink, D., & ten Have, S. (2015). Systematic reviews and meta-analysis in management practice: How quality and applicability are assessed? In E. Barends, In search of evidence: Empirical findings and professional perspectives on evidence-based management. Center for Evidence-Based Management: www.cebma.com

You can find the original article here!



Author

Pietro Marenco, Editor and Critical Appraisal Specialist @ScienceForWork