When someone reports a scientific effect as statistically significant, what does that mean? Furthermore, what’s the size of that effect – large, small, or somewhere in between? This is what Dr. Kevin Peters, associate professor in Trent’s Psychology department, is working to find out in his project Effect Sizes in Psychology: Expert Perspectives.
For years, scientists have been using statistical tests involving p-values to determine the statistical significance of an effect, such as the difference between two groups or the correlation between two variables.
“Normally in science a probability value (p-value) has to be below a certain value, around .05, before we can say it’s statistically significant,” says Professor Peters. “But people have been talking for decades about the limitations of such an approach. It’s better than nothing, but it’s not ideal from a number of perspectives. We still have to interpret the actual effects.”
Take a clinical trial of a drug to improve memory as an example.
“It’s one thing to say the memory recall results are statistically significantly better in the drug group than the placebo group, but I would want to know how much better. For example, will they remember, on average, three more words than the placebo group?” explains Prof. Peters.
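The distinction Prof. Peters draws can be made concrete with a small calculation. The sketch below, using made-up recall scores for two hypothetical groups (not data from any actual trial), computes both a significance test and two effect-size measures: the raw mean difference in words recalled and Cohen's d. It uses a normal approximation to the t distribution for the p-value so it runs on the Python standard library alone.

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical recall scores (words remembered) for two groups;
# these numbers are illustrative, not from the research described.
drug    = [14, 15, 13, 16, 15, 14, 17, 15, 16, 14, 15, 16]
placebo = [12, 13, 11, 12, 14, 12, 13, 11, 12, 13, 12, 13]

m1, m2 = mean(drug), mean(placebo)
s1, s2 = stdev(drug), stdev(placebo)
n1, n2 = len(drug), len(placebo)

# Welch's t statistic for the difference in means.
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (m1 - m2) / se

# Two-sided p-value via a normal approximation (adequate for a sketch;
# a real analysis would use the t distribution with Welch's degrees of freedom).
p = 2 * (1 - NormalDist().cdf(abs(t)))

# Effect sizes: the raw mean difference, and Cohen's d using the pooled SD.
mean_diff = m1 - m2
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = mean_diff / pooled_sd

print(f"mean difference = {mean_diff:.2f} words, t = {t:.2f}, "
      f"p = {p:.4f}, Cohen's d = {d:.2f}")
```

The point of the example is the contrast: the p-value only says the difference is unlikely under the null hypothesis, while the mean difference ("about three more words") and Cohen's d say how large that difference actually is, which is the interpretive question the project is investigating.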
Quantitative research using a qualitative approach
Prof. Peters’ research – in collaboration with Dr. Fergal O’Hagan, associate professor of Psychology at Trent, and Dr. Rob Cribbie, professor of Psychology at York University – investigates how scientists use statistical tests to understand whether a finding is important and by how much. Just how do they plan to conduct research on these quantitative values? Using a qualitative approach, of course.
“It’s ironic in many ways because we are using a qualitative approach to understand quantitative science,” says Dr. Peters, who specializes in quantitative methods himself. “We have quite a few people who specialize in qualitative methods here in Trent’s Psychology department, so I think having access to people with different methodological approaches who are open to that type of research is one of our strengths.”
Prof. Peters recently received a Social Sciences and Humanities Research Council (SSHRC) Explore Grant through Trent’s Office of Research and Innovation, through which he will continue to study effect sizes from the perspective of statistical experts, beginning with specialists in quantitative methods. Data will be gathered through a series of detailed, open-ended interviews. There is currently no guiding theoretical framework for how effect sizes should be used and interpreted; the goal is to develop a substantive theory of how researchers actually use and interpret them.
Later, the team plans to gather the perspectives of researchers in other areas of psychology, as well as medical and scientific journal editors, statistics instructors, and policymakers. Prof. Peters also plans to involve students in transcribing and coding the data.