Correlation coefficients can assume values between −1.00 and +1.00. Usually, researchers compute a correlation and then check to see if it's significantly different from 0.00. Recently, a published report displayed several correlations, each of which had been tested to see if it was significant. One of the computed correlations was equal to 0.01. Amazingly, it was significantly different from 0.00, with p < .05. How could this be? The sample size was gigantic (27,687), that's how. The enormous amount of data allowed the researchers to say, correctly, that the r of 0.01 was statistically significant. However, it clearly had no practical significance whatsoever.
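The arithmetic behind this can be sketched with the standard t-test for a correlation coefficient, t = r√(n − 2)/√(1 − r²). A minimal sketch (the report's exact test is not stated, so this assumes the usual test of H₀: ρ = 0 and uses a normal approximation to the t distribution, which is accurate at sample sizes this large):

```python
from math import sqrt
from statistics import NormalDist

def t_for_r(r, n):
    """t statistic for testing H0: rho = 0 with sample correlation r."""
    return r * sqrt(n - 2) / sqrt(1 - r * r)

def min_significant_r(n, alpha=0.05):
    """Smallest r reaching two-tailed significance at level alpha
    (normal approximation to the t distribution; fine for large n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z / sqrt(z * z + n - 2)

n = 27_687
t = t_for_r(0.01, n)
# Two-tailed p-value via the normal approximation.
p_two = 2 * (1 - NormalDist().cdf(t))
print(f"t = {t:.3f}, two-tailed p = {p_two:.3f}")

# The 'significance bar' for r collapses toward zero as n grows.
for n_i in (100, 1_000, 27_687):
    print(f"n = {n_i:>6}: smallest 'significant' r = {min_significant_r(n_i):.4f}")
```

With n = 27,687, any r of roughly 0.012 or larger clears the two-tailed α = .05 bar (roughly 0.010 one-tailed), so an r of 0.01 can come out "significant" even though it is, for all practical purposes, zero.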
The moral here should be clear. If you are on the receiving end of a researcher's statistical summary, and if he or she points out that one of the study's correlations is "significant," don't let that single fact cause you to think that a strong relationship has been uncovered. Weak relationships can be statistically significant if n is massive. And if you, yourself, are the researcher who has collected data and done the statistical analysis, don't look only at the magnitude of p and then get excited if it's small enough to beat your alpha level. If you fail to pay attention to the actual size of the correlation coefficient (or, better yet, to the size of r2), you soon may find yourself being accused—legitimately—of "making a mountain out of a molehill."
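The r² point is easy to make concrete: squaring r gives the proportion of variance in one variable accounted for by the other. A quick comparison (illustrative r values, not figures from the study in question):

```python
# r-squared: proportion of shared variance, shown as a percentage.
for r in (0.01, 0.10, 0.30, 0.50):
    print(f"r = {r:.2f}  ->  r^2 = {r**2:.4f}  ({r**2:.2%} of variance)")
```

The "significant" r of 0.01 accounts for one one-hundredth of one percent of the variance; even an r of 0.30 accounts for only 9%.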