Category Archives: Misconceptions

WHAT SEEMS TO BE TRUE OFTEN ISN’T

Many People Get Fooled

The Motley Fool provides advice on money management and investing. Some of its advice, however, deserves a much wider audience. For example, the following 20-word tip from the “Fool’s School” should be memorized by everyone who encounters statistically based claims or findings in politics, medicine, psychology, education, and every other arena of life:

“Never blindly accept what you read. Think critically about not just words, but numbers. They’re not always what they seem.”

Here are 5 examples illustrating how numbers in statistics often do NOT mean what they seem to indicate:

Example A

If the 14 players on a basketball team have a median height of 6 feet 6 inches, it might seem that 7 of those athletes must be shorter than 6’6” whereas 7 must be taller than that. Wrong! (See the sketch following Example E.)

Example B

If the data on 2 variables produce a correlation of +.50, it might seem that the strength of the measured relationship is exactly midway between being ultra weak and ultra strong. Not so!

Example C

If a carefully conducted scientific survey indicates that Candidate X currently has the support of 57% of likely voters with a margin of error of plus or minus 3 percentage points, it might seem that a duplicate survey conducted on the same day in the same way would show Candidate X’s support to be somewhere between 54% and 60%. Bad thought!

Example D

If a null hypothesis is tested and the data analysis indicates that p = .02, it might seem that there’s only a 2% chance that the null hypothesis is true. Nope!

Example E

If, in a multiple regression study, the correlation between a particular independent variable and the dependent variable is r = 0.00, it might seem that this independent variable is totally useless as a predictor. Not necessarily!
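Two of these examples are easy to check with a short computation. Here is a minimal Python sketch, using made-up numbers (the heights and the regression variables x1, x2, and y are illustrative assumptions, not real data), showing why Example A and Example E can defy intuition:

import numpy as np
from statistics import median

# Example A: the median can be 78 inches (6'6") without a 7-below / 7-above split,
# because several players can share the median height.
heights = [74, 75, 76, 77, 78, 78, 78, 78, 78, 79, 80, 81, 82, 83]   # 14 players, inches
print(median(heights))               # 78 -> 6 ft 6 in
print(sum(h < 78 for h in heights))  # only 4 players are shorter
print(sum(h > 78 for h in heights))  # only 5 players are taller

# Example E: a predictor whose correlation with y is essentially zero can still be
# indispensable in multiple regression (a classic "suppressor" construction).
rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
b = rng.standard_normal(10_000)
x1 = a
x2 = a + b
y = b                                               # note: y = x2 - x1 exactly
print(round(float(np.corrcoef(x1, y)[0, 1]), 3))    # close to zero
design = np.column_stack([x1, x2, np.ones_like(y)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(coef, 3))                            # about [-1, 1, 0]: x1 matters after all

In the first part, five players share the median height, so only 4 are shorter and 5 are taller; in the second part, x1 correlates essentially zero with y, yet y is predicted perfectly once x1 is used together with x2.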

The Motley Fool’s admonition, quoted above, contains 20 words. If you can’t commit the entirety of this important warning to memory, here’s a condensed version of it:

“Numbers. They’re not always what they seem.”


THE “MEN & HATS” PROBABILITY PARADOX

Misconception #5

Imagine that each of N=6 men has a hat. Also imagine that these hats are identical except that each man’s name is written inside his hat. Finally, imagine that the 6 hats are taken up and then later, because they look alike, randomly returned to the men.

As the 6 hats are returned to the 6 men, there’s a chance that no man will receive his own hat.  The chance of this happening is a tad greater than 1 in 3. To be more precise, the probability (to 3 decimal places) of all 6 hats going to the wrong individuals is .368.

Now, let’s add a new wrinkle to this imaginary situation. Suppose the number of men (each with a hat) is greater than 6. What if there are 7 men? Or 8? Or more? As N increases, what happens to the probability that no hat will be returned to its proper owner? Some people guess that this probability goes up as N increases. Others guess that this probability goes down.

Both thoughts are wrong.

That’s because the likelihood of no correct “match” is virtually the same for any N > 5, whether N = 6 or N = 600 or N = 600,000!

The actual probability (p) of having no hat returned to its proper owner is given by this formula:

p = 1/(2!) – 1/(3!) + 1/(4!) – 1/(5!) + …

where there are N-1 terms on the right side of the equation. With the symbol “!” standing for “factorial,” we could rewrite the above formula as

p = 1/2 – 1/6 + 1/24 – 1/120 + …

As either of the above formulas shows, each additional term on the right side of the equation has a smaller and smaller impact on the value of p, and the drop-off of this impact is sharp, not gradual. In fact, as N grows, p converges rapidly to 1/e ≈ .367879, which is why the values level off in the following chart showing p, to 6 decimal places, for N = 2, 3, 4, … , 10.

N = 2      p = .500000
N = 3      p = .333333
N = 4      p = .375000
N = 5      p = .366667
N = 6      p = .368056
N = 7      p = .367857
N = 8      p = .367882
N = 9      p = .367879
N = 10     p = .367879
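If you’d like to verify the chart (and the leveling-off at 1/e) for yourself, here is a minimal Python sketch; the function name and the choice of 100,000 shuffles are simply illustrative:

import math
import random

def p_no_match(n):
    # Probability that none of n hats returns to its owner, via the series above.
    return sum((-1) ** k / math.factorial(k) for k in range(2, n + 1))

for n in range(2, 11):
    print(n, round(p_no_match(n), 6))    # reproduces the chart, approaching 1/e = .367879...

# A quick random-shuffle check for N = 6
trials = 100_000
no_match = 0
for _ in range(trials):
    hats = list(range(6))
    random.shuffle(hats)
    if all(hats[i] != i for i in range(6)):
        no_match += 1
print(no_match / trials)                 # about .368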

It should be noted that this puzzle is sometimes referred to as “Montmort’s Problem.” Montmort was a French mathematician who studied the probability behind a game called “Treize.” (Treize is the French word for 13.) In its original form, the puzzle dealt with a jar containing 13 identical balls numbered 1 through 13. Balls are randomly pulled out of the jar, one at a time, and the question is this: “What’s the probability that the 1st ball taken from the jar will not be the ball numbered 1, that the 2nd ball will not be the ball numbered 2, and so on, with the end result being that no number on any ball matches the order in which the ball is removed from the jar?”

CAN “INTACT” COMPARISON GROUPS BE EQUATED STATISTICALLY?

Misconception #4

Suppose pre- and posttest data are available for the individuals in 2 intact groups, such as a classroom of kids in school X and a classroom of kids in school Y. If pretest performance is used as a covariate (i.e., “control”) variable in an analysis of covariance, are the 2 groups “equated” such that the comparison of the groups’ posttest means is fair? Many people think ANCOVA achieves this goal. It doesn’t. Even with several “control” variables, ANCOVA can’t truly equate the groups.

There are two reasons why the analysis of covariance cannot equate intact groups.

First, theoretical work in statistics has shown that ANCOVA’s adjusted means are biased whenever the comparison groups have different population means on the covariate variable. In other words, when the groups’ population means on the covariate are dissimilar, the sample-based adjusted means on the dependent variable are not accurate estimates of the corresponding adjusted means in the population.
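One way to see the problem concretely is to simulate it. The sketch below is only an illustration built on assumed numbers (two intact groups that differ on an unmeasured ability, a pretest that measures that ability with error, and no treatment effect at all); it is not the theoretical derivation referred to above, but it shows the same practical consequence: the “adjusted” group difference does not shrink to zero.

import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Latent ability differs between the intact groups; there is NO treatment effect.
ability_x = rng.normal(0.0, 1.0, n)            # school X
ability_y = rng.normal(0.5, 1.0, n)            # school Y starts out higher
ability = np.concatenate([ability_x, ability_y])

pre   = ability + rng.normal(0, 1.0, 2 * n)    # pretest covariate, measured with error
post  = ability + rng.normal(0, 0.5, 2 * n)    # posttest outcome
group = np.concatenate([np.zeros(n), np.ones(n)])

# ANCOVA expressed as a regression of posttest on group + pretest.
design = np.column_stack([np.ones(2 * n), group, pre])
coef, *_ = np.linalg.lstsq(design, post, rcond=None)
print(round(float(coef[1]), 2))    # "adjusted" group difference: about 0.25, not 0

The adjustment removes only part of the preexisting difference (here roughly half, because the pretest is an imperfect measure of the ability on which the groups differ), so the groups are not truly equated.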

Besides ANCOVA’s statistical inability to generate unbiased adjusted means when nonrandomly formed groups are compared, there is a second, logical reason why you should be on guard whenever you come across a research report in which ANCOVA was used in an effort to equate groups created without random assignment. Simply stated, the covariate variable(s) used by the researcher may not address one or more important differences between the comparison groups. Here, the problem is that a given covariate variable (or set of covariate variables) is limited in scope. For example, the covariate variable(s) used by the researcher might address knowledge but not motivation (or vice versa).

Consider, for example, the many studies conducted in schools or colleges in which one intact group of students receives one form of instruction whereas a different intact group receives an alternative form of instruction. In such studies, it is common practice to compare the two groups’ posttest means via an analysis of covariance, with the covariate being IQ, GPA, or score on a pretest. In the summaries of these studies, the researchers may say that they used ANCOVA “to control for initial differences between the groups.” However, it is debatable whether the initial differences that matter are fully captured by any of the three covariates mentioned (or even by all three used jointly). In these and many other studies, students’ motivation plays no small part in how well they perform, and none of those covariates measures it.


HOW CONFIDENT CAN YOU BE IN A CONFIDENCE INTERVAL?

Suppose you use data from a random sample to build a 95% CI around the sample’s mean. Next, suppose you put that sample back into the population. Finally, suppose you get ready to extract a 2nd random sample from the same population, with plans to use the new data to compute just the 2nd sample’s mean. How confident can you be that your 2nd sample’s mean will lie somewhere between the end points of the 1st sample’s 95% CI?

Did you say or think: “95% confident”?

If you did, you’re a bit more confident than you actually should be!

If your 1st sample’s mean were to match perfectly the mean of the population, you could be 95% confident that the 2nd sample’s mean would turn out to be “inside” the 1st sample’s 95% CI. That’s because the end points of your CI would coincide with the 2 points in a sampling distribution of the mean that serve to bookend the middle 95% of that distribution’s means. Select a 2nd sample, and its mean would have a 95% chance of landing between those bookends.

Your 1st sample, however, is not likely to have a mean that matches μ perfectly. This will cause the 1st sample’s 95% CI to be “off-center” in the sampling distribution of means: more than half of the CI will be located on the high (or low) side of that distribution’s midpoint, the true population mean. Because the CI’s end points no longer coincide with the points that bookend the middle 95% of the sampling distribution of means, the 95% CI captures less than 95% of those means.

To prove to yourself that a 95% CI based on one sample’s data does not predict, with 95% accuracy, what a 2nd sample’s mean will be like, answer these 2 little questions: (1) How much of a normal distribution lies between the z-score points of +1.96 and –1.96? (2) How much of a normal distribution lies between any other pair of z-scores that are that same distance (3.92) apart from each other?
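You can also check the claim directly by simulation. The sketch below uses assumed values that do not come from the post itself: a normal population with μ = 100 and σ = 15, samples of n = 25, and σ treated as known so that each interval is the sample mean ± 1.96·σ/√n.

import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, trials = 100.0, 15.0, 25, 100_000
half_width = 1.96 * sigma / np.sqrt(n)        # sigma treated as known

captures = 0
for _ in range(trials):
    mean1 = rng.normal(mu, sigma, n).mean()   # 1st sample -> build its 95% CI
    mean2 = rng.normal(mu, sigma, n).mean()   # 2nd sample -> just compute its mean
    captures += (mean1 - half_width) <= mean2 <= (mean1 + half_width)

print(captures / trials)   # about .83 -- noticeably less than .95

The long-run capture rate works out to roughly 83%, which is exactly the point of the two z-score questions above: an interval 3.92 z-units wide contains 95% of a normal distribution only when it is centered on the mean.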


RANDOM SAMPLES: WHAT DO THEY LOOK LIKE?

Extract a perfectly random sample from a population, and what will you get? Many people think such a sample will be just like the population, but smaller. They expect the sample to be a “miniature population.” Bad thought. Most likely, the numerical characteristics of the sample will not match exactly those of the population.

If you randomly select 16 people out of a population containing as many males as females, what kind of gender split should you expect in the sample? Don’t predict 8 males & 8 females! That’s because the odds are about 4-to-1 AGAINST having the sample be perfectly balanced gender-wise.

To prove to yourself that a sample is not likely to end up being a small mirror-image of the population, conduct a little coin-flipping experiment. You can do this quickly via a computer simulation available through the link below. After you get to the “Simulating Coin Tossing” applet created by Allan Rossman & Beth Chance, click on the gray, rectangular button labeled “16 Tosses.” This will cause the computer to (a) flip 16 fair coins, (b) show you, simultaneously, the result of each flip, and (c) put a dot in the graph to indicate the number of heads contained in the sample. Click the “16 Tosses” button several more times, and watch what happens as additional dots are put into the graph. You will see, especially after clicking the “Show Tallies” button, that the majority of samples produced something other than 8 heads and 8 tails, even though the coins being flipped were fully unbiased.

Coin-Flipping Simulation
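If you prefer code to the applet, here is a minimal Python sketch of the same experiment (the choice of 100,000 repetitions is arbitrary):

import random

trials = 100_000
exactly_8 = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(16))   # flip 16 fair coins
    if heads == 8:
        exactly_8 += 1

print(exactly_8 / trials)   # about .196 -- roughly 4-to-1 odds against a perfect 8/8 split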


THE STANDARD DEVIATION

Many people think a standard deviation indicates the “standard” amount by which individual numbers deviate from the group’s mean. Specifically, they think an SD is computed as the average (i.e., arithmetic mean) of the deviation scores, disregarding whether the original scores are above or below the mean. Not so. For most groups of numbers, the SD is roughly 1.25 times as large as this “average deviation from the mean” (the ratio is exactly √(π/2) ≈ 1.25 when the scores are normally distributed).

Consider, for example, this population of 10 scores: 1, 2, 3, 4, 5, 5, 6, 7, 8, and 9. Disregarding sign, the average deviation from the mean = 2.00. However, the SD = approximately 2.45. The SD is larger because it gives greater weight to scores that lie farther away from the mean. It does this by squaring the deviations. The SD is computed as the “root-mean-squared-deviation,” with these 4 words explaining, in reverse order, what you must do to calculate the SD: (1) figure out how far each original score deviates from the mean, (2) square each of these deviation scores, (3) take the mean of the squared deviations, (4) compute the square root of the result arrived at in Step 3.
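Here is a minimal Python sketch that carries out both calculations for the 10-score population above (the variable names are mine):

import math

scores = [1, 2, 3, 4, 5, 5, 6, 7, 8, 9]
mean = sum(scores) / len(scores)                                       # 5.0

avg_dev = sum(abs(x - mean) for x in scores) / len(scores)             # 2.00
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))     # about 2.449

print(avg_dev, round(sd, 3), round(sd / avg_dev, 2))                   # ratio here is about 1.22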

For more information about the standard deviation, go to http://en.wikipedia.org/wiki/Standard_deviation.
