If you’ve ever seen a correlation coefficient, you’ve probably looked at the number and wondered: is that *good*? Is a correlation of -0.73 *good* but not a correlation of +0.58? Just what is a *good* correlation, and what makes a correlation *good*?

The strength of the relationship between two variables is usually expressed by the Pearson product-moment correlation coefficient, denoted by **r**. Pearson correlation coefficients range in value from -1.0 to +1.0, where:

- -1.0 represents a perfect correlation in which all measured points fall on a line having a negative slope
- 0.0 represents absolutely no linear relationship between the variables
- +1.0 represents a perfect correlation in which all measured points fall on a line having a positive slope.
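As a concrete illustration, Pearson’s **r** can be computed directly from its definition. This minimal pure-Python sketch (the `pearson_r` helper is mine, for illustration only; any statistics package will compute this for you) shows the two perfect-correlation extremes:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Points exactly on a line with positive slope give r = +1.0;
# flipping the slope's sign flips r to -1.0.
x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2 * v + 1 for v in x]))   # 1.0
print(pearson_r(x, [-2 * v + 1 for v in x]))  # -1.0
```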

If you have a dataset with more than one variable, you’ll want to look at correlation coefficients.

The Pearson correlation coefficient is used when both variables are measured on a continuous (i.e., interval or ratio) scale. There are several variations of the Pearson product-moment correlation coefficient. The *multiple correlation coefficient*, denoted by **R**, indicates the strength of the relationship between a dependent variable and two or more independent variables. The *partial correlation coefficient* indicates the strength of the relationship between a dependent variable and one or more independent variables with the effects of other independent variables held constant. The adjusted or *shrunken correlation coefficient* indicates the strength of a relationship between variables after correcting for the number of variables and the number of data points. There are also correlation coefficients for variables measured on noncontinuous scales. The Spearman R, for instance, is computed from ordinal-scale ranks.

**Types of Correlation Coefficients.**

So, what is a *good* correlation? It depends on who you ask.

- I once asked a chemist who was calibrating a laboratory instrument to a standard what value of the correlation coefficient she was looking for. “0.9 is too low. You need at least 0.98 or 0.99.” She got the number from a government guidance document.
- I once asked an engineer who was conducting a regression analysis of a treatment process what value of the correlation coefficient he was looking for. “Anything between 0.6 and 0.8 is acceptable.” His college professor told him this.
- I once asked a biologist who was conducting an ANOVA of the size of field mice living in contaminated versus pristine soils what value of the correlation coefficient he was looking for. He didn’t know, but his cutoff was 0.2 based on the smallest size difference his model could detect with the number of samples he had.

Is 0.2 a *good* correlation or does a *good* correlation have to be at least 0.6 or even 0.98? As it turns out, the chemist, the engineer, and the biologist were all right. Those correlations were all *good* for those uses. So, the meaningfulness of a correlation coefficient depends, in part, on the expectations of the person using it.

But how do you know what value of a correlation coefficient you should expect for it to be *good*? One answer is to look at the square of the correlation coefficient, called the coefficient of determination, R-square, or just R². R-square is an estimate of the proportion of variance in the dependent variable that is accounted for by the independent variable(s). It is commonly used to interpret the strength of the relationship between variables and to compare alternative statistical models.

You might be able to decide how good your correlation is from a gut feel for how much of the variability you wanted a relationship to account for. For example, correlation coefficient values between approximately -0.3 and +0.3 account for less than 9 percent of the variance in the relationship between two variables, which might indicate a weak or non-existent relationship. Values between -0.3 and -0.6 or +0.3 and +0.6 account for 9 percent to 36 percent of the variance, which might indicate a weak to moderately strong relationship. Values between -0.6 and -0.8 or +0.6 and +0.8 account for 36 percent to 64 percent of the variance, which might indicate moderately strong to strong relationship. Values between -0.8 and -1.0 or +0.8 and +1.0 account for more than 64 percent of the variance, which might indicate very strong relationship.
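Because the proportion of variance accounted for is just the square of **r**, the band boundaries above are easy to verify. A tiny sketch (the helper name is illustrative):

```python
def variance_explained(r):
    """Proportion of variance accounted for: the coefficient of determination."""
    return r * r

# The cutoffs in the text correspond to these R-square values:
for r in (0.3, 0.6, 0.8):
    print(f"|r| = {r}: accounts for {variance_explained(r):.0%} of the variance")
# |r| = 0.3 -> 9%, |r| = 0.6 -> 36%, |r| = 0.8 -> 64%
```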

That’s only part of the story, though. Two other things you have to do to decide if a correlation is *good* are plot the data and conduct a statistical test.

**Plots**—You should always plot the data used to calculate a correlation to ensure that the coefficient adequately represents the relationship. The magnitude of **r** is very sensitive to the presence of nonlinear trends and outliers. Nonlinear trends in the data cause the magnitude of the relationship to be underestimated. You can often use transformations to straighten any nonlinear patterns you see (https://statswithcats.wordpress.com/2010/11/21/fifty-ways-to-fix-your-data/). Outliers (i.e., data values not representative of the population) that are located perpendicular to the data trend cause the relationship to be underestimated. Outliers parallel to the data trend cause the relationship to be overestimated.
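To see why plotting matters, here is a small sketch (reusing an illustrative `pearson_r` helper of my own) showing how a single outlier perpendicular to a perfect linear trend drags the coefficient down:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Ten points on a perfect line...
x = list(range(10))
y = [2.0 * v for v in x]
print(round(pearson_r(x, y), 3))  # 1.0

# ...plus one outlier far off (perpendicular to) the trend,
# which knocks the correlation down to about 0.57.
print(round(pearson_r(x + [5], y + [40.0]), 3))
```

On a scatterplot the relationship would still look obviously linear; the single unrepresentative point is what wrecks the coefficient.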

**Tests**—Every calculated correlation coefficient is an estimate. The “real” value may be somewhat more or somewhat less. You can conduct a statistical test to determine whether the correlation you calculated is different from zero. If it isn’t, there is no evidence of a relationship between your variables. This test looks at the absolute value of the correlation coefficient and the number of data pairs used to calculate it. The larger the value of the correlation and the greater the number of data pairs, the more likely the correlation will be significantly different from zero. For example, a correlation of 0.5 would be significantly greater than zero based on about 11 data pairs, but a correlation of 0.1 wouldn’t be significantly different from zero even with 380 data pairs. That’s why all statistical software outputs the number of data pairs and the test probability with a correlation. With some software, you can also calculate a confidence interval around your estimate to see if the interval includes the value you set as a goal. But one way or the other, you have to consider the variability of your calculated estimate to decide if the correlation is *good*.
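One standard form of this test converts **r** into a t statistic with n − 2 degrees of freedom. This sketch (the helper name is mine) reproduces the two examples above; note that r = 0.1 with 380 pairs falls just short of the usual two-tailed 0.05 critical value of roughly 1.97:

```python
import math

def t_statistic(r, n):
    """t statistic for testing H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# A small correlation with many data pairs can come close to significance,
# while a much larger correlation with few pairs is in the same boat.
print(round(t_statistic(0.1, 380), 2))  # 1.95 -- just under the ~1.97 cutoff
print(round(t_statistic(0.5, 11), 2))   # 1.73
```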

Correlation coefficients have a few other pitfalls to be aware of. For example, the value of a multiple or partial correlation coefficient may not necessarily meet your definition of a *good* correlation even if it is significantly different from zero. That’s because the calculated values will tend to be inflated if there are many variables but only a few data pairs, hence the need for that shrunken correlation coefficient. Then there’s the paradox that a large correlation isn’t necessarily a *good* thing. If you are developing a statistical model and find that your predictor variables are highly correlated with your dependent variable, that’s great. But if you find that your predictor variables are highly correlated with each other, that’s not good, and you’ll have to deal with this multicollinearity in your analysis. Finally, if you’re calculating many correlation coefficients from a large data set, you might find that the number of data pairs is different for each calculation because of missing data. Some statisticians believe it is acceptable to compare correlations calculated with different numbers of data pairs and other statisticians believe it is unwarranted, nonsensical, dishonest, fraudulent, heinous, and sickeningly evil.
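The shrinkage correction mentioned above has a common form, the one behind most software’s “adjusted R-square.” A sketch under that assumption (the helper name is mine):

```python
def adjusted_r_squared(r2, n, k):
    """Shrunken R-square: corrects an R-square computed from n data
    points for the number of predictor variables, k."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With only a few data points and many predictors, a seemingly
# impressive R-square of 0.80 shrinks substantially.
print(round(adjusted_r_squared(0.80, 15, 6), 3))  # 0.65
```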

**What to Look for in Correlations.**

What makes a *good* correlation, then, depends on what your expectations are, the value of the estimate, whether the estimate is significantly different from zero, and whether the data pairs form a linear pattern without any unrepresentative outliers. You have to consider correlations on a case-by-case basis. Remember too, though, that “no relationship” may also be an important finding.

Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order **Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data Analysis** at Wheatmark, amazon.com, barnesandnoble.com, or other online booksellers.

Hey can I copy and paste this post on my web site? What references must I give? You might give this info for other people too.

Yes, just link back to the blog.

You must participate in a contest for probably the greatest blogs on the web. I’ll recommend this web site!

I regret that I have no handbags, shoes or other products to offer in my comment. However, I did enjoy your Taxonomy of Correlation Coefficients Table very much! This was my very first exposure to ordinal, even binary-flavored correlation coefficients. The table-formatted comparisons were particularly useful. Thank you.

Pingback: The Best Super Power of All | Stats With Cats Blog

Great article dude. Nicely explained. Thanks for sharing. I will be visiting this page soon again.

Thank you. That was useful, especially the comparison of acceptable values across the different sciences.

Hi Charlie,

This post is really useful for me as I am cracking my head on the interpretation of correlation coefficient. Like what you have mentioned, different people have different views on good correlations and at last I am confused. I need to put down the references on my scientific paper, so would you mind to share with me literatures that you referred to for the interpretation of correlation coefficient on “correlation coefficient values between approximately -0.3 and +0.3 account for less than 9 percent of the variance in the relationship between two variables, which might indicate a weak or non-existent relationship. Values between -0.3 and -0.6 or +0.3 and +0.6 account for 9 percent to 36 percent of the variance, which might indicate a weak to moderately strong relationship. Values between -0.6 and -0.8 or +0.6 and +0.8 account for 36 percent to 64 percent of the variance, which might indicate moderately strong to strong relationship. Values between -0.8 and -1.0 or +0.8 and +1.0 account for more than 64 percent of the variance, which might indicate very strong relationship.”

Much appreciated.

sn

Those guidelines are just that — guidelines. I heard these guidelines, or something like them, thirty years ago in grad school, and they’ve proved to be useful. They’re based on how much of the variance in a bivariate relationship you want to account for. If you are conducting exploratory analyses and have no particular expectations, they’re as good a place to start as any. If you’re trying to confirm a prior analysis, you would want something more stringent. You can find other guidelines on the Internet, such as: http://www.dmstat1.com/res/TheCorrelationCoefficientDefined.html, http://www.dummies.com/how-to/content/how-to-interpret-a-correlation-coefficient-r.html, http://sites.stat.psu.edu/~jls/stat100/lectures/lec16.pdf, http://www.sagepub.com/salkind2study/articles/14Article02.pdf, http://sahs.utmb.edu/pellinore/intro_to_research/wad/correlat.htm, and the Interpretation of the size of a correlation section at https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient.

Short and sweet article! I hate cats but love the cat-pictures that explain correlation. Great reference for the social sciences (effect sizes in T-shirt sizes: small, medium, large):

Cohen J. (1988) Statistical Power Analysis for the Behavioral Sciences – Second Edition, New York: Lawrence Erlbaum Associates.

Cohen J. (1992) A Power Primer. Psychological Bulletin 112: 155-159.

Pingback: GGPLOT Graphs for NFL Stats « Hearing the Oracle

Thanks for this useful post. I included a link to you from this recent post:

http://hearingtheoracle.com/2013/07/02/another-r-layering-example/

Just FYI.

Pingback: NFL Nerds 2013 : Week 1 « Hearing the Oracle

Hello Charlie

I too enjoyed the cat pictures! I have a question which may indicate how little I remember statistics from my college years. I work in a clinical laboratory. We are required to “correlate” analyzers which produce the same type of results. I came across your blog while trying to get information about what the leeway should be. I know 10% agreement is normally used, but I wasn’t sure if 20% would be too much. I am familiar with the coefficient of variation; however, we use percent agreement between devices. Every time I “google” correlation, I get coefficient of variation. HELP!!!

Good question! There are many types of correlation coefficients. The most widely known is the Pearson product-moment correlation coefficient, which is used for assessing the magnitude of the relationship between two variables that are measured on continuous scales. There are other types of correlation coefficients for variables that are measured on other scales.

What you’re doing with “percent agreement” involves “correlation” of variables measured on a binary (two-category) scale. Percent agreement is the percent of results for two devices (one a reference device and the other a device being evaluated) that are consistent. If you were to run ten tests of two devices and in 4 cases both devices showed @, in 3 cases both devices showed #, in 2 cases the reference device showed @ and the other showed #, and in 1 case the reference device showed # and the other showed @, then the percent agreement would be (4+3)/(4+3+2+1), or 70%.
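That arithmetic is simple enough to sketch in a few lines of Python (the helper is just for illustration):

```python
def percent_agreement(results):
    """results: list of (reference, evaluated) outcome pairs from paired runs."""
    agree = sum(1 for ref, test in results if ref == test)
    return 100.0 * agree / len(results)

# The ten runs described above: 4 agree on "@", 3 agree on "#",
# and 3 disagree.
runs = [("@", "@")] * 4 + [("#", "#")] * 3 + [("@", "#")] * 2 + [("#", "@")]
print(percent_agreement(runs))  # 70.0
```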

You can download the PowerPoint presentation *Assessing Agreement for Diagnostic Devices*, by Bipasa Biswas (Mathematical Statistician, Division of Biostatistics, Office of Surveillance and Biometrics, Center for Devices and Radiological Health, FDA), presented at the FDA/Industry Statistics Workshop, September 28-29, 2006, at:

http://www.amstat.org/meetings/fdaworkshop/presentations/2006/Assessing%20Agreement%20for%20diagnostic%20tests%202006.ppt

Page 7 has a description of how percent agreement is calculated for a comparison of devices. There are other ways to compare devices that are discussed on the following pages.

I hope this helps.

Pingback: Visualizing Airport Delay Correlations with Google BigQuery and the Maps API | Directory Net

Pingback: Visualizing airport delay correlations with Google BigQuery and Maps API | InfoLogs

Hmm, is anyone else having problems with the images on this blog loading? I’m trying to find out if it’s a problem on my end or if it’s the blog. Any feedback would be greatly appreciated.

Just wanted to say thanks for making someone temporarily frustrated with a data situation smile!!

If you suspect there to be a significant correlation between two ordinal variables, and are using the Spearman test, is there a generally accepted minimum number of pairs of data you should use? I have heard 15, but have not seen this written anywhere.

Enjoyed reading your well written and researched blog, thank you.

As with all statistical tests, it’s a matter of resolution. Here’s a link that provides critical values for a test that a Spearman correlation is different from zero:

http://www.sussex.ac.uk/Users/grahamh/RM1web/Rhotable.htm

There’s nothing magical about 15 data pairs except that they’re better than fewer data pairs.
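For what it’s worth, the Spearman coefficient itself is easy to compute when there are no tied ranks, using the classic rank-difference shortcut. A Python sketch under that no-ties assumption (the helper name is mine):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the rank-difference shortcut.
    Assumes no tied values in either sequence."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Any monotone relationship, even a nonlinear one, gives rho = 1.0.
x = [1, 2, 3, 4, 5]
print(spearman_rho(x, [v ** 3 for v in x]))  # 1.0
```

Note how the cubic relationship, which would pull a Pearson **r** below 1, still gives a perfect Spearman coefficient because only the ordering matters.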

Pingback: What is a Good R-Squared Value or Is the Fit Good for a Trend Line? - Critical to Success

Thank you for your blog post and your book. I am conducting some data mining research for my doctoral thesis where I’m testing to see if different data sets about characteristics of nations have a correlation with each other, and plan to test several types of potential correlations. I would like to use a standardized correlation coefficient for each of my tests, whether it is linear or non-linear (I will probably use Excel’s solver to do the fitting of the data to various potential curves). Most of my data sets can be modeled as continuous data, but there may be some cases where the data is ordinal (such as rankings of nations that don’t provide underlying data). I have started this process with using the Coefficient of Determination (r^2), but before I go much further, I want to make sure it is the best correlation coefficient to use. My data sets will generally have between 30 to 250 data points (given there are slightly less than 250 UN recognized countries of the world). Would it be better to use a different type of correlation coefficient (like the Spearman Rho) for all of my data, given that some of my data sets are only ordinal? Or is there a different correlation coefficient that works reasonably well for both types of data? Thank you for any wisdom you can provide!!!

Pingback: Why You Don’t Always Get the Correlation You Expect | Stats With Cats Blog

Pingback: Feline secrets of correlation | MVM learning

Pingback: Olivenöl führt zu Schwangerschaftsdiabetes? Und anderer statistischer Unfug. | Leichter Gesund Leben

Super article! I had a question though. How good is correlation to be used for comparing curves? For example if I had 2 life cycle curves for 2 different products, is correlation between the 2 curves a good measure to compare them?