You Need Statistics to Make Wine

The American Statistical Association has identified 146 college majors that require statistics to complete a degree.

You probably wouldn’t be surprised that statistics is required for degrees in mathematics, engineering, physics, astronomy, chemistry, meteorology, and even biology and geology. Most business-related degrees also require statistics. Agronomy degrees require statistics as do degrees in dairy science, aquatic sciences, and veterinary sciences. Degrees for medical professions such as nursing, nutrition, physical therapy, occupational health, pharmacy, and speech-language-hearing all require statistics. And, many social science degrees require statistics, including economics, psychology, sociology, anthropology, political science, education, and criminology. What may be surprising though is that statistics is required for some degrees in history, archaeology, geography, culinary science, viticulture (grape horticulture), journalism, graphic communications, library science, and linguistics. Pretty much everybody needs to know statistics.


Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data Analysis at online booksellers.


Dare to Compare – Part 2

Part 1 of Dare to Compare summarized several fundamental topics about statistical comparisons.

Statistical comparisons, or statistical tests as they are usually called, involve populations, groups of individuals or items having some fundamental commonalities. The members of a population also have one or more characteristics, called phenomena, which are what is compared in the populations. You don’t have to measure the phenomena in every member of the population. You can take a representative sample. Statistical tests can involve one population (comparing a population phenomenon to a constant), two populations (comparing a population phenomenon to the phenomenon in another population), or three or more populations. You can also compare just one phenomenon (called univariate tests) or two or more phenomena (called multivariate tests).

Parametric statistical tests compare frequency distributions, the number of times each value of the measured phenomena appears in the population. Most tests involve the Normal distribution, in which the center of the distribution of values is estimated by the average, also called the mean. The variability of the distribution is estimated by the variance or the standard deviation, the square root of the variance. The mean and standard deviation are called parameters of the Normal distribution because they are in the mathematical formula that defines the form of the distribution. Formulas for statistical tests usually involve some measure of accuracy (involving the mean) divided by some measure of precision (involving the variance). Most statistical tests focus on the extreme ends of the Normal distribution, called the tails. Tests of whether population means are equal are called non-directional, two-sided, or two-tailed tests because differences in both tails of the Normal distribution are considered. Tests of whether population means are less than or greater than are called directional, one-sided, or one-tailed tests because the difference in only one tail of the Normal distribution is considered.
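As a concrete sketch of these ideas (mine, not from the original post; the heights and the known sigma are made-up numbers), here is a one-population z-test in Python showing how the one-tailed p-value is half the two-tailed p-value:

```python
import math

def z_test_one_sample(sample_mean, mu0, sigma, n, tails=2):
    # Accuracy (difference of means) divided by precision (standard error)
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # P(Z > |z|) for a standard Normal, via the complementary error function
    p_one_tail = 0.5 * math.erfc(abs(z) / math.sqrt(2))
    return z, tails * p_one_tail

# Hypothetical example: sample of 64 heights, known sigma = 8 cm
z, p_two = z_test_one_sample(sample_mean=172.0, mu0=170.0, sigma=8.0, n=64, tails=2)
_, p_one = z_test_one_sample(sample_mean=172.0, mu0=170.0, sigma=8.0, n=64, tails=1)
```

Here z = 2.0, so the two-tailed p-value is about 0.046 and the one-tailed p-value about 0.023; the one-sided test reaches significance more easily, but only for a difference in the predicted direction.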

Statistical tests that don’t rely on the distributions of the phenomenon in the populations are called nonparametric tests. Nonparametric tests often involve converting the data to ranks and analyzing the ranks using the median and the range.

The nice thing about statistical comparisons is that you don’t have to measure the phenomenon in the entire population at the same place or the same time, and you can then make inferences about groups (populations) instead of just individuals or items. What may even be better is that if you follow statistical testing procedures, most people will agree with your findings.

Now for even more …


There are just a few more things you need to know before conducting statistical comparisons.

You start with a research hypothesis, a statement of what you expect to find about the phenomenon in the population. From there, you create a null hypothesis, a mathematical statement of the opposite of the research hypothesis. Statistical comparisons are sometimes called hypothesis tests. The null hypothesis is usually also written in terms of no change or no difference. For example, if you expect that the average heights of students in two school districts will be different because of some demographic factors (your research hypothesis), then your null hypothesis would be that the means of the two populations are equal.

When you conduct a statistical test, the result does not mean you prove your hypothesis. Rather, you can only reject or fail to reject the null hypothesis. If you reject the null hypothesis, you adopt the alternative hypothesis. This would mean that it is more likely that the null hypothesis is not true in the populations. If you fail to reject the null hypothesis, it is more likely that the null hypothesis is true in the populations.

The results of statistical tests are sometimes in error, but fortunately, you have some control over the rates at which errors occur. There are four possibilities for the results of a statistical test.

  • True Positive – The statistical test rejects a null hypothesis that is false in the population.
  • True Negative – The statistical test fails to reject a null hypothesis that is true in the population.
  • False Positive – The statistical test rejects a null hypothesis that is true in the population. This is called a Type I error and is represented by α. The Type I error rate you will accept for a test is called the significance level; its complement, 1−α, is the confidence. Typically the significance level is set at 0.05, a 5% Type I error rate, although sometimes 0.10 (more acceptable error) or 0.001 (less acceptable error) are used.
  • False Negative – The statistical test fails to reject a null hypothesis that is false. This is called a Type II error and is represented by β. The ability of a particular comparison to avoid a Type II error is represented by 1-β and is called the Power of the test. Typically, power should be at least 0.8 for a 20% Type II error rate.
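The Type I error rate can be checked by simulation. This sketch (mine, not the author's; all numbers hypothetical) draws many samples from a population where the null hypothesis really is true and counts how often a two-tailed z-test at α = 0.05 falsely rejects it:

```python
import math
import random

random.seed(42)

def rejects_null(sample, mu0, sigma, alpha=0.05):
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    p_two_tailed = math.erfc(abs(z) / math.sqrt(2))
    return p_two_tailed < alpha

# The null hypothesis is true here: every sample comes from N(170, 8)
trials = 2000
false_positives = sum(
    rejects_null([random.gauss(170, 8) for _ in range(30)], mu0=170, sigma=8)
    for _ in range(trials)
)
type_i_rate = false_positives / trials   # should hover near 0.05
```

Run it and the observed false-positive rate lands near the 5% you set, which is the practical meaning of controlling α.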

When you design a statistical test, you specify the hypotheses, including the number of populations and directionality, the type of test, the confidence, and the number of observations in your representative sample of the population. From the sample, you calculate the mean and standard deviation. You calculate the test statistic and compare it to standard values in a table based on the distribution. If the test statistic is greater than the standard value, you reject the null hypothesis, and the comparison is said to be significant. If the test statistic is less than the standard value, you fail to reject the null hypothesis and the comparison is said to be nonsignificant. Most statistical software now provides exact probabilities, called p-values, of obtaining a test statistic at least as extreme as the one observed if the null hypothesis were true, so no tables are necessary.

After you conduct the test, there are two pieces of information you need to determine – the sensitivity of the test to detect differences, called the effect size, and the power of the test. The power of the test will depend on the sample size, the confidence, and the effect size. The effect size also provides insight into whether the test results are meaningful. Meaningfulness is important because a test may be able to detect a difference far smaller than what might be of interest, such as a difference in mean student heights of less than a millimeter. Perhaps surprisingly, the most common reason for being able to detect differences that are too small to be meaningful is having too large a sample size. More samples are not always better.
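The oversized-sample point falls straight out of the z formula: divide a fixed, tiny difference by σ/√n and the statistic grows with n. A quick sketch (hypothetical numbers, not from the post):

```python
import math

def z_statistic(difference, sigma, n):
    # Fixed difference between a sample mean and the hypothesized mean
    return difference / (sigma / math.sqrt(n))

# A 0.5 mm difference in mean height (sigma = 80 mm) is meaningless,
# yet with enough observations it becomes "statistically significant"
for n in (100, 10_000, 1_000_000):
    z = z_statistic(0.5, 80, n)
    print(n, "significant" if z > 1.96 else "not significant")
```

Only the million-observation sample crosses the 1.96 cutoff, which is exactly why a significant result for a sub-millimeter difference is detectable but not meaningful.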


It seems like there are hundreds of kinds of statistical tests, and in a way there are, but most are just variations of the same concept: a measure of accuracy divided by a measure of precision. In most tests, you calculate a test statistic and compare it to a standard value. If the test statistic is greater than the standard, the difference is larger than might have been expected by chance and is said to be statistically significant. For the most part, statistical software now reports exact probabilities for statistical tests instead of relying on manual comparisons.

Don’t worry too much about remembering formulas for the statistical tests (unless a teacher tells you to). Most testing is done using software with the test formulas already programmed. If you need a test formula, you can always search the Internet.

Tests depend on the scales of the data to be used in the statistical comparison. Usually, the dependent variable (the measurements of the phenomenon) is continuous and the independent variable (the divisions of the populations being tested) is categorical for parametric tests. Sometimes there are also grouping variables used as independent variables, called effects. In advanced designs, continuous-scale variables used as independent variables are called covariates. Some other scales of measurement for the dependent variable, like binary scales and restricted-range scales, require special tests or test modifications.

Here are a few of the most common parametric statistical tests.

  • One population, no grouping variable, population mean vs a constant: z-test, t-test
  • One population, two levels of a grouping variable (level 1 mean vs level 2 mean): z-test, t-test
  • One population, two or more levels of a grouping variable: ANOVA F-test
  • Two populations, no grouping variable: z-test, t-test
  • Two populations, any levels of a grouping variable: ANOVA F-test
  • Three or more populations, any levels: ANOVA F-test


z-Tests and t-Tests

The z-test and the t-test have similar forms relating the difference between a population mean and a constant (one-population test) or two population means (two-population test) to some measure of the uncertainty in the population(s). The difference in the tests is that a z-test is for Normally distributed populations where the variance is known and t-tests are for populations where the variance is unknown and must be estimated from the sample. t-Tests depend on the number of observations made on the sample of the population. The greater the sample size, the closer the t-test is to the z-test. Adjustments of two-population t-tests are made when the sample sizes or variances are different in the two populations. These tests can also be used to compare paired (e.g., before vs after) data.
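A minimal two-population t statistic in Welch's form, which handles the unequal-variance adjustment the paragraph mentions (my sketch; the district heights are invented for illustration):

```python
import math
from statistics import mean, variance

def welch_t(sample1, sample2):
    # Difference of means over the combined standard error,
    # with each variance estimated from its own sample
    n1, n2 = len(sample1), len(sample2)
    se = math.sqrt(variance(sample1) / n1 + variance(sample2) / n2)
    return (mean(sample1) - mean(sample2)) / se

district1 = [168, 171, 165, 174, 170, 169, 172, 167]
district2 = [162, 166, 160, 165, 163, 161, 164, 167]
t = welch_t(district1, district2)   # roughly 4.5 for these numbers
```

You would then compare t to a critical value (or get a p-value) for the appropriate degrees of freedom; the larger the samples, the closer that reference distribution is to the Normal used by the z-test.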


ANOVA F-Tests

Unlike t-tests, which are calculated from means and standard deviations, F-tests are calculated from variances. The formula for the one-way ANOVA F-test is:

  • F = explained variance / unexplained variance, or
  • F = between-group variability / within-group variability, or
  • F = Mean square for treatments / Mean square for error

These are all equivalent. Also, as it turns out, for a two-group comparison, F = t².
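That equivalence can be demonstrated with a small one-way ANOVA sketch (my code; the group data are hypothetical and match nothing in the post):

```python
from statistics import mean

def one_way_f(groups):
    values = [x for g in groups for x in g]
    grand_mean = mean(values)
    k, n = len(groups), len(values)
    # Between-group variability: group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group variability: values around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

group1 = [168, 171, 165, 174, 170, 169, 172, 167]
group2 = [162, 166, 160, 165, 163, 161, 164, 167]
F = one_way_f([group1, group2])
```

For these two groups F is about 20.16, which is the square of the pooled two-sample t statistic (about 4.49) for the same data.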

χ² Tests

The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in mutually exclusive categories of a contingency table. The test statistic is the sum, over all cells, of the squared difference between the observed and expected frequencies divided by the expected frequency: χ² = Σ (O − E)² / E.
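The statistic is just that sum over cells; a tiny sketch (the cell counts are invented):

```python
def chi_squared_stat(observed, expected):
    # Sum over cells of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A 2x2 contingency table flattened into four cells
observed = [30, 20, 10, 40]
expected = [24, 26, 16, 34]
stat = chi_squared_stat(observed, expected)
```

Here every cell misses its expected count by 6, giving a statistic of about 6.19, which would then be compared to a χ² critical value for the table's degrees of freedom.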

Nonparametric Tests

Nonparametric tests are also called distribution-free tests because they don’t rely on any assumptions concerning the frequency distribution of the test measurements. Instead, the tests use ranks or other imposed orderings of the data to identify differences. Here are a few of the most common nonparametric statistical tests.

  • Categorical dependent variable, percentage of the target population: binomial test
  • Categorical dependent variable, 2 matched groups: McNemar’s test for 2×2 contingency tables
  • Categorical dependent variable, 2 or more independent groups: chi-square test for contingency tables
  • Categorical dependent variable, 2 independent groups: Fisher’s exact test
  • Continuous dependent variable, 2 matched groups: Wilcoxon signed-rank test
  • Continuous dependent variable, 2 independent groups: Mann-Whitney U test (also called the Wilcoxon rank-sum or Wilcoxon-Mann-Whitney test)
  • Continuous dependent variable, 2 or more matched groups: Friedman test
  • Continuous dependent variable, 2 or more independent groups: Kruskal-Wallis H test


Assumptions

You make a few assumptions in conducting statistical tests. First, you assume your population is real (i.e., not a phantom population) and that your samples of the population are representative of all the possible measurements. Then, if you plan to do a parametric test, you assume (and hope) that the measurements of the phenomenon are Normally distributed and that the variances are the same in all the populations being compared. The more closely these assumptions are met, the more valid the comparisons are. The reason for this is that you are using Normal distributions, defined by means and variances, to represent the phenomenon in the populations. If the true distributions of the phenomenon in the populations do not exactly follow the Normal distribution, the comparison will be somewhat in error. Of course, the Normal distribution is a theoretical mathematical distribution, so there is always going to be some deviation between it and real-world data. The same goes for variances in multi-population comparisons. Thus, the question is always how much deviation from the assumptions is tolerable before the test becomes misleading.

Data that do not satisfy the assumptions can often be transformed to satisfy them. Adding a constant to data or multiplying data by a constant does not affect statistical tests, so transformations have to be more involved, like roots, powers, reciprocals, and logs. Box-Cox transformations are especially useful but are laborious to calculate without supporting software. Ultimately, you can fall back on ranks and nonparametric tests, which make no assumption about the Normal distribution.
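For reference, the Box-Cox family itself is simple to compute once you pick the parameter λ; the laborious part is searching for the λ that best normalizes the data. A sketch with made-up skewed values:

```python
import math

def box_cox(x, lam):
    # (x^lambda - 1) / lambda, with the log transform as the lambda = 0 limit
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

skewed = [1.2, 3.5, 8.0, 22.0, 60.0]
logged = [box_cox(x, 0) for x in skewed]      # lambda = 0: log transform
rooted = [box_cox(x, 0.5) for x in skewed]    # lambda = 0.5: square-root-like
```

Roots, logs, and reciprocals are all special cases of this one family, which is why software that searches over λ can automate the choice of transformation.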

Next, we’ll see how it all comes together …



Dare to Compare – Part 1

In school, you probably had to line up by height now and then. That wasn’t too difficult. There weren’t too many individuals being lined up and they were all in the same place at the same time. An individual’s place in line was decided by comparing his or her height to the heights of other individuals. The comparisons were visual; no measurements were made. Everyone made the same decisions about the height comparisons. You didn’t need statistics to solve the problem. So why might you ever need statistics to compare heights?


Statistics are primarily concerned with groups of individuals or items, especially those having some fundamental commonalities. These groups are called populations. Populations are more difficult to compare than pairs of individuals because you have to define the population and measure the characteristics of the phenomenon that you want to compare. Statistical comparisons, or statistical tests as they are usually called, can involve one population (comparing a population phenomenon to a constant), two populations (comparing a population phenomenon to the phenomenon in another population), or three or more populations. You can also compare just one phenomenon (called univariate tests) or two or more phenomena (called multivariate tests). You can test if the phenomena are equal (called a nondirectional or two-sided test) or less than/greater than (called a directional or one-sided test).

For example, you might want to compare the heights of male high school freshmen in two different school districts. There would be a two-population test – male high school freshmen in school district 1 and male high school freshmen in school district 2. The phenomenon you want to compare is the height of the two populations. But, it’s not as easy as just visually comparing the heights of pairs of individuals because they are not located in the same place. You have to measure at least some of the heights of the individuals in the two populations.


Fortunately, you don’t have to measure every individual in the population so long as you measure a representative sample of the individuals in the populations. You can improve your chances of getting a representative sample by using the three Rs of variance control — Reference, Replication, and Randomization.

How many samples should you have? No, the answer isn’t as many as possible. Some people think the answer is 30 samples, but that’s a myth based on a misunderstood tradition. Like potato chips and middle managers, too many can be as bad as not enough. It’s a matter of resolution.


If you were comparing two individuals, you would only be concerned with whether one height is greater than, equal to, or less than the other height. When you’re comparing populations, there’s not just one height but many, and you only know what some of the heights are (hopefully a representative sample of them). That’s where distributions come in.

In statistical testing, a frequency distribution refers to the number of times each value of the measured phenomena appears in the population. A bar chart of these values, with the values on the horizontal axis and their frequencies on the vertical axis, is called a histogram. Histograms often look like a bell, which is why they are called bell curves.

Measured phenomena that have a histogram that looks like a bell curve have many values located at the middle of the distribution and fewer values farther away from the center, called the tails. The center of the distribution of values is estimated by the average. The variability of the values, how far they stretch along the horizontal axis, is estimated by the variance or the standard deviation, the square root of the variance.
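These three quantities (the frequency distribution, the center, and the spread) are easy to compute; a sketch with invented heights:

```python
from collections import Counter
from statistics import mean, stdev

heights = [165, 168, 170, 170, 171, 171, 171, 172, 172, 174, 177]
frequencies = Counter(heights)    # value -> how many times it appears
center = mean(heights)            # the average
spread = stdev(heights)           # square root of the variance
```

Plot `frequencies` as a bar chart and you have the histogram; for these values the center is 171 and the standard deviation is about 3.1, with most values near the middle and fewer out in the tails.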

A bell curve is usually assumed to represent a Normal distribution. The average and the variance of the values are called parameters of the distribution because they are in the mathematical formula that defines the form of the distribution.

Having a mathematical equation that you can use as a model of the frequency of phenomenon values in the population is advantageous because you can use the distribution model to represent the characteristics of the population.

Statistical Comparisons

Once you have data on the phenomenon from the representative sample of the population, you calculate descriptive statistics for the population. Statistical comparisons consider both the accuracy (i.e., the difference between the measured heights and the true heights in the population of individuals) and the precision (i.e., how consistent or variable the heights are) of the measurements of the population. Formulas for statistical tests usually involve some measure of accuracy divided by some measure of precision.

Statistical tests that compare the distributions of population characteristics are called parametric tests. They are usually based on the Normal distribution and involve using averages as measures of the center of the population distribution and standard deviations as measures of the variability of the distribution. (This is not always the case but is true most of the time.) The average and standard deviation are called test parameters. You can still test whether population means are equal (called non-directional or two-sided tests because differences in both tails of the Normal distribution are considered) or less than/greater than (called directional or one-sided tests because the difference in only one tail of the Normal distribution is considered).

Statistical tests that don’t rely on the distributions of the phenomenon in the populations are called nonparametric tests. Nonparametric tests usually involve converting the data to ranks and analyzing the ranks using the median and the range.

And, there is still a lot more to know about statistical comparisons … more to come



Catalog of Models

Whether you know it or not, you deal with models every day. Your weather forecast comes from a meteorological model, usually several. Mannequins are used to display how fashions may look on you. Blueprints are drawn models of objects or structures to be built. Maps are models of the earth’s terrain. Examples are everywhere.

Models are representations of things, usually an ideal, a standard, or something desired. They can be true representations, approximate (or at least as good as practicable), or simplified, even cartoonish compared to what they represent. They can be about the same size, bigger, or most typically, smaller, whatever makes them easiest to manipulate. They can represent:

  • Physical objects that can be seen and touched
  • Processes that can be watched
  • Behaviors that can be observed
  • Conditions that can be monitored
  • Opinions that can be surveyed.

The models themselves do not have to be physical objects. They can be written, drawn, or consist of mathematical equations or computer programming. In fact, using equations and computer code can be much more flexible and less expensive than building a physical model.

[Figure: Catalog of Models, Stats with Cats, 10-23-2017]

Classification of Models

There are many ways that models are classified, so this catalog isn’t unique. The models may be described with different terms or broken out to greater levels of detail. Furthermore, you can also create hybrid models. Examples include mash-ups of analytical and stochastic components used to analyze phenomena such as climate change and subatomic particle physics. Nevertheless, the catalog should give you some ideas for where you might start to develop your own model.

Physical Models

Your first exposure to a model was probably a physical model like a baby pacifier or a plush animal, and later, a doll or a toy car. From then, you’ve seen many more – from ant farms to anatomical models in school. You probably even built your own models with Legos, plastic model kits, or even a Halloween costume. They are all representations of something else.


Physical models aren’t used often for advanced applications because they are difficult and expensive to build and calibrate to a realistic experience. Flight simulators, hydrographic models of river systems, and reef aquariums are well known examples.

Conceptual Models

Models can also be expressed in words and pictures. These are used in virtually all fields to convey mental images of some mechanism, process, or other phenomenon that was or will be created. Blueprints, flow diagrams, geologic fence diagrams, anatomical diagrams are all conceptual models. So are the textual descriptions that go with them. In fact, you should always start with a simple text model before you embark on building a complex physical or mathematical model.

Mathematical and Computer Models

Theoretical Models

Theoretical models are based on scientific laws and mathematical derivations. Both theoretical models and deterministic empirical models provide solutions that presume that there is no uncertainty. These solutions are termed exact (which does not necessarily imply correct). There is a single solution for given inputs.

Analytical Models

Analytical models are mathematical equations derived from scientific laws that produce exact solutions that apply everywhere. For example, F = ma (force equals mass times acceleration) and E = mc² (energy equals mass times the speed of light squared) are analytical models. Most concepts in classical physics can probably be modeled analytically.
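Because analytical models are closed-form equations, they translate directly into one-line functions; for given inputs there is a single exact output (the units in the comments are my additions):

```python
def force(mass_kg, acceleration_m_s2):
    # F = m * a, in newtons
    return mass_kg * acceleration_m_s2

def rest_energy(mass_kg, c_m_s=299_792_458.0):
    # E = m * c^2, in joules
    return mass_kg * c_m_s ** 2

f = force(2.0, 9.81)        # a 2 kg mass under standard gravity
e = rest_energy(0.001)      # one gram of mass
```

Note there is no uncertainty anywhere: rerun the function with the same inputs and you always get the same, exact answer, which is what separates these models from the empirical ones below.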

Numerical Models

Numerical models are mathematical equations that have a time parameter. Numerical models are solved repeatedly, usually on a grid, to obtain solutions over time. This is sometimes called a Dynamic Model (as opposed to a Static Model) because it describes time-varying relationships.

Empirical Models

Empirical models can be deterministic, probabilistic, stochastic, or sometimes, a hybrid of the three. They are developed for specific situations from measured data. Empirical models differ from theoretical models in that the model is not necessarily fixed for all instances of its use. There may be multiple reasonable empirical models that can apply to a given situation.

Deterministic Models

Deterministic empirical models presume that a mathematical relationship exists between two or more measurable phenomena (as do theoretical models) that will allow the phenomena to be modeled without uncertainty (or at least, not much uncertainty, so that it can be ignored) under a given set of conditions. The difference is that the relationship isn’t unique or proven. There are usually assumptions. Biological growth and groundwater flow models are examples of deterministic empirical models.

Probability Models

Probability models are based on a set of events or conditions all occurring at once. In probability, this is called an intersection of events. Probability models are multiplicative because that is how intersection probabilities are combined. The most famous example of a probability model is the Drake equation, a summary of the factors affecting the likelihood that we might detect radio communication from intelligent extraterrestrial life.
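A probability model in this multiplicative sense is just a product of factor probabilities. The sketch below is a toy, Drake-style model; the factor names and values are invented and carry no scientific weight:

```python
import math

# Each factor is the probability of one condition; all must hold at once,
# so the model multiplies them (the intersection of the events)
factors = {
    "signal_reaches_us": 0.10,
    "we_are_listening": 0.50,
    "right_frequency": 0.20,
}
p_all_at_once = math.prod(factors.values())
```

Multiplication makes the weakest link dominate: 0.10 × 0.50 × 0.20 leaves only a 1% chance, so improving the smallest factor moves the result the most.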

Stochastic Models

Stochastic empirical models presume that changes in a phenomenon have a random component. The random component allows stochastic empirical models to provide solutions that incorporate uncertainty into the analysis. Stochastic models include lottery picks, weather, and many problems in the behavioral, economic, and business disciplines that are analyzed with statistical models.

Comparison Models

In statistical comparison models, the dependent variable is a grouping-scale variable (one measured on a nominal scale). The independent variable can be either grouping, continuous, or both. Simple hypothesis tests include:

  • χ² tests that analyze cell frequencies on one or more grouping variables, and
  • t-tests and z-tests that analyze independent variable means in two or fewer groups of a grouping variable.

Analysis of Variance (ANOVA) models compare independent variable means for two or more groups of a dependent grouping variable. Analysis of Covariance (ANCOVA) models compare independent variable means for two or more groups of a dependent grouping variable while controlling for one or more continuous variables. Multivariate ANOVA and ANCOVA compare two or more dependent variables using multiple independent variables. There are many more types of ANOVA model designs.

Classification Models

Classification and identification models also analyze groups.

Clustering models identify groups of similar cases based on continuous-scale variables. There need be no prior knowledge or expectation about the nature of the groups. There are several types of cluster analysis, including hierarchical clustering, K-Means clustering, two-step clustering, and block clustering. Often, the clusters or segments are used as inputs to subsequent analyses. Clustering models are also known as segmentation models.
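To make the K-Means idea concrete, here is a minimal one-variable sketch (my illustration; real cluster analyses use library implementations and multiple variables):

```python
import random
from statistics import mean

def kmeans_1d(values, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iterations):
        # Assign each value to the nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its assigned cluster
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious segments, around 5 and around 50
values = [4, 5, 6, 5, 48, 50, 52, 49]
centers, clusters = kmeans_1d(values, k=2)
```

The centers converge to 5.0 and 49.75, and each case's cluster label could then feed a later analysis, as the paragraph above describes.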

Clustering models do not have a nominal-scale dependent variable, but most classification models do. Discriminant analysis models have a nominal-scale dependent variable and one or more continuous-scale independent variables. They are usually used to explain why the groups are different, based on the independent variables, so they often follow a cluster analysis. Logistic regression is analogous to linear regression but is based on a non-linear model and a binary or ordinal dependent variable instead of a continuous-scale variable. Often, models for calculating probabilities use a binary (0 or 1) dependent variable with logistic regression.

There are many analyses that produce decision trees, which look a bit like organization charts. C&R (Classification and Regression Trees) splits a categorical dependent variable into its groups based on continuous- or categorical-scale independent variables. All splits are binary. CHAID (Chi-square Automatic Interaction Detector) generates decision trees that can have more than two branches at a split. A Random Forest consists of a collection of simple tree predictors.

Explanation Models

Explanation models aim to explain associations within or between sets of variables. With explanation models, you select enough variables to address all the theoretical aspects of the phenomenon, even to the point of having some redundancy. As you build the model, you discover which variables are extraneous and can be eliminated.

Factor Analysis (FA) and Principal Components Analysis (PCA) are used to explore associations in a set of variables where there is no distinction between dependent and independent variables. Both types of statistical analysis:

  • Create new metrics, called factors or components, which explain almost the same amount of variation as the original variables.
  • Create fewer factors/components than the original variables so further analysis is simplified.
  • Require that the new factors/components be interpreted in terms of the original variables, but they often make more conceptual sense so subsequent analyses are more intuitive.
  • Produce factors/components that are statistically independent (uncorrelated) so they can be used in regression models to determine how important each is in explaining a dependent variable.

Canonical Correlation Analysis (CCA) is like PCA only there are two sets of variables. Pairs of components, one from each group, are created that explain independent aspects of the dataset.

Regression analysis is also used to build explanation models. In particular, regression using principal components as independent variables is popular because the components are uncorrelated and not subject to multicollinearity.

Prediction Models

Some models are created to predict new values of a dependent variable or forecast future values of a time-dependent variable. To be useful, a prediction model must use predictor variables that cost less to generate than the prediction is worth, so the predictor variables and their scales must be relatively inexpensive and easy to create or obtain. In prediction models, accuracy tends to come easily while precision is elusive. Prediction models usually keep only the variables that work best in making a prediction, and they may not necessarily make a lot of conceptual sense.

Regression is the most commonly used technique for creating prediction models. Transformations are used frequently. If a model includes one or more lagged values of the dependent variable among its predictors, it is called an autoregressive model.
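A sketch of the autoregressive idea with one lag (an AR(1) model), fit by ordinary least squares; the sales series is fabricated for illustration:

```python
def ar1_forecast(series, steps=3):
    # Fit y[t] = a + b * y[t-1] by least squares on the lagged pairs
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    forecasts, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last       # each forecast feeds the next one
        forecasts.append(last)
    return forecasts

sales = [100, 104, 109, 112, 116, 119]
future = ar1_forecast(sales)
```

Each forecast uses the previous (possibly forecasted) value as its predictor, which is exactly what makes the model autoregressive: the lagged dependent variable is the independent variable.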

Neural networks are a predictive modeling technique inspired by the way biological nervous systems process information. The technique involves interconnected nodes, or layers, that apply predictor variables in different ways, linear and nonlinear, to all or some of the dependent variable values. Unlike most modeling techniques, a neural network's internal logic can't easily be articulated, so neural networks are not useful for explanation purposes.

Picking the Right Model

There are many ways to model a phenomenon. Experience helps you to judge which model might be most appropriate for the situation. If you need some guidance, follow these steps.

  • Step 1 – Start at the top of the Catalog of Models figure. Decide whether you want to create a physical, mathematical, or conceptual model. Whichever you choose, start by creating a brief conceptual model so you have a mental picture of what your ultimate goal is and can plan how to get there.

If your goal is a physical or full-blown conceptual model, do the research you'll need to identify appropriate materials and formats. But this blog is about mathematical models, so let's start there.

  • Step 2 – If you want to select a type of mathematical model, start on the second line of the Catalog of Models figure and decide whether your phenomenon fits best with a theoretical or an empirical approach.

If there are scientific or mathematical laws that apply to your phenomenon, you’ll probably want to start with some type of theoretical model. If there is a component of time, particularly changes over time periods, you’ll probably want to try developing a numerical model. Otherwise, if a single solution is appropriate, try an analytical model.

  • Step 3 – If your phenomenon is more likely to require data collection and analysis to model, you'll need an empirical model. An empirical model can be probabilistic, deterministic, or stochastic. Probability models are great tools for thought experiments; there are no wrong answers, only incomplete ones. Deterministic models are more of a challenge: there needs to be some foundation of science (natural, physical, environmental, behavioral, or other discipline), engineering, business rules, or other guidelines for what should go into the model. More often than not, deterministic models are overly complicated because there is no way to distinguish between components that are major factors and those that are relatively inconsequential to the overall results. Both probability and deterministic models are often developed through panels of experts using some form of Delphi process.
  • Step 4 – If you need to develop a stochastic (statistical) model, go here to pick the right tool for the job.
  • Step 5 – Consider adding hybrid elements. Don’t feel constrained to only one type of component in building your model. For instance, maybe your statistical model would benefit from having deterministic, probability, or other types of terms in it. Calibrate your deterministic model using regression or another statistical method. Be creative.



Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at,, or other online booksellers.

Posted in Uncategorized | Tagged , , , , , , , , , , , , , , | 2 Comments

How to Describe Numbers

Say you wanted to describe someone you see on the street. You might characterize their sex, age, height, weight, build, complexion, face shape, hair, mouth and lips, eyes, nose, tattoos, scars, moles, and birthmarks. Then there's clothing, behavior, and if you're close enough, speech, odors, and personality. Your description might be different if you're talking to a friend or a stranger, of the same or different sex and age. Those are a lot of characteristics and they're sometimes hard to assess. Individual characteristics aren't always relevant and can change over time. And yet, without even thinking about it, we describe people we see every day using these characteristics. We do it mentally to remember someone or overtly to describe a person to someone else. It becomes second nature because we do it all the time.

Most people don’t describe sets of numbers very often, though, so they don’t know how easy it actually is. You have to consider only a few characteristics, all of which are fairly easy to assess and will never change for the dataset. Once you learn how, it’s hardly a challenge to get it right, unlike describing the hot young guy who just robbed a bank wearing a clown costume.

What’s involved in describing a dataset? First, before considering any descriptive statistics, you have to assess two qualities.

  • Phenomenon and population or sample
  • Measurement scale

From this information, you’ll be able to determine what descriptive statistics to calculate.

Phenomenon and Population or Sample

This is a thinking exercise; there are no calculations.

First, determine what the numbers represent. What is the phenomenon they are related to? If there’s no context for the numbers, like it’s just a dataset for a homework problem, that’s fine too. But if you know something about the data, you might be able to judge whether your answer makes sense later when the calculations are done.


Next, think about the population from which the data were obtained. How is the population defined? Do you have all the possible measurements or entities? If not, you have a sample of the population, hopefully a sample that is a good representation of the population. This knowledge will help you judge whether your answer makes sense and will be consistent with other samples taken from the same population. Again, if there’s no context for the numbers, that’s fine. Now, all you have to decide is whether you want to describe the population or just the sample of the population for which you have measurements. If you’re not sure, assume you want to describe the population. All the fun stuff in statistics involves populations.

Measurement Scale


Scales of measurement express the phenomenon represented by the population. Simply put, scales are the ways that a set of numbers are related to each other. For example, the increments between scale values may all be identical, such as with heights and weights, or vary in size, such as with earthquake magnitudes and hurricane categories. The actual values of scales are called levels.

You have to understand the scale of measurement to describe data. There are a variety of types of measurement scales, but for describing a dataset you only need to pick from three categories:

  • Grouping Scales – Scales that define collections having no mathematical relationship to each other. The groups can represent categories, names, and other sets of associated attributes. These scales are also called nominal scales. They are described by counts and statistics based on counts, like percentages.
  • Ordered Scales – Scales that define measurement levels having some mathematical progression or order, commonly called ordinal scales. Data measured on an ordinal scale are represented by integers, usually positive. Counts and statistics based on medians and percentiles can be calculated for ordinal scales.
  • Continuous Scales – Scales that define a mathematical progression involving fractional levels, represented by numbers having decimal points after the integer. These scales may be called interval scales or ratio scales depending on their other properties. Any statistic can be calculated for data measured on continuous scales.

There are other scales of measurement but that’s all you’ll need at this point.

Descriptive Statistics

Now you can get on to describing a set of numbers. You’ll only need to consider four attributes – frequency, central tendency, dispersion, and shape.

 Frequency refers to the number of times the level of a scale appears in a set of numbers. It is used mostly for nominal (grouping) scales and sometimes with ordinal scales. The level with the highest frequency is called the mode. Frequency is used most effectively to show how scale levels compare to each other, such as with percentages or in a histogram.


Central Tendency refers to where the middle of a set of numbers is. It is used mostly for continuous (interval or ratio) scales and often with ordinal scales. Many statistics can be used to describe where the center of a dataset is, the most popular of which are the median and the mean.

The median is the exact center of a progression-scale dataset: there are equal numbers of data values less than and greater than it. You determine the median by sorting the values in the dataset and counting in from the extremes until you reach the center (with an even number of values, the median is the average of the two middle values). The mean, or average, is the center of a progression-scale dataset determined by calculation; there may not be an equal number of data values less than and greater than the mean. You determine the mean by adding all the values in the dataset and dividing that sum by the number of values. The mean or the median is used in most statistical testing to find differences in data populations.
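Both calculations are built into Python's standard library; a quick sketch with made-up data:

```python
# The median and mean described above, computed with the stdlib.
from statistics import mean, median

data = [3, 1, 4, 1, 5, 9, 2]
median(data)  # sorted: 1 1 2 3 4 5 9 -> the middle value is 3
mean(data)    # (3 + 1 + 4 + 1 + 5 + 9 + 2) / 7 = 25 / 7, about 3.57
```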

 Dispersion refers to how spread out the data values are. It is used for continuous (interval or ratio) scales but only rarely with ordinal scales. There are many ways to describe data dispersion but the most popular is the standard deviation. You calculate the standard deviation by:

  1. Subtracting the mean of a dataset from each value in the dataset
  2. Squaring each subtracted value
  3. Adding all the squared values
  4. Dividing the sum of the squared values by the number of values in the dataset (if you're describing a population) or by the number of values minus 1 (if you're describing a sample)
  5. Taking the square root of the result.

The standard deviation is used in statistical testing to find differences in data populations.
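The calculation above can be sketched in a few lines of Python (stdlib only). Note that the divisor is the number of values for a population and the number of values minus 1 for a sample, and that the square root of the quotient gives the standard deviation.

```python
# The standard deviation calculation described above, stdlib only.
import math

def std_dev(values, sample=True):
    m = sum(values) / len(values)                 # mean of the dataset
    squared = [(v - m) ** 2 for v in values]      # squared deviations
    divisor = len(values) - 1 if sample else len(values)
    return math.sqrt(sum(squared) / divisor)      # root of the quotient

data = [2, 4, 4, 4, 5, 5, 7, 9]
std_dev(data, sample=False)  # population standard deviation: 2.0
```

The same result is available from `statistics.pstdev` (population) and `statistics.stdev` (sample) in the standard library.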

Shape refers to the frequency of the values in a dataset at selected levels of the scale, most often depicted as a graph. For ordinal scales, the graph is usually a histogram. For continuous scales, the graph is usually a probability plot, although sometimes histograms are used. Shapes of continuous-scale data can be compared to mathematical models (equations) of frequency distributions. It's like comparing a person to some well-known celebrity; they're not identical but are similar enough to provide a good comparison. There are dozens of such distribution models, but the most commonly used is the normal distribution. The normal distribution model has two parameters – the mean and the standard deviation.

There are many other statistics that can be used to describe datasets, but most of the time, this is all you need:


For example, a nominal-scale dataset would be described by providing counts or percentages of observations in each group. An ordinal-scale dataset would be described by providing counts or percentages for each level, the median and percentiles, and ideally, a histogram. A continuous-scale dataset would be described by providing the closest distribution model and estimates of its parameters, such as “normally distributed with a mean of 10 and a standard deviation of 2.” Continuous-scale datasets can be described so succinctly because the distribution-shape specification contains so much of the telling information.

Now isn’t that a lot easier than describing that hot bank robber wearing a clown costume?


Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at,, or other online booksellers.


Posted in Uncategorized | 3 Comments

Visualizations versus Infographics

Visualizations and infographics are both visual representations of data that are often confused. In fact, there is not a clear line of demarcation between the two. Both are informative. Both can be static or animated. Both require a knowledgeable person to create them.


Visualizations Explore

Data visualizations are created to make sense of data visually and to explore data interactively. Visualization is mostly automatic, generated through the use of data analysis software, to create graphs, plots, and charts. The visualizations can use the default settings of the software or involve Data Artistry and labeling (i.e., these Enhanced Visualizations fall in the intersection of the two circles in the figure). The processes used to create visualizations can be applied efficiently to almost any dataset. Visualizations tend to be more objective than infographics and better for allowing audiences to draw their own conclusions, although the audience needs to have some skills in data analysis. Data visualizations do not contain infographics.

Infographics Explain

Infographics are artistic displays intended to make a point using information. They are specific, elaborate, explanatory, and self-contained. Every infographic is unique and must be designed from scratch for visual appeal and overall reader comprehension. There is no software for automatically producing infographics the way there is for visualizations. Infographics are combinations of illustrations, images, text, and even visualizations designed for general audiences. Infographics are better than visualizations for guiding the conclusions of an audience but can be more subjective than visualizations.

  • Objective – Visualization: Analyze. Infographic: Communicate.
  • Audience – Visualization: Some data analysis skills. Infographic: General audience.
  • Components – Visualization: Points, lines, bars, and other data representations. Infographic: Graphic design elements, text, visualizations.
  • Source of Information – Visualization: Raw data. Infographic: Analyzed data and findings.
  • Creation Tool – Visualization: Data analysis software. Infographic: Desktop publishing software.
  • Replication – Visualization: Easily reproducible with new data. Infographic: Unique.
  • Interactive or Static – Visualization: Either. Infographic: Static.
  • Aesthetic Treatment – Visualization: Not necessary. Infographic: Essential.
  • Interpretation – Visualization: Left to the audience. Infographic: Provided to the audience.



Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at, or other online booksellers.


Posted in Uncategorized | Tagged , , , , , , , , | 1 Comment

How to Analyze Text

Statisticians love to analyze numbers, but what do they do when what they want to explore is unformatted text? It happens all the time. The text may come from open-ended responses on surveys, social networking sites, email, online reviews, public comments, notations (e.g., medical, customer relations), documents and text files, or even recorded and transcribed interactions. But before anything can happen, you have to accomplish three tasks:

  • Get the text into a spreadsheet or other software that you can use to manipulate it.
  • Break the text into analyzable fragments – letters, words, phrases, sentences, paragraphs, or whatever.
  • Assign properties to the text fragments.

How you might complete these tasks depends on what you want to do and the software you have. Nonetheless, you’ll be surprised by how much you can do with just a spreadsheet and an internet connection if you have the time and focus. This article will show you how.


There are several ways that you can analyze text. You can:

  • Count the occurrence of specific letters, words, or phrases, often summarized as Word Clouds. There are quite a few free web sites that will help you construct word clouds.
  • Categorize text by key themes, topics, or commonalities, called Text Mining.
  • Classify attitudes, emotions, and opinions of a source toward some topic, called Sentiment Analysis or opinion mining. There are many applications of sentiment analysis in business, marketing, customer management, political science, law, sociology, psychology, and communications.
  • Explore relationships between words using a Word Net. The relationships can reflect definitions or other commonalities.

Some of these analyses can be performed using free web apps; others require special software.

Specialized Software

Some text analytics can be performed manually, but it is a time-consuming process, so having software can be crucial. Unfortunately, the biggest and best software, like SAS and SPSS, is proprietary and costs a lot. There are also free and low-cost alternatives, as well as free web sites that perform less sophisticated analyses. There are a lot of software options, so there are probably a lot of people analyzing text. Let Google be your guide.

Manual Analyses

Even if you don’t have access to specialized software for text analyses, you can still perform two types of analyses with nothing more than a spreadsheet program and an internet connection. You can count the number of times that a letter, word, or phrase appears in a text passage. Word frequencies turn out to be relatively easy to produce, but once you have the counts, the analysis and interpretation may be a bit more challenging. You can also do simple topic analyses or sentiment analyses. Parsing the sentences or sentence fragments and analyzing them is straightforward but time consuming, though the interpretation is usually easier.

Word Counts

If you are just looking for keywords or counting words for some diagnostic purpose, you’ll find that it’s not that difficult. Here’s how to do word counts.

Step 1 – Find the text you want to analyze.

This is usually easy except for there being so many choices. You have to start with an electronic file. If you have hard copy, you’ll have to scan it and correct the errors. If you have text from separate sources, you’ll want to aggregate them to make things easier. If you have text on a website, you can usually highlight it and copy it using <ctrl-C>. If the passage is long, you can use <ctrl-A> to select everything before copying it, but you’ll have to edit out the extraneous material. You can do these operations in most word processors.

Step 2 – Scrub the data.

You should scrub the text to be sure you’ll be counting the correct things. Take out entries that aren’t part of the flow of the text, like footnotes and section numbers. Correct misspellings. Take out punctuation that might become associated with words, like em dashes.

Step 3 – Count the words.

The quickest way to count words is to go to an Internet site for that purpose. Just copy your scrubbed text, paste it into the box on the site, and press submit. You’ll get a column of words and their frequencies. Parse the numbers from the text and you’re ready to analyze the data. It’s a good idea to review the results of the counting to be sure no errors have crept into the process.

Another way to do this solely in a spreadsheet is to replace all the punctuation with blanks and then replace the blanks with paragraph marks. This will give you a column of words. Copy it, remove the duplicates, and then use a formula to count each word.
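If you'd rather script the count than use a web site or spreadsheet, the same procedure (scrub the punctuation, split into words, tally) takes only a few lines of Python's standard library. The sample text is made up for illustration.

```python
# Word counting as described above: scrub punctuation, split, tally.
import string
from collections import Counter

text = "The cat sat. The cat napped."
cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
counts = Counter(cleaned.split())
counts.most_common(2)  # 'the' and 'cat' each appear twice
```

For real text you would paste the scrubbed passage into `text` (or read it from a file) and review the resulting counts for stray tokens, just as the article recommends for the web-site approach.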

Once you have the counts, the analysis is up to you. You can compare word statistics from different sources or analyze word frequencies within a single source. The possibilities are endless. Interpretation is another matter. Here are some examples.


One thing you can do with word counts is to produce a word cloud. There are many web sites that will generate these graphics. My favorite is Wordle, but be advised, you have to use Internet Explorer for it to work. Here’s an example of a word cloud produced with Wordle.


Text Mining

Topic or Sentiment Analyses are straightforward but more time consuming than word counts. Unless you are analyzing text for work or school, relax and turn on Netflix. This isn’t very sophisticated, but it’ll take a while and you’ll need frequent breaks to maintain your focus.

There are six steps.

Step 1 – Get the Data into a Spreadsheet

As with word counts, you have to get the text file into a text manager, preferably a spreadsheet. Highlight your text or use <ctrl A> and then <ctrl C> and <ctrl V>. You’ll need to parse any block text into sentences or whatever length fragment you want to analyze. You can usually do this by replacing periods with paragraph marks. Start with a small dataset, perhaps fewer than fifty fragments, until you get used to the process.

Step 2 – Scrub the Responses

Format the fragments into a single column with one fragment per row. Delete extraneous fragments. Don’t worry about misspellings and punctuation. If you make a mistake, <ctrl Z> will undo it.

Step 3 – Assign Descriptors

In a column next to the column with the fragments, enter your first descriptor. It can be a keyword, theme, sentiment, length, or whatever you want to analyze. Unless you have predetermined descriptors you are looking for, don’t worry too much about the descriptors you use. You’ll review and edit them in the next step.

cat-writingStep 4 – Count the Fragments Assigned to Each Descriptor

When you count the fragments assigned to each descriptor, you’ll probably find a few descriptors with only a few fragments. Consider combining them with other descriptors. When you’re satisfied with the assignments, you might want to subdivide the descriptor groups with another set of descriptors.

Step 5 – Repeat Steps 3 and 4

You can repeat the last two steps as many times as you feel is necessary. You can use these hierarchical descriptor groups to characterize subsets of the text so don’t have too many or too few fragments in each descriptor group. When you’re done, your data set would look something like this.


If you have a predetermined set of descriptors, you can assign each one to a column of the spreadsheet and code them as 0 or 1 for presence or absence.

Step 6 – Analyze

Once you have built your data set, you can analyze it statistically by counts and percentages, or graphically using word clouds. Consider this example. On December 29, 2016, Tanya Lynn Dee asked the question on her Facebook page, “Without revealing your actual age, what [is] something you remember that if you told a younger person they wouldn’t understand?” There were over 1,000 responses (at the time I saw the post), which I copied and classified into common themes. The results are here.
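The counts-and-percentages analysis described above is simple to compute once the descriptors are in a column. A stdlib sketch, with hypothetical descriptor labels standing in for whatever themes you assigned:

```python
# Tallying fragments per descriptor and converting to percentages.
# The descriptor labels here are hypothetical examples.
from collections import Counter

descriptors = ["price", "service", "price", "quality", "service", "price"]
counts = Counter(descriptors)
total = len(descriptors)
percentages = {k: 100 * v / total for k, v in counts.items()}
# e.g. "price" appears in 3 of 6 fragments -> 50.0 percent
```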

To learn more about analyzing text for its sentiment, read Sentiment Analysis: nearly everything you need to know by MonkeyLearn.

So, try analyzing some text (and other things) at home. You won’t need parental supervision.


Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data analysis at, or other online booksellers.

Posted in Uncategorized | Tagged , , , , , , , , , | 5 Comments