We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover up all the tracks, to not worry about the blind alleys or describe how you had the wrong idea first, and so on. So there isn’t any place to publish, in a dignified manner, what you actually did in order to get to do the work.
Richard Feynman, Nobel Lecture, 1966
No plan survives implementation. So whether you plan to conduct a data analysis, manage subordinates analyzing data, review results produced by other data analysts, or just have a data analysis conducted for you, there are a few situations you should be aware of. These are unexpected events, having little or nothing to do with statistics, that can place you in the middle of an awkward, if not mean-spirited, conflict.
As you might expect, most tales of the unprojected come down to the quirks of the project participants. Yes, everyone has horror stories of corrupted data, lost files, computer crashes, and the like, but it is people, and how they behave and communicate, who usually send statisticians screaming in disbelief, frustration, and rage.
Here are seven ways project participants can derail your data analysis project.
The Statistician’s Organization
You would think that communications within your own organization wouldn’t be a big issue. Well, people are people. I once did a project for a manager who said he had an urgent deadline. But first, he delayed a week in providing the data. Then he demanded a partial draft report well ahead of the scheduled review date. He tried to use the hurriedly prepared report to convince his superiors that poor quality work by the staff was making the client dissatisfied. As it turned out, his superiors had already figured out that it was his own incompetence and rude behavior that was upsetting the client. I was lucky; he wasn’t. He was fired shortly after the project was completed successfully. In business, the players don’t wear jerseys. You can’t always tell who’s on your side.
The Client and the Statistician
This is the relationship that you, as the statistician, have the greatest chance to manage. Usually the relationship is a good one, or else you wouldn’t have been selected to do the work. During the project, be sure you are clear on any differences between what the client wants, what the client asks for, and what the client needs. Be sure you are clear on how the client plans to use the results. You don’t want the results misrepresented in a way that will affect your reputation. There are many examples of clients repackaging results in ways you might not expect. I had one client use a report I prepared for a conference presentation. Although he knew nothing about statistics, the karaoke PowerPoint got him management approval to travel on the company’s tab. Fortunately for me, conference attendees tend to zone out when you put numbers on the screen, so it wasn’t a big deal. I had another client reuse a spreadsheet I created to conduct some statistical tests on data they supplied. The client’s project manager didn’t realize that I had manually entered some intervening results (viz., tests for normality and outliers) from another application. That apparently continued for a decade until a new project manager from the client’s office called me to ask how the spreadsheet worked. Sometimes what you don’t know can hurt you.
The Client’s Organization
No matter who your client contact is, he or she works for someone else who in turn works for someone else, and so on. Within their organization, then, there may be a variety of competing interests. Even your contact may not be aware of some of the office politics. Management may want a quick answer. Accounting may want documentation of your work before paying you. The legal department may want you to guarantee your results or have your report phrased in certain ways. The plant manager may resent the intrusion of the home office you work for. I once worked for a client who in turn worked for the ultimate, bill-paying client. The contract I had with my client specified that I had sixteen weeks from the time they supplied the final analysis-ready dataset to complete the analysis. However, my client had agreed to deliver the report to their client by a firm deadline. My client’s project manager held the kickoff meeting and then disappeared, leaving the project in the hands of an experienced subordinate. A few months later, the subordinate was reassigned to another project, leaving the project to the junior-level staffer who had been collecting the data. I finally got the data four days, not four months, before the firm deadline specified by my client’s client. Guess who got the blame. So beware: you may be the one who has to accommodate all the different interests in getting your work done.
The Client and the Stakeholders
You and your analysis may never be seen by anyone outside the client’s organization. Your client, on the other hand, may have to make a decision based on your work that is of great interest to shareholders, employees, customers, neighbors, local action groups, the media, and even the public. Consequently, you have to be sensitive to the client’s thinking about how your results will be perceived by the stakeholders. He or she may present your results in simplistic terms that may not be technically correct. I had a client with whom I was conducting an annual employee satisfaction survey. Previous surveys had used five-level scales (i.e., very satisfied, somewhat satisfied, neither satisfied nor dissatisfied, somewhat dissatisfied, and very dissatisfied), which indicated that about forty percent of the employees were satisfied, ten percent were dissatisfied, and about half were sitting on the fence. We wanted to know if the prevalence of neither-satisfied-nor-dissatisfied responses was attributable to apathy or paranoia (there was a no-opinion option, so that wasn’t a factor), so we switched to a four-level scale by eliminating the middle choice. The responses for the four-level scale indicated that about sixty percent of the employees were satisfied and forty percent were dissatisfied. My client’s boss presented the results to the company’s management and staff as a twenty-point improvement in employee satisfaction and nominated several people for company awards. Even the most innocent of actions can invoke the law of unintended consequences.
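The scale-change artifact in this anecdote is easy to see with a little arithmetic. A minimal sketch, using the approximate percentages from the story rather than any real survey data:

```python
# Approximate figures from the anecdote (percent of respondents).
# With the five-level scale, about half the employees sat on the fence:
five_level = {"satisfied": 40, "neutral": 50, "dissatisfied": 10}

# Dropping the middle choice forced the fence-sitters to pick a side:
four_level = {"satisfied": 60, "dissatisfied": 40}

# The "improvement" the boss reported is just the former neutrals who
# landed on the satisfied side -- no employee's opinion actually changed.
apparent_gain = four_level["satisfied"] - five_level["satisfied"]
neutrals_gone_negative = four_level["dissatisfied"] - five_level["dissatisfied"]

print(apparent_gain)            # 20 (the claimed twenty-point gain)
print(neutrals_gone_negative)   # 30 (neutrals who went the other way)
```

Comparing percentages across two different scales conflates a measurement change with a real change: the fifty neutral points simply split, twenty to the satisfied side and thirty to the dissatisfied side.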
The Client and the Reviewer
There may be reviewers for your work who are not part of the client’s organization. Some reviewers may be linked to the client, such as a legal firm hired by the client for advice. Other reviewers may be independent or even antagonistic to the client, such as regulatory or law enforcement agencies. Sometimes clients dig in their heels and refuse reviewer requests. This can cause delays that can wreak havoc with your schedule and staffing. Sometimes clients tell you to just give the reviewer whatever he or she wants. This can involve out-of-scope work that might impact your budget. The strangest client-reviewer dynamic I have ever seen involved a relationship that was alternately cooperative and adversarial. When the client was obliging, the reviewer, who represented a regulatory agency, was demanding. When the reviewer was acquiescent, the client was obstinate. I was told to stop work, then start again, then stop, then go. As it turned out, the regulatory agency was trying to extract a larger settlement from the client who, as a large multinational corporation, was perceived to have deep pockets. What the reviewer (and I) didn’t know was that the client was in the process of declaring bankruptcy and was stalling so any settlement with the agency wouldn’t complicate their filing. In the end, there was no settlement, the multinational corporation was liquidated, and the regulatory agency had to start over with the successors. They ended up settling for a small fraction of what the bankrupt client had first offered. You can’t win if everybody is playing by different rules.
The Statistician and the Reviewer
Don’t assume that the reviewer knows as much about statistics as you do. He or she may just have been the only person available to review your work. Even so, most of the time relationships between statisticians and reviewers are fairly straightforward. There may be differences of opinion over an approach or the number or sources of study samples, but usually this relationship is handled professionally by both sides. There are times, though, when inflated egos and hidden agendas cause conflict. One reviewer I worked with agreed to an analysis plan that called for a specific statistical procedure. After the data were collected and the analysis was completed, the reviewer refused to approve the report because “the analysis didn’t work out the way [he thought] it should.” After trying two other procedures with the same result, he relented. On another project that involved a statistical comparison to a control group, the reviewer was surprised that the difference was not significant, even though he had participated in the selection of the control group. He demanded and got a new analysis on new samples from a new control group. The results were the same and he backed down. Yet another reviewer refused to approve an analysis unless published references were provided for the analytical procedure. When the references were provided, the reviewer refused to approve the analysis unless additional statistical studies were done to support the analysis. When the statistical studies supported the analysis, even the reviewer’s support staff encouraged her to approve the analysis. She refused because she “didn’t understand it.” Sometimes, no matter how correct you are, no matter how patient you are, you can’t win.
The Reviewer’s Organization
You usually can’t do much to change interactions in the reviewer’s organization. I’ve had cases in which the reviewer was told to reject the report before it was even submitted. One reviewer I worked with, a university professor contracted with a regulatory agency, provided unusual comments on a statistical analysis. Each part of the review consisted of a few paragraphs of eloquent prose describing some statistical issue related to the analysis, followed by one paragraph containing an unintelligible tirade against the analysis, the statistician, and the client. On a hunch, I searched the Internet and found the textbook the professor was using to teach one of his graduate courses. The well-written comments provided by the reviewer were taken verbatim from the textbook. When I informed the agency that had contracted the reviewer about the plagiarism, they withdrew the comments but elected not to take any action against the professor. If you can walk away from an engagement with your sanity and a few dollars in your pocket, consider it a success.
Read more about using statistics at the Stats with Cats blog. Join other fans at the Stats with Cats Facebook group and the Stats with Cats Facebook page. Order Stats with Cats: The Domesticated Guide to Statistics, Models, Graphs, and Other Breeds of Data Analysis at Wheatmark, amazon.com, barnesandnoble.com, or other online booksellers.