Ray's Spreadsheet Publications
Ray Panko's Spreadsheet Research Website
Panko, Raymond R., “What Do We Know About Spreadsheet Errors?” Journal of End User Computing, 10(2), Spring 1998, special issue on large-scale end user development, 15-21.
This paper summarized what we knew about spreadsheet errors going into the 21st century. Although a good deal of additional evidence is available now, nearly everything in the paper still holds today. Rather sad.
Panko, R.R. and Port, D.A., “End User Computing: The Dark Matter (and Dark Energy) of Corporate IT,” Journal of Organizational and End User Computing, 25(3), 2013, 1-19.
Dan and I wrote this paper to summarize data on the nature and importance of spreadsheets in organizations today. It is a good source if you are looking for information on this topic.
Panko, Raymond R., “The Cognitive Science of Spreadsheet Errors: Why Thinking Is Bad,” Proceedings of the 46th Hawaii International Conference on System Sciences, Kihei, Hawaii, January 7-10, 2013.
The big change in spreadsheet error research in the last few years has been growing insights from cognitive science and neuroscience. We know that we make errors. This paper explains why we make them even when we are being careful. It also suggests why we deny it and why our intuitions about error rates, and about how to reduce them, are systematically biased.
Panko, Raymond R., “Applying Code Inspection to Spreadsheet Testing,” Journal of Management Information Systems, 16(2), Fall 1999, 159-176.
Spreadsheeting and programming have very similar development error rates. Commercial software developers spend 30% to 50% of their development effort on testing, and this does not include the cost of rework. This level of testing is missing in spreadsheet development. This paper discusses how developers can test spreadsheets. Spreadsheeting is not inherently risky; spreadsheet development without 30% or more of the effort going to testing is unprofessional.
Panko, Raymond R., “Spreadsheets and Sarbanes–Oxley: Regulations, Risks, and Control Frameworks,” Communications of the AIS, 17(9), May 2006.
Companies subject to compliance regimes are looking at their affected processes in depth. They are seeing many spreadsheets, and they are seeing that they cannot simply replace these spreadsheets. In Sarbanes–Oxley compliance, a number of companies have received negative attestations due to their spreadsheets, and the PCAOB has recently directed auditors to spend more effort on spreadsheets.
Panko, Raymond R. and Aurigemma, Salvatore, “Revising the Panko-Halverson Taxonomy of Spreadsheet Errors,” Decision Support Systems, Volume 49, Issue 2, May 2010, Pages 235-244.
The Panko–Halverson spreadsheet error taxonomy has been widely used. This paper presents an updated taxonomy that reflects human error research findings outside of spreadsheeting.
Panko, Raymond R., “Two Experiments in Reducing Overconfidence in Spreadsheet Development,” Journal of Organizational and End User Computing, 19(1), January–March 2007, 1-23.
Overconfidence has long been an impediment to improving spreadsheet development. This paper discusses overconfidence in general and overconfidence in spreadsheeting in particular. It also discusses the results of an experiment in which subjects received feedback about their errors. The feedback reduced overconfidence and errors, but only slightly. Giving people feedback on their accuracy has been effective in programming and other areas. However, the results of the experiment indicate that a great deal of feedback is likely to be necessary to reduce spreadsheet developer overconfidence.
In the 1980s, there was a kind of Grand Unification in human error research. Researchers studying human errors in many different fields found that human error rates were comparable for cognitive activities of comparable complexity. Spreadsheet error research is a tiny subset of human error research, and it is important to understand what human error research says about error and how to reduce it to acceptable levels.
Panko, Raymond R. and Halverson, R. H., Jr. “Spreadsheets on Trial: A Framework for Research on Spreadsheet Risks,” Proceedings of the Twenty-Ninth Hawaii International Conference on System Sciences, Volume II, Kihei, Maui, January, 1996, pp. 326-335.
This was our first major paper on spreadsheet errors. Among other things, this paper introduced our taxonomy of spreadsheet errors. We divided errors into quantitative and qualitative errors and gave breakdowns for each. For quantitative errors, we introduced the mechanical-logical-omission trichotomy based on human error research. We also discussed how spreadsheet error rates can be measured (this paper introduced the cell error rate) and discussed how error density varies over the systems development life cycle.
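To make the cell error rate concrete, here is a back-of-the-envelope sketch in Python. It is my illustration, not from the paper; the paper spells out exactly which cells belong in the denominator (for example, formula and value cells).

    # Cell error rate (CER): the fraction of cells that contain an error.
    # Illustrative only; the paper defines which cells count in the denominator.
    def cell_error_rate(cells_with_errors: int, total_cells: int) -> float:
        if total_cells <= 0:
            raise ValueError("total_cells must be positive")
        return cells_with_errors / total_cells

    # A model with 4 erroneous cells out of 200 audited cells has a 2% CER.
    print(f"{cell_error_rate(4, 200):.1%}")  # 2.0%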
Panko, Raymond R. & Richard Halverson, Jr., “Understanding Spreadsheet Risks,” Office Systems Research Journal, 14(2), Fall 1996, 1-11.
This paper reflects much of the information in the preceding Spreadsheets on Trial paper.
Panko, Raymond R., End User Computing: Management, Applications, and Technology, Wiley: New York, 1988.
This was the first textbook in end user computing. It was the top seller in an incredibly small textbook market. The section on modeling looked at different ways to visualize models, including the cellular approach in spreadsheets, the precedence arrow approach, and the formulas approach used in IFPS and other financial modeling programs. It also introduced two major prescriptions for spreadsheet design, namely not hardwiring numbers inside formulas and not entering the same number twice in different places. Not following these two rules is an example of a qualitative error, which can lead to a quantitative error later.
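The two design prescriptions carry over to any modeling environment. Here is a minimal sketch of the same idea in Python; the spreadsheet-style formulas in the comments and the tax-factor numbers are my own illustrations, not examples taken from the book.

    # What the book warns against: a number hardwired inside a formula.
    #   Spreadsheet:  =B2*1.0825
    def total_hardwired(price: float) -> float:
        return price * 1.0825  # buried constant; easy to miss when it changes

    # The prescription: enter the number once, in a clearly labeled input,
    # and reference it everywhere else.
    #   Assumptions cell:  TaxFactor = 1.0825
    #   Formula:           =B2*TaxFactor
    TAX_FACTOR = 1.0825  # single input; never retyped in another place

    def total_referenced(price: float) -> float:
        return price * TAX_FACTOR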
Aurigemma, S.A. and Panko, R.R., “Evaluating the Effectiveness of Static Analysis Programs versus Manual Inspection in the Detection of Natural Spreadsheet Errors,” Journal of Organizational and End User Computing, 26(1), 2014, 47-65.
Companies that sell spreadsheet auditing software believe that their static analysis programs (SAPs, to use the programming term) can drastically reduce testing time. In this study, students in a spreadsheet elective evaluated two SAPs on a set of spreadsheets developed by students in a previous study. In parallel, students each inspected one of the spreadsheets in the corpus to find errors. It appears that SAPs are like grammar checkers. They are worth using, but you need to see them as only a first step. Not a great first step, either.
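To make the grammar-checker analogy concrete, here is a toy static check of my own devising (it is not one of the SAPs evaluated in the study). It flags formulas that hardwire numeric constants, which is the kind of surface pattern these tools catch, while logic errors sail right past them.

    import re

    # Toy static check: flag formulas that embed numeric literals.
    # Illustrative only; not one of the static analysis programs in the study.
    def flag_hardwired_constants(formulas: dict[str, str]) -> list[str]:
        flagged = []
        for cell, formula in formulas.items():
            # Strip cell references such as B2 or AA10, then look for digits.
            without_refs = re.sub(r"\b[A-Z]{1,3}\d+\b", "", formula)
            if re.search(r"\d", without_refs):
                flagged.append(cell)
        return flagged

    print(flag_hardwired_constants({
        "C2": "=B2*1.0825",      # flagged: constant buried in the formula
        "C3": "=B3*TaxFactor",   # passes: references a named input
    }))  # ['C2']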
Panko, R.R., “Users and Uses Research: Understanding the ‘Familiar Unknown,’” Journal of Organizational and End User Computing, 25(3), 2013.
Although everybody believes that they understand end user computing, this paper argues that they do not. It also argues that in not understanding end user computing, they have a very poor understanding of IT in corporations, in which central information systems are not the center of gravity any longer, if they ever were.
Panko, Raymond R., “Spreadsheet Errors: What We Know. What We Think We Can Do.” Proceedings of the Conference on Spreadsheet Risks, Audit and Development Methods, University of Greenwich, London, Sponsored by the European Spreadsheet Risks Interest Group (EuSpRIG), July 17-18, 2000.
Panko, Raymond R. and Halverson, Richard P., Jr., “An Experiment in Collaborative Spreadsheet Development,” Journal of the Association for Information Systems, 2(4), July 2001.
The experiment described in this paper had participants develop a spreadsheet alone or in groups of three working together in real time. Triads made significantly fewer errors than people working alone.
Panko, Raymond R. & Sprague, Ralph H., Jr., “Hitting the Wall: Errors in Developing and Code Inspecting a 'Simple' Spreadsheet Model,” Decision Support Systems, 22(4), April 1998, 337-353.
This paper introduced the Wall task.
Panko, Raymond R. & Halverson, Richard Jr., “Are Two Heads Better than One? (At Reducing Errors in Spreadsheet Modeling),” Office Systems Research Journal, 15(1), Spring 1997, 21-32.
A forerunner of “An Experiment in Collaborative Spreadsheet Development,” this paper had participants work alone, in dyads, and in tetrads. Dyads did not do better than individuals working alone, while tetrads did much better. The original working title was, “Four-Headed Coed Wins Spreadsheet Development Competition.”