Article (PDF available)

Abstract

Although spreadsheet programs are used for small "scratchpad" applications, they are also used to develop many large applications. In recent years, we have learned a good deal about the errors that people make when they develop spreadsheets. In general, errors seem to occur in a few percent of all cells, meaning that for large spreadsheets, the issue is how many errors there are, not whether an error exists. These error rates, although troubling, are in line with those in programming and other human cognitive domains. In programming, we have learned to follow strict development disciplines to eliminate most errors. Surveys of spreadsheet developers indicate that spreadsheet creation, in contrast, is informal, and few organizations have comprehensive policies for spreadsheet development. Although prescriptive articles have focused on such disciplines as modularization and having assumptions sections, these may be far less important than other innovations, especially cell-by-cell code inspection after the development phase.
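Since the abstract's argument is quantitative, a short worked example helps. The Python sketch below assumes a uniform, independent 2% per-cell error rate and a spreadsheet of 1,000 formula cells; both figures are assumptions for illustration, not numbers from the article.

    # If errors occur independently in a few percent of cells, a large
    # spreadsheet almost certainly contains at least one error.
    # The 2% rate and 1,000-cell size are assumed, illustrative values.
    cell_error_rate = 0.02    # "a few percent"
    n_formula_cells = 1_000   # a "large" spreadsheet

    p_at_least_one = 1 - (1 - cell_error_rate) ** n_formula_cells
    expected_errors = cell_error_rate * n_formula_cells

    print(f"P(at least one error) = {p_at_least_one:.6f}")      # ~1.000000
    print(f"Expected number of errors = {expected_errors:.0f}")  # 20

This is why, for large spreadsheets, the question becomes how many errors there are rather than whether one exists.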
... This issue goes even further when experts are aware of end-users' erroneous spreadsheet documents causing serious financial losses and search for explanations. In his research, Panko concluded that the behavior of end-users can be explained by their bad habits of thinking [10]-[12]. However, our research into computer problem-solving approaches, attention modes [7], [14], thinking modes [6], and mathability reveals that the thinking is not bad in itself, but is applied in incorrect, unsuitable situations, primarily focusing on low-mathability approaches in hyper attention mode. Closely related to this issue is the topic of digital literacy and competence, because not even the new generations are born with digital skills; these must be consciously acquired and continuously developed, much like mother-tongue language skills. ...
... On the one hand, end-users are not able to function as they should according to the definition. On the other hand, their productivity and effectiveness are much lower than expected [10]-[12], [22]. Consequently, they perfectly fit software companies' image of a good customer, but lack computational thinking skills [21]. ...
... Considering this purpose, the major characteristics of digital documents are changeability and modifiability. This intention is well served by the definitions of correctly edited and formatted spreadsheet [10]-[12], [28], [29] and text documents [18]. ...
Article
The dissemination of research results might be as crucial as the research itself. The two widely accepted major forms of dissemination are written papers and live presentations. On the surface, if we view the problem in hyper attention mode, these documents are different in nature. However, their preparation requires the same problem-solving approach, and beyond that, they share the fundamental rules of design, since both are extended text documents with varying proportions of text and/or non-text content and static and/or dynamic media types. Closely related to this problem is the phenomenon of different types of attention (hyper and deep attention), thinking modes (fast and slow), and problem-solving approaches (high- and low-mathability). In a world of immense and various forms of data and information, the role of so-called hyper attention is fundamental and inevitable, but the presence of deep attention is essential as well. In the present paper, the knowledge items involved and shared in the design and preparation of text-based documents are detailed from the view of concept-based problem-solving and the perspective of attention and thinking modes, along with samples originating from various sources and subject matters. The aims are to discuss the theoretical background of effective and efficient document design and preparation, and to call attention to the consequences of ignoring or neglecting the proper use of attention types and thinking modes.
... From the viewpoint of personal attitudes to spreadsheet risk, almost half of the respondents believe that the risk entailed by the use of spreadsheets is low; fewer than 10% believe either that the risk does not exist or that it is extremely high. This can be interpreted as a product of overconfidence, which was demonstrated several times during the research, and about which Panko (1998) points out that it is "corrosive because it tends to blind people to the need for taking steps to reduce risks." It is alarming that one third of respondents working in top management believe that there are no spreadsheet risks. ...
... The fact that development of applications was optional and unrelated to the purposes of this study reflects the situation in industry where the ability to develop small applications is a necessary part of many jobs (Jawahar & Elango, 2001), yet few spreadsheet developers have spreadsheet development in their job descriptions (Panko, 2000). Because the successful performance of their 'company' had direct and significant implications for their grade in the course, the allocation of grades provided external motivation for performance in the game. ...
... Researchers (Panko, 2005; Powell et al., 2007) have identified various errors such as: ...
Thesis
Full-text available
How long is it going to last, and what is it going to cost? These are the two most basic questions asked during a turnaround, by all parties affected by it. The answers may influence whether an employee leaves or stays, and whether an investor will want to invest in the turnaround. Being able to answer these basic questions may shape the outcome of the actual turnaround. In this thesis, an algebraic model was developed to determine the duration, the breakeven point, the cash nadir point, and the value of the nadir during a turnaround. An autoethnographic approach was adopted to understand the research problem, which resulted in a process of mathematising the knowledge gained from real-world experiences. Fundamental moments of turnarounds were derived from the Variable Finance Capacity model to answer the key questions of a turnaround. The thesis provides a framework for utilising the model and the fundamental moments to make informed decisions.
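The Variable Finance Capacity model itself is not reproduced in this summary, so only the classical breakeven arithmetic behind the "what is it going to cost" question can be sketched. The Python snippet below is not the thesis's model, and every figure in it is invented for illustration.

    # Classical breakeven arithmetic, as background for the questions the
    # thesis formalises. NOT the Variable Finance Capacity model; all
    # figures are invented.
    fixed_costs_per_month = 50_000.0   # assumed
    price_per_unit = 25.0              # assumed
    variable_cost_per_unit = 15.0      # assumed

    contribution_margin = price_per_unit - variable_cost_per_unit
    breakeven_units = fixed_costs_per_month / contribution_margin
    print(f"Breakeven volume: {breakeven_units:.0f} units per month")  # 5000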
... Tools that are built on top of Microsoft Excel, such as qBase or DART-PCR, involve copy/paste manipulations from the raw data files, necessitating the careful triage of results to fit the confines of the formulas and dataflow. This can lead to calculation errors that go unnoticed [6]. Recently developed tools, such as SATQPCR [7], PIPE-T [8] or Auto-qPCR [9], come with limitations regarding input formats, the ability to easily adjust plotting parameters, or the lack of comprehensive functionalities such as inter-plate calibration. ...
Article
Full-text available
Background: Reverse transcription quantitative real-time PCR (RT-qPCR) is a well-established method for analysing gene expression. Most RT-qPCR experiments in the field of microbiology aim to detect transcriptional changes by relative quantification, that is, by comparing the expression level of a specific gene between samples through the application of a calibration condition and internal reference genes. Due to the numerous data processing procedures and the factors that can influence the final result, relative expression analysis and the interpretation of RT-qPCR data are still not trivial and often necessitate multiple separate software packages, each capable of performing specific functions.

Results: Here we present qRAT, a stand-alone desktop application based on R that automatically processes raw output data from any qPCR machine using well-established, state-of-the-art statistical and graphical techniques. The ability of qRAT to analyse RT-qPCR data was evaluated using two example datasets generated in our laboratory. The tool successfully completed the procedure in both cases, returning the expected results. The current implementation includes functionalities for parsing, filtering, normalising and visualising relative RT-qPCR data, such as determining the relative quantity and the fold change of differentially expressed genes, as well as correcting inter-plate variation in multiple-plate experiments.

Conclusion: qRAT provides a comprehensive, straightforward, and easy-to-use solution for the relative quantification of RT-qPCR data that requires no programming knowledge or additional software installation. All application features are available for free and without requiring a login or registration.
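The abstract does not show qRAT's internals, but the fold-change calculation it refers to is conventionally the Livak (delta-delta-Ct) method, which can be sketched in a few lines of Python. The function name and all Ct values below are assumptions for the example, not qRAT code.

    # Livak 2^-ddCt relative quantification: illustrative only, NOT qRAT's
    # implementation. All Ct values are made up.
    def fold_change(ct_target_sample, ct_ref_sample,
                    ct_target_calib, ct_ref_calib):
        """Expression of the target gene in the sample relative to the
        calibrator, normalised to an internal reference gene."""
        d_ct_sample = ct_target_sample - ct_ref_sample  # normalise sample
        d_ct_calib = ct_target_calib - ct_ref_calib     # normalise calibrator
        dd_ct = d_ct_sample - d_ct_calib
        return 2 ** -dd_ct

    # Target Ct drops from 25.0 to 23.0 while the reference stays at 18.0:
    print(fold_change(23.0, 18.0, 25.0, 18.0))  # 4.0, i.e. ~4-fold up-regulation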
... On the other hand, it is well documented that the flexibility of spreadsheets also makes them error-prone [79]-[81], [83]. Without formal types or data structures, spreadsheets suffer from classes of error that in traditional programming languages are easily detected or completely prevented. ...
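A toy contrast makes the excerpt's point concrete. In a spreadsheet, a cell holding the text "1,200" can flow into a SUM and be silently treated as text or as zero; a programming language rejects the operation outright. The data below are invented for illustration.

    # One revenue figure was imported as text, a common spreadsheet defect.
    revenue = ["1,200", 950, 1100]

    try:
        total = sum(revenue)  # Python refuses to add str to int
    except TypeError as err:
        print(f"Caught at the language level: {err}")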
... However, sport scientists and performance analysts should be aware of the dangers and risks associated with data analysis (and, subsequently, visualisation) in spreadsheet programs. For instance, in a summary of 13 audits of real-world spreadsheets, an average of 88% contained errors (Panko, 1998). Common issues with spreadsheets, which do not bode well for data analysis and subsequent visualisation, include inconsistent naming, extra white space between characters in cells, misrepresented dates, incorrectly coded missing data, irregular data sheets, and performing calculations in raw data files (Broman & Woo, 2018). ...
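Checks for two of the issues Broman & Woo list, stray whitespace and ad-hoc missing-data codes, can be automated. The sketch below is a minimal example; the file name and the -999 sentinel are assumptions, not details from either cited paper.

    import pandas as pd

    df = pd.read_excel("match_stats.xlsx")  # hypothetical data file

    # Flag text cells with leading/trailing whitespace.
    for col in df.columns:
        strings = df[col][df[col].map(lambda v: isinstance(v, str))]
        bad = strings.map(lambda s: s != s.strip())
        if bad.any():
            print(f"{col}: {int(bad.sum())} cell(s) with stray whitespace")

    # Flag a common ad-hoc missing-data code (assumed sentinel value).
    SENTINEL = -999
    for col in df.select_dtypes(include="number").columns:
        hits = int((df[col] == SENTINEL).sum())
        if hits:
            print(f"{col}: {hits} value(s) equal to {SENTINEL}; recode as NaN?")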
... Translation error, logic error [70]; assignment bug, iteration bug, array bug [36]; logical bug [28]; lexical bugs [29]: grouped here under Language Error (LE) ...
Preprint
Full-text available
Generative machine learning models have recently been applied to source code, for use cases including translating code between programming languages, creating documentation from code, and auto-completing methods. Yet, state-of-the-art models often produce code that is erroneous or incomplete. In a controlled study with 32 software engineers, we examined whether such imperfect outputs are helpful in the context of Java-to-Python code translation. When aided by the outputs of a code translation model, participants produced code with fewer errors than when working alone. We also examined how the quality and quantity of AI translations affected the work process and quality of outcomes, and observed that providing multiple translations had a larger impact on the translation process than varying the quality of provided translations. Our results tell a complex, nuanced story about the benefits of generative code models and the challenges software engineers face when working with their outputs. Our work motivates the need for intelligent user interfaces that help software engineers effectively work with generative code models in order to understand and evaluate their outputs and achieve superior outcomes to working alone.
... Finally, this theme also probed techniques that are employed when documenting spreadsheets. Statistics in Table 7 support the argument that documentation is rare in spreadsheets (Panko, 1998). They reveal that employees simply use cell comments or text in their spreadsheets as approaches to document their spreadsheet models. ...
Preprint
Full-text available
This paper explores the impacts of spreadsheets on business operations in a water utility parastatal in Malawi, Sub-Saharan Africa. The organisation is a typical example of a semi-government body operating in a technologically underdeveloped country. The study focused on the scope of spreadsheet use and the spreadsheet life cycle, as well as organisational policy and governance. The results will help define future spreadsheet usage by informing new approaches for managing the potential risks associated with spreadsheets in the organisation. Generally, the findings indicate that the proliferation of spreadsheets in the organisation has provided an enabling environment for business automation. The paper also highlights management, technological, and human factor issues contributing to the high risks associated with pervasive spreadsheet use. The conclusions drawn from the research confirm that there is ample room for improvement in many areas, such as the implementation of comprehensive policies and regulations governing spreadsheet development processes and adoption.
Article
Full-text available
This research explores how group- and organizational-level factors affect errors in administering drugs to hospitalized patients. Findings from patient care groups in two hospitals show systematic differences not just in the frequency of errors, but also in the likelihood that errors will be detected and learned from by group members. Implications for learning in and by work teams in general are discussed.
Chapter
The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: 1) blocked subject-project studies, 2) replicated project studies, 3) multi-project variation studies, and 4) single project studies. The four applications are in the areas of, respectively, 1) software testing strategies, 2) Cleanroom software development, 3) characteristic software metric sets, and 4) software error analysis.