Preprint · PDF Available


One source of software project challenges and failures is the systematic errors introduced by human cognitive biases. Although extensively explored in cognitive psychology, investigations concerning cognitive biases have only recently gained popularity in software engineering research. This paper therefore systematically maps, aggregates and synthesizes the literature on cognitive biases in software engineering to generate a comprehensive body of knowledge, understand the state of the art, and provide guidelines for future research and practice. Focusing on bias antecedents, effects and mitigation techniques, we identified 65 articles (published between 1990 and 2016), which investigate 37 cognitive biases. Despite strong and increasing interest, the results reveal a scarcity of research on mitigation techniques and poor theoretical foundations in understanding and interpreting cognitive biases. Although bias-related research has generated many new insights in the software engineering community, specific bias mitigation techniques are still needed for software professionals to overcome the deleterious effects of cognitive biases on their work.
[P1] G. Allen and J. Parsons, “Is query reuse potentially harmful? an-
choring and adjustment in adapting existing database queries,”
Information Systems Research, vol. 21, no. 1, pp. 56–77, 2010.
[P2] G. Allen and J. Parsons, “A little help can be a bad thing:
Anchoring and adjustment in adaptive query reuse,” ICIS 2006
Proceedings, p. 45, 2006.
[P3] G. J. Browne and V. Ramesh, “Improving information require-
ments determination: a cognitive perspective,” Information &
Management, vol. 39, no. 8, pp. 625–645, 2002.
[P4] G. Calikli and A. Bener, “Empirical analyses of the factors
affecting confirmation bias and the effects of confirmation bias
on software developer/tester performance,” in Proceedings of
the 6th International Conference on Predictive Models in Software
Engineering. ACM, 2010, p. 10.
[P5] G. Calikli, A. Bener, and B. Arslan, “An analysis of the effects of
company culture, education and experience on confirmation bias
levels of software developers and testers,” in Proceedings of the
32nd ACM/IEEE International Conference on Software Engineering-
Volume 2. ACM, 2010, pp. 187–190.
[P6] G. Calikli, A. Bener, T. Aytac, and O. Bozcan, “Towards a metric
suite proposal to quantify confirmation biases of developers,” in
Empirical Software Engineering and Measurement, 2013 ACM/IEEE
International Symposium on. IEEE, 2013, pp. 363–372.
[P7] G. Calikli, A. Bener, B. Caglayan, and A. T. Misirli, “Modeling
human aspects to enhance software quality management,” in
Thirty Third International Conference on Information Systems, 2012.
[P8] G. Çalıklı and A. B. Bener, “Influence of confirmation biases of
developers on software quality: an empirical study,” Software
Quality Journal, vol. 21, no. 2, pp. 377–416, 2013.
[P9] G. Calikli and A. Bener, “An algorithmic approach to missing
data problem in modeling human aspects in software devel-
opment,” in Proceedings of the 9th International Conference on
Predictive Models in Software Engineering. ACM, 2013, p. 10.
[P10] N. Chotisarn and N. Prompoon, “Forecasting software damage
rate from cognitive bias in software requirements gathering
and specification process,” in Information Science and Technology
(ICIST), 2013 International Conference on. IEEE, 2013, pp. 951–
[P11] N. Chotisarn and N. Prompoon, “Predicting software damage
rate from cognitive bias in software design process,” in Proceed-
ings of the 2013 International Conference on Information, Business and
Education Technology (ICIBET 2013). Atlantis Press, 2013.
[P12] P. Conroy and P. Kruchten, “Performance norms: An approach to
rework reduction in software development,” in Electrical & Com-
puter Engineering (CCECE), 2012 25th IEEE Canadian Conference
on. IEEE, 2012, pp. 1–6.
[P13] K. A. De Graaf, P. Liang, A. Tang, and H. Van Vliet, “The impact
of prior knowledge on searching in software documentation,” in
Proceedings of the 2014 ACM symposium on Document engineering.
ACM, 2014, pp. 189–198.
[P14] J. DeFranco-Tommarello and F. P. Deek, “Collaborative problem
solving and groupware for software development,” Information
Systems Management, vol. 21, no. 1, pp. 67–80, 2004.
[P15] I. Hadar, “When intuition and logic clash: The case of the object-
oriented paradigm,” Science of Computer Programming, vol. 78,
no. 9, pp. 1407–1426, 2013.
[P16] R. Jain, J. Muro, and K. Mohan, “A cognitive perspective on pair
programming,” AMCIS 2006 Proceedings, p. 444, 2006.
[P17] K. Mohan and R. Jain, “Using traceability to mitigate cognitive
biases in software development,” Communications of the ACM,
vol. 51, no. 9, pp. 110–114, 2008.
[P18] K. Mohan, N. Kumar, and R. Benbunan-Fich, “Examining com-
munication media selection and information processing in soft-
ware development traceability: An empirical investigation,”
IEEE Transactions on Professional Communication, vol. 52, no. 1,
pp. 17–39, 2009.
[P19] R. Mohanani, P. Ralph, and B. Shreeve, “Requirements fixation,”
in Proceedings of the 36th International Conference on Software Engi-
neering. ACM, 2014, pp. 895–906.
[P20] M. Nurminen, P. Suominen, S. Äyrämö, and T. Kärkkäinen,
“Applying semiautomatic generation of conceptual models to
decision support systems domain,” in IASTED International Con-
ference on Software Engineering (SE 2009). ACTA Press, 2009.
[P21] P. Ralph, “Possible core theories for software engineering,” in
Software Engineering (GTSE), 2013 2nd SEMAT Workshop on a
General Theory of. IEEE, 2013, pp. 35–38.
[P22] P. Ralph, “Toward a theory of debiasing software development,”
in EuroSymposium on Systems Analysis and Design. Springer, 2011,
pp. 92–105.
[P23] J. E. Robbins, D. M. Hilbert, and D. F. Redmiles, “Software
architecture critics in argo,” in Proceedings of the 3rd international
conference on Intelligent user interfaces. ACM, 1998, pp. 141–144.
[P24] E. Shalev, M. Keil, J. S. Lee, and Y. Ganzach, “Optimism bias in
managing IT project risks: A construal level theory perspective,”
in European Conference on Information Systems (ECIS), 2014.
[P25] O. Shmueli, N. Pliskin, and L. Fink, “Explaining over-
requirement in software development projects: an experimental
investigation of behavioral effects,” International Journal of Project
Management, vol. 33, no. 2, pp. 380–394, 2015.
[P26] F. Shull, “Engineering values: From architecture games to agile
requirements,” IEEE Software, vol. 30, no. 2, pp. 2–6, 2013.
[P27] F. Shull, “Our best hope,” IEEE Software, vol. 31, no. 4, pp. 4–8, 2014.
[P28] K. Siau, Y. Wand, and I. Benbasat, “The relative importance
of structural constraints and surface semantics in information
modeling,” Information Systems, vol. 22, no. 2-3, pp. 155–170, 1997.
[P29] B. G. Silverman, “Critiquing human judgment using knowledge-
acquisition systems,” AI Magazine, vol. 11, no. 3, p. 60, 1990.
[P30] E. D. Smith and A. Terry Bahill, “Attribute substitution in sys-
tems engineering,” Systems Engineering, vol. 13, no. 2, pp. 130–
148, 2010.
[P31] L. M. Leventhal, B. M. Teasley, D. S. Rohlman, and K. Instone,
“Positive test bias in software testing among professionals: A re-
view,” in International Conference on Human-Computer Interaction.
Springer, 1993, pp. 210–218.
[P32] W. Stacy and J. MacMillan, “Cognitive bias in software engineer-
ing,” Communications of the ACM, vol. 38, no. 6, pp. 57–63, 1995.
[P33] A. Tang, “Software designers, are you biased?” in Proceedings of
the 6th International Workshop on SHAring and Reusing Architectural
Knowledge. ACM, 2011, pp. 1–8.
[P34] A. Tang and M. F. Lau, “Software architecture review by associa-
tion,” Journal of Systems and Software, vol. 88, pp. 87–101, 2014.
[P35] P. Tobias and D. S. Spiegel, “Is design the preeminent protagonist
in user experience?” Ubiquity, vol. 2009, no. May, p. 1, 2009.
[P36] R. J. Wirfs-Brock, “Giving design advice,” IEEE Software, vol. 24,
no. 4, 2007.
[P37] E. D. Smith, Y. J. Son, M. Piattelli-Palmarini, and A. Terry Bahill,
“Ameliorating mental mistakes in tradeoff studies,” Systems En-
gineering, vol. 10, no. 3, pp. 222–240, 2007.
[P38] N. C. Haugen, “An empirical study of using planning poker for
user story estimation,” in Agile Conference, 2006. IEEE, 2006, pp.
[P39] A. J. Ko and B. A. Myers, “A framework and methodology for
studying the causes of software errors in programming systems,”
Journal of Visual Languages & Computing, vol. 16, no. 1-2, pp. 41–
84, 2005.
[P40] K. Siau, Y. Wand, and I. Benbasat, “When parents need not have
children - cognitive biases in information modeling,” in Inter-
national Conference on Advanced Information Systems Engineering.
Springer, 1996, pp. 402–420.
[P41] S. Chakraborty, S. Sarker, and S. Sarker, “An exploration into
the process of requirements elicitation: A grounded approach,”
Journal of the Association for Information Systems, vol. 11, no. 4, p.
212, 2010.
[P42] J. Parsons and C. Saunders, “Cognitive heuristics in software en-
gineering applying and extending anchoring and adjustment to
artifact reuse,” IEEE Transactions on Software Engineering, vol. 30,
no. 12, pp. 873–888, 2004.
[P43] M. G. Pitts and G. J. Browne, “Improving requirements elicita-
tion: an empirical investigation of procedural prompts,” Information
Systems Journal, vol. 17, no. 1, pp. 89–110, 2007.
[P44] G. Calikli, B. Aslan, and A. Bener, “Confirmation bias in software
development and testing: An analysis of the effects of company
size, experience and reasoning skills,” Workshop on Psychology of
Programming Interest Group (PPIG), 2010.
[P45] C. Mair and M. Shepperd, “Human judgement and software met-
rics: vision for the future,” in Proceedings of the 2nd international
workshop on emerging trends in software metrics. ACM, 2011, pp.
[P46] M. Jørgensen, “Identification of more risks can lead to increased
over-optimism of and over-confidence in software development
effort estimates,” Information and Software Technology, vol. 52,
no. 5, pp. 506–516, 2010.
[P47] M. Jørgensen and T. Halkjelsvik, “The effects of request formats
on judgment-based effort estimation,” Journal of Systems and
Software, vol. 83, no. 1, pp. 29–36, 2010.
[P48] M. Jørgensen, K. H. Teigen, and K. Moløkken, “Better sure than
safe? over-confidence in judgement based software development
effort prediction intervals,” Journal of Systems and Software, vol. 70,
no. 1-2, pp. 79–93, 2004.
[P49] M. Jørgensen, “Individual differences in how much people are
affected by irrelevant and misleading information,” in Second
European Conference on Cognitive Science, Delphi, Greece, Hellenic
Cognitive Science Society, 2007.
[P50] K. Moløkken and M. Jørgensen, “Expert estimation of web-
development projects: are software professionals in technical
roles more optimistic than those in non-technical roles?” Empiri-
cal Software Engineering, vol. 10, no. 1, pp. 7–30, 2005.
[P51] T. K. Abdel-Hamid, K. Sengupta, and D. Ronan, “Software
project control: An experimental investigation of judgment with
fallible information,” IEEE Transactions on Software Engineering,
vol. 19, no. 6, pp. 603–612, 1993.
[P52] T. Connolly and D. Dean, “Decomposed versus holistic estimates
of effort required for software writing tasks,” Management Sci-
ence, vol. 43, no. 7, pp. 1029–1045, 1997.
[P53] M. Jørgensen and D. I. Sjøberg, “Software process improvement
and human judgement heuristics,” Scandinavian Journal of Infor-
mation Systems, vol. 13, no. 1, p. 2, 2001.
[P54] K. Moløkken-Østvold and M. Jørgensen, “Group processes in
software effort estimation,” Empirical Software Engineering, vol. 9,
no. 4, pp. 315–334, 2004.
[P55] G. Calikli and A. Bener, “Preliminary analysis of the effects of
confirmation bias on software defect density,” in Proceedings of
the 2010 ACM-IEEE International Symposium on Empirical Software
Engineering and Measurement. ACM, 2010, p. 68.
[P56] E. Løhre and M. Jørgensen, “Numerical anchors and their strong
effects on software development effort estimates,” Journal of
Systems and Software, vol. 116, pp. 49–56, 2016.
[P57] M. Jørgensen and B. Faugli, “Prediction of overoptimistic predic-
tions,” in 10th International Conference on Evaluation and Assess-
ment in Software Engineering (EASE). Keele University, UK, 2006,
pp. 10–11.
[P58] M. Jørgensen and T. Gruschke, “Industrial use of formal software
cost estimation models: Expert estimation in disguise?” in Proc.
Conf. Evaluation and Assessment in Software Eng.(EASE05), 2005,
pp. 1–7.
[P59] M. Jørgensen and S. Grimstad, “Over-optimism in software de-
velopment projects: ‘the winner’s curse’,” in Electronics, Commu-
nications and Computers, 2005. CONIELECOMP 2005. Proceedings.
15th International Conference on. IEEE, 2005, pp. 280–285.
[P60] M. Jørgensen and D. Sjøberg, “The importance of not learning
from experience,” in Proc. European Software Process Improvement
Conf, 2000, pp. 2–2.
[P61] K. Moløkken and M. Jørgensen, “Software effort estimation:
Unstructured group discussion as a method to reduce individual
biases,” in The 15th Annual Workshop of the Psychology of Program-
ming Interest Group (PPIG 2003), 2003, pp. 285–296.
[P62] K. Molokken-Ostvold and N. C. Haugen, “Combining estimates
with planning poker–an empirical study,” in Software Engineering
Conference, 2007. ASWEC 2007. 18th Australian. IEEE, 2007, pp.
[P63] J. A. O. G. da Cunha and H. P. de Moura, “Towards a substantive
theory of project decisions in software development project-
based organizations: A cross-case analysis of IT organizations
from Brazil and Portugal,” in Information Systems and Technologies
(CISTI), 2015 10th Iberian Conference on. IEEE, 2015, pp. 1–6.
[P64] H. van Vliet and A. Tang, “Decision making in software archi-
tecture,” Journal of Systems and Software, vol. 117, pp. 638–644, 2016.
[P65] J. A. O. da Cunha, F. Q. da Silva, H. P. de Moura, and F. J. Vas-
concellos, “Decision-making in software project management: A
qualitative case study of a private organization,” in Proceedings of
the 9th International Workshop on Cooperative and Human Aspects of
Software Engineering. ACM, 2016, pp. 26–32.
... Other fields more closely related to engineering design, such as software engineering, have studied bias within their field (Mohanani et al. 2020). Mohanani et al. (2020) systematically reviewed bias in software engineering and identified which biases have been researched. Although their work shows that bias has been explored in a field closely related to engineering design, they also find that there have been limited studies on ways to mitigate bias (Mohanani et al. 2020). The goal of this work is not only to pinpoint potential bias but to drive future work to find ways to mitigate it. ...
Engineering design research has focused on developing and refining methods and on evaluating design education, design research and design in practice. One important aspect that is not thoroughly investigated is the influence of bias within these spaces of design. Bias is known to impact the interpretation of information, decision-making and practices in all areas. These factors are vital in engineering design education, practice and research, emphasizing the importance of investigating bias. The first goal of this study is to highlight and synthesize existing bias research in design education, research and practice. The second goal is to identify areas where bias may be under-researched or under-reported in design. To achieve these goals, a comparative analysis is performed against a comparable field: medicine. Many parallels exist between the two fields. Patient–provider and designer–end-user relationships are comparable. Medical education is comparable to design education in its cooperative, inquiry-based and integrated learning pedagogy. Lastly, physicians and design engineers both solve cognitively complex, systems-oriented problems. Leveraging research on bias in medicine enables us to highlight gaps in engineering design. Recommendations are made to help design researchers address these gaps.
... While such heuristics save time and reduce complexity, they also lead people to systematic biases in their decision-making (Frid-Nielsen and Jensen 2021). SE researchers have recognized many such biases in our field, although overlooked categories of biases remain (Mohanani et al. 2020). For instance, regarding the social biases category in the SE literature, Mohanani et al. (2020) identified studies concerning only the bandwagon effect: people's tendency to align with the majority opinion and to do or believe things because the majority of other people do or believe them (VandenBos 2015). However, this category also includes other biases, such as cultural bias, stereotypical bias, and attribution error (Fleischmann et al. 2014). Given the socio-technical nature of our field, all of these might be worthy of further investigation. Moreover, Mohanani et al. (2020) report that few debiasing techniques have been explored in the SE literature, which ultimately means we need to improve on investigations of interventions to deal with the biases that negatively impact SE practice. ...
Traditionally, Software Effort Estimation (SEE) has been portrayed as a technical prediction task, for which we seek accuracy through improved estimation methods and a thorough consideration of effort predictors. In this article, our objective is to make explicit the perspective of SEE as a behavioral act, bringing attention to the fact that human biases and noise are relevant components in estimation errors and acknowledging that SEE is more than a prediction task. We employed a thematic analysis of factors affecting expert-judgment software estimates to satisfy this objective. We show that estimators do not necessarily behave entirely rationally given the information they have as input for estimation. The reception of estimation requests, the communication of software estimates, and their use also impact the estimation values — something unexpected if estimators were solely focused on SEE as a prediction task. Based on this, we also matched SEE interventions to behavioral ones from Behavioral Economics, showing that, although we are already adopting behavioral insights to improve our estimation practices, there are still gaps to build upon. Furthermore, we assessed the strength of evidence for each of our review findings to derive recommendations for practitioners on the SEE interventions they can confidently adopt to improve their estimation processes. In assessing the strength of evidence, we adopted the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach. It enabled us to point to concrete research paths for strengthening the existing evidence about SEE interventions based on the dimensions of the GRADE-CERQual evaluation scheme.
The question of when different programmers tend to commit the same errors is a critical issue for achieving fault diversity in fault tolerance. The problem is interdisciplinary and related to theories of human error in cognitive psychology. This paper proposes a psychological framework that combines Rasmussen's performance levels with cross-level errors, represented by post-completion error, to model situations in which different programmers are prone to making the same errors. To validate the framework, we conducted an experiment in which 200 student programmers independently solved the same problem with the same tool and language. The results indicate that programmers are unlikely to commit the same errors in skill-based performance but are most likely to make the same errors in rule-based performance. These findings suggest that natural independent development may be less effective in preventing common errors in functions involving rule-based performance and post-completion scenarios, whereas it could be effective in preventing common errors in skill-based and knowledge-based performances. The results provide new insights into strategies for avoiding coincident faults in N-version programming from a human-factors perspective. Keywords: Software Reliability, Fault Tolerance, Coincident Fault, Software Diversity, Cognitive Model
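The coincident-fault concern in the abstract above can be illustrated with a minimal, invented sketch of N-version programming: three hypothetical versions of the same routine combined by majority vote, where one version carries a rule-based slip. The function names, the averaging task, and the fault are all made up for illustration.

```python
from collections import Counter

# Hypothetical sketch of N-version programming: three independently
# written versions of "average of a list", combined by majority vote.
def version_a(xs):
    return sum(xs) / len(xs)             # correct

def version_b(xs):
    return sum(x for x in xs) / len(xs)  # correct, written differently

def version_c(xs):
    return sum(xs) // len(xs)            # rule-based slip: integer division

def majority_vote(xs):
    results = [version_a(xs), version_b(xs), version_c(xs)]
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None  # no majority -> detected failure

print(majority_vote([1, 2, 4]))  # the two correct versions outvote the faulty one
```

Voting masks the fault here only because it is not coincident; if version_b shared the same slip as version_c, the vote would settle on the wrong value, which is exactly the failure mode the study above associates with rule-based performance.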
Biases are a major issue in the field of Artificial Intelligence (AI). They can come from the data, or be algorithmic or cognitive. While the first two types of bias are studied in the literature, few works focus on the third, even though the task of designing AI systems is conducive to the emergence of cognitive biases. To address this gap, we propose a study of the impact of cognitive biases during the development cycle of AI projects. Our study focuses on six cognitive biases selected for their impact on ideation and development processes: Conformity, Confirmation, Illusory correlation, Measurement, Presentation, and Normality. Our major contribution is a cognitive-bias awareness tool, in the form of a mind map, for AI professionals that addresses the impact of cognitive biases at each stage of an AI project. The tool was evaluated through semi-structured interviews and Technology Acceptance Model (TAM) questionnaires. User testing shows that (i) the majority of participants admitted to being more aware of cognitive biases in their work thanks to our tool, (ii) the mind map would improve the quality of their decisions, their confidence in their realization, and their satisfaction with the work done, which directly impacts their performance and efficiency, and (iii) the mind map was well received by the professionals, who appropriated it by planning how to integrate it into their current work process: for awareness-raising during the onboarding of new employees and to develop reflexes for questioning their decision-making. Keywords: cognitive bias, AI project development, awareness, user-centered design
Understanding heuristics and cognitive biases might lessen their impact, since they can cause decision errors. Based on survey responses from case studies, this study investigates the fundamental understanding of heuristics and cognitive biases among engineering students and junior engineers from Germany and Thailand. The results indicated that the majority of participants knew very little about them; after a brief lecture, only a few students from Germany knew about them. They had less of an impact on students who already knew them than on those who did not. However, the engineers were unaware of them yet were able to limit their effects, which implies that they can manage the effects of cognitive biases without being aware of them. Therefore, experience is another crucial element in lessening the impact of biases. Behavior can also affect how much cognitive biases influence decision-making, and culture and environment affect ways of thinking. Although the findings suggest that knowledge is not the primary element in decreasing the impact of heuristics and cognitive biases, it is still vital to explain them and to further study the level of knowledge and the effectiveness of knowledge transfer to determine how they may have an impact.
Context: The role of expert judgement is essential in our quest to improve software project planning and execution. However, its accuracy is dependent on many factors, not least the avoidance of judgement biases, such as the anchoring bias, arising from being influenced by initial information, even when it's misleading or irrelevant. This strong effect is widely documented. Objective: We aimed to replicate this anchoring bias using professionals and, novel in a software engineering context, explore de-biasing interventions through increasing knowledge and awareness of judgement biases. Method: We ran two series of experiments in company settings with a total of 410 software developers. Some developers took part in a workshop to heighten their awareness of a range of cognitive biases, including anchoring. Later, the anchoring bias was induced by presenting low or high productivity values, followed by the participants' estimates of their own project productivity. Our hypothesis was that the workshop would lead to reduced bias, i.e., work as a de-biasing intervention. Results: The anchors had a large effect (robust Cohen's d = 1.19) in influencing estimates. This was substantially reduced in those participants who attended the workshop (robust Cohen's d = 0.72). The reduced bias related mainly to the high anchor. The de-biasing intervention also led to a threefold reduction in estimate variance. Conclusion: The impact of anchors upon judgement was substantial. Learning about judgement biases does appear capable of mitigating, although not removing, the anchoring bias. The positive effect of de-biasing through learning about biases suggests that it has value.
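The replication above reports effect sizes as robust Cohen's d values (1.19 before the workshop, 0.72 after). As a reminder of how the plain, non-robust statistic is computed, here is a sketch with invented productivity estimates; the numbers below are not from the study.

```python
import statistics

# Plain Cohen's d with a pooled sample standard deviation. The study above
# uses a robust variant; these productivity estimates are invented.
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    mean_diff = statistics.mean(group1) - statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return mean_diff / pooled_sd

high_anchor = [12, 14, 15, 13, 16]  # estimates given after a high anchor
low_anchor = [8, 9, 10, 9, 11]      # estimates given after a low anchor
print(round(cohens_d(high_anchor, low_anchor), 2))
```

A value around 1.19, as in the replication, would mean the anchor shifted estimates by more than one pooled standard deviation, which is conventionally read as a large effect.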
Conference Paper
The majority of software developers work in teams and are thus influenced by team norms. Norms are shared expectations of how to behave that regulate the interaction between team members. The aim of this study is to gain more knowledge about team norms in software teams and to increase the understanding of how norms influence teamwork in agile software development projects. We conducted a study of norms in four agile teams located in Norway and Malaysia. The analysis of 22 interviews revealed a varied set of both injunctive and descriptive norms. Our results suggest that team norms play an important role in enabling team performance.
Conference Paper
This paper explores the possibility that requirements engineering is, in principle, detrimental to software project success. Requirements engineering is conceptually divided into two distinct processes: sense making (learning about the project context) and problem structuring (specifying problems, goals, requirements, constraints, etc.). An interdisciplinary literature review revealed substantial evidence that while sense making improves design performance, problem structuring reduces design performance. Future research should therefore investigate decoupling the sense making aspects of requirements engineering from the problem structuring aspects.
Conference Paper
Background: During all levels of software testing, the goal should be to fail the code. However, software developers and testers are more likely to choose positive tests than negative ones due to the phenomenon called confirmation bias, defined as the tendency of people to verify their hypotheses rather than to refute them. The literature offers theories about the possible effects of confirmation bias on software development and testing: due to the tendency towards positive tests, many software defects remain undetected, which in turn increases software defect density. Aims: In this study, we analyze factors affecting confirmation bias in order to discover methods to circumvent it. The factors we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance. Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were conducted with 36 employees of a large-scale telecommunication company in Europe as well as 28 graduate computer engineering students of Bogazici University, for a total of 64 subjects. We evaluated the outcomes of these tests using the metrics we proposed, in addition to some basic methods inherited from the cognitive psychology literature. Results: The results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are differentiating factors in low confirmation bias levels.
Moreover, the analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed a direct correlation between confirmation bias and defect proneness of the code. Conclusions: Our findings show that strong logical reasoning and hypothesis-testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis-testing skills of their employees by designing training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics for software defect prediction. We believe that confirmation bias metrics would improve the prediction performance of the learning-based defect prediction models we have been building over a decade.
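The positive-versus-negative test distinction in the abstract above can be made concrete with a small invented example: a hypothetical validator whose defect only a failure-seeking (negative) test exposes. The function, its assumed specification (usable port numbers 1–65535), and the bug are all made up for illustration.

```python
# Hypothetical validator: suppose the specification says usable TCP port
# numbers are 1..65535. The implementation is wrong at both ends of the range.
def is_valid_port(value):
    return 0 <= value <= 65536  # bug: wrongly accepts 0 and 65536

# Positive tests (confirmation-biased): chosen to confirm "the code works".
assert is_valid_port(80)
assert is_valid_port(65535)

# Negative tests: chosen to fail the code. The last one exposes the defect.
assert not is_valid_port(-1)    # rejected, as the spec requires
print(is_valid_port(65536))     # prints True although the spec says invalid
```

All the positive tests pass, so a confirmation-biased tester would stop here; only the deliberate attempt to fail the code reveals the boundary defect, matching the paper's account of why a preference for positive tests lets defects survive and inflates defect density.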
Ainslie argues that our responses to the threat of our own inconsistency determine the basic fabric of human culture. He suggests that individuals are more like populations of bargaining agents than like the hierarchical command structures envisaged by cognitive psychologists. The forces that create and constrain these populations help us understand so much that is puzzling in human action and interaction: from addictions and other self-defeating behaviors to the experience of willfulness, from pathological over-control and self-deception to subtler forms of behavior such as altruism, sadism, gambling, and the 'social construction' of belief. This book integrates approaches from experimental psychology, philosophy of mind, microeconomics, and decision science to present one of the most profound and expert accounts of human irrationality available. It will be of great interest to philosophers and an important resource for professionals and students in psychology, economics and political science.
Do you… Use a computer to perform analysis or simulations in your daily work? Write short scripts or record macros to perform repetitive tasks? Need to integrate off-the-shelf software into your systems or require multiple applications to work together? Find yourself spending too much time working the kinks out of your code? Work with software engineers on a regular basis but have difficulty communicating or collaborating? If any of these sound familiar, then you may need a quick primer in the principles of software engineering. Nearly every engineer, regardless of field, will need to develop some form of software during their career. Without exposure to the challenges, processes, and limitations of software engineering, developing software can be a burdensome and inefficient chore. In What Every Engineer Should Know about Software Engineering, Phillip Laplante introduces the profession of software engineering along with a practical approach to understanding, designing, and building sound software based on solid principles. Using a unique question-and-answer format, this book addresses the issues and misperceptions that engineers need to understand in order to successfully work with software engineers, develop specifications for quality software, and learn the basics of the most common programming languages, development approaches, and paradigms.
The most profound conflict in software engineering is not between positivist and interpretivist research approaches or Agile and Heavyweight software development methods, but between the Rational and Empirical Design Paradigms. The Rational and Empirical Paradigms are disparate constellations of beliefs about how software is and should be created. The Rational Paradigm remains dominant in software engineering research, standards and curricula despite being contradicted by decades of empirical research. The Rational Paradigm views analysis, design and programming as separate activities despite empirical research showing that they are simultaneous and inextricably interconnected. The Rational Paradigm views developers as executing plans despite empirical research showing that plans are a weak resource for informing situated action. The Rational Paradigm views success in terms of the Project Triangle (scope, time, cost and quality) despite empirical research showing that the Project Triangle omits critical dimensions of success. The Rational Paradigm assumes that analysts elicit requirements despite empirical research showing that analysts and stakeholders co-construct preferences. The Rational Paradigm views professionals as using software development methods despite empirical research showing that methods are rarely used, very rarely used as intended, and typically weak resources for informing situated action. This article therefore elucidates the Empirical Design Paradigm, an alternative view of software development more consistent with empirical evidence. Embracing the Empirical Paradigm is crucial for retaining scientific legitimacy, solving numerous practical problems and improving software engineering education.