Decision Quality Is a System Property
Why even smart, experienced people make predictably poor decisions in organisations
“The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”
In last week’s article, I examined psychological safety as a key condition for learning and speaking up at work.
This article broadens the lens and looks at decision quality as a property of the system in which such conditions either enable or suppress good judgement.
Why experience misleads when feedback is weak
In organisations, poor decision quality is often treated as a competence problem.
When projects fail, someone must have weighed the options incorrectly. When risks are overlooked, someone did not think carefully enough. When teams get stuck, the explanation is often a lack of critical thinking.
This framing feels plausible because it places responsibility on individuals, but it explains reality poorly.
Across organisations, the same pattern appears again and again. Even highly intelligent, motivated and experienced people make decisions that appear surprisingly fragile in hindsight. Not occasionally, but systematically. This is not because they lack ability, but because the decision system surrounding them consistently enables certain forms of thinking while suppressing others.
The central thesis of this article is therefore straightforward:
Decision quality is not a stable individual skill. It is a property of the system in which decisions are made.
1.) Under uncertainty, the best feedback architecture matters more than the best person
Intuitive expertise develops only when two conditions are met: the environment is relatively stable and feedback is frequent, clear and valid.
Most organisational decisions do not meet these conditions. Strategic choices, hiring decisions, prioritisation, product roadmaps or reorganisations typically produce delayed and ambiguous feedback. Outcomes are noisy, causal links are difficult to trace and learning signals remain inconsistent. As a result, people are unable to reliably calibrate their judgement.
In such contexts, experience quickly becomes misleading. Confidence may increase, but accuracy does not necessarily follow. Successful decisions are often interpreted retrospectively as evidence of competence, even though they may just as plausibly be explained by favourable conditions, chance or external effects. This dynamic is often described as outcome bias.
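A toy calculation makes this concrete. Suppose, purely for illustration, that one decision process genuinely succeeds 60 percent of the time and another only 45 percent, and that each is judged on twenty binary outcomes. The sketch below, with all numbers invented, shows how little those outcomes reveal:

```python
# Toy simulation with invented probabilities: a genuinely better decision
# process (60% hit rate) versus a weaker one (45%), each observed through
# only twenty binary outcomes, roughly a year of real decisions.
import random

random.seed(1)

def observed_rate(p_success: float, n_decisions: int = 20) -> float:
    """The success rate a decision maker actually sees after n noisy outcomes."""
    return sum(random.random() < p_success for _ in range(n_decisions)) / n_decisions

better = [observed_rate(0.60) for _ in range(10_000)]
weaker = [observed_rate(0.45) for _ in range(10_000)]

# How often does the weaker process look at least as good as the better one?
share = sum(w >= b for w, b in zip(weaker, better)) / 10_000
print(f"weaker process looks at least as good in {share:.0%} of comparisons")
# Prints roughly 20%: far too noisy a signal to calibrate anyone's judgement.
```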
When learning conditions are weak, decision quality does not improve through “better people”.
It improves through better feedback architectures: explicit hypotheses, clearly defined success criteria, short review cycles and, where possible, a clean separation between cause and effect.
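What such an architecture can look like in practice is easiest to show with a sketch. The structure below is illustrative rather than prescriptive, and the field names and example entry are placeholders of my own; the point is simply that the hypothesis and success criteria are recorded before the outcome is known, so the review compares prediction against result rather than narrative against memory.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One decision-journal entry, written before the outcome is known."""
    decision: str                # what was decided
    hypothesis: str              # the causal claim: "we expect X because Y"
    success_criteria: list[str]  # observable signals that would confirm it
    review_date: date            # a short, fixed review cycle
    outcome_notes: str = ""      # filled in at the review, not before

journal = [
    DecisionRecord(
        decision="Prioritise feature A over feature B this quarter",
        hypothesis="A reduces onboarding drop-off, because most churn happens in week one",
        success_criteria=[
            "week-one churn falls by the agreed margin",
            "onboarding-related support tickets decrease",
        ],
        review_date=date(2025, 3, 31),
    )
]
```

The format matters far less than the discipline: a fixed review date is what turns a one-off choice into a feedback loop.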
2.) Heuristics and biases are rarely character flaws but artefacts of the environment
Organisations often talk about biases as if they were individual weaknesses.
In many cases, the opposite is true. Biases are a predictable consequence of poorly designed information environments inside organisations.
People rely on heuristics, simple decision rules, because they are efficient in many everyday situations. Problems arise when environments systematically provide misleading, distorted or overly complex signals. Under such conditions, heuristics become insufficient and systematic misjudgements increase.
This requires a shift in perspective. Changing individuals without changing environments usually fails.
Heuristics are not the enemy. They are a fundamental feature of human cognition. The real problem lies in decision environments that reliably miscalibrate judgement and provide little incentive for reflection.
3.) Why intelligence does not protect against bias and hierarchy further blocks self-correction
A common assumption is that smarter people also reason more rationally.
Research paints a more sober picture. Cognitive ability and rational judgement are related but distinct constructs. High intelligence does not reliably reduce susceptibility to bias. In some cases, it even increases the likelihood of error by enabling more convincing rationalisations of flawed judgements.
This dynamic is especially relevant in leadership contexts. Experience, status and narratives converge at higher hierarchical levels. Decisions are not only made but actively defended, often because they are tied to personal identity and the image of infallible leadership. The further up the hierarchy, the stronger the bias blind spot tends to be: the tendency to see one’s own judgement as less biased than that of others.
The implication is uncomfortable but important.
Self-correction is not a reliable default mechanism. Precisely where it would matter most, it is least likely to occur.
For this reason, structural correction mechanisms are not a form of paternalism. They represent a necessary upgrade of the system itself.
4.) Psychological safety determines whether existing knowledge enters decisions at all
The availability of knowledge is not the same as its use. The fact that people know something does not mean that this knowledge is accessed at the moment a decision is made.
This is where the concept of psychological safety becomes relevant. It describes the shared belief within a team that it is safe to take interpersonal risks: asking questions, expressing doubts, admitting mistakes or voicing dissent.
When this safety is absent, familiar patterns emerge. Meetings without disagreement, risks addressed too late, errors concealed and apparent consensus replacing genuine clarification.
This is more than a matter of wellbeing or corporate health initiatives. It is decision infrastructure.
Teams with higher psychological safety show more robust learning behaviour and are more likely to contribute diverse perspectives and concerns. Under conditions of uncertainty, this is particularly important. A greater diversity of perspectives covers more potential failure points and allows correction to occur early, before consequences become costly.
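A back-of-the-envelope calculation, using an assumed probability, shows why coverage grows with independent voices and why silence is so expensive:

```python
# Back-of-the-envelope arithmetic with an assumed number: if each perspective
# independently catches a given failure mode with probability p, the flaw
# slips past all n reviewers with probability (1 - p) ** n. The gain only
# materialises if people actually voice what they notice.
p = 0.4  # assumed chance that a single perspective spots a specific flaw
for n in (1, 3, 5):
    print(f"{n} independent perspective(s): flaw missed {(1 - p) ** n:.1%} of the time")
# 1: 60.0%, 3: 21.6%, 5: 7.8%
```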
It is also important to draw a clear distinction. Psychological safety is not the same as just being nice.
Sustained performance usually emerges where high standards and safety coexist. High standards without safety create fear. Safety without standards creates unfocused comfort.
5.) Decision architecture: framing, information distribution and AI as amplifiers
Decisions are rarely made in neutral frames.
Language, presentation and defaults systematically influence preferences, even when the underlying problem remains objectively identical. What is made salient appears more important. What is omitted from tables, dashboards or memos loses weight.
Group dynamics further intensify this effect. In teams, shared information tends to dominate discussion, while unshared knowledge is overlooked. This produces a form of pseudo-consensus based on incomplete assessments.
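The mechanism requires no bad intent. A simplified sampling model, with assumed numbers rather than the design of the original hidden-profile studies, is enough to produce the tilt:

```python
# Simplified sampling model with assumed numbers: each member independently
# mentions any given item they hold with probability q. Items everyone holds
# get many chances to surface; items only one person holds get exactly one.
q = 0.3          # assumed chance a member brings up a given item they hold
team_size = 5

p_shared = 1 - (1 - q) ** team_size  # item known to all five members
p_unshared = q                       # item known to only one member
print(f"shared item surfaces in discussion:   {p_shared:.0%}")   # about 83%
print(f"unshared item surfaces in discussion: {p_unshared:.0%}") # 30%
```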
Technological tools add another layer of complexity.
AI and automation are not neutral add-ons. They already shape decision environments in noticeable ways. They can foster overreliance or trigger rejection after visible errors, a phenomenon known as algorithm aversion. They also tend to shift responsibility. When something goes wrong, it often becomes unclear who actually exercised judgement.
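One practical countermeasure is to make judgement traceable: record what the system recommended, what was actually decided, by whom, and why any divergence occurred. A minimal sketch, with placeholder names of my own rather than any established standard, could look like this:

```python
# Illustrative sketch only; the field names are placeholders, not a standard.
from dataclasses import dataclass

@dataclass
class AssistedDecision:
    model_recommendation: str  # what the system suggested
    human_decision: str        # what was actually decided
    decided_by: str            # the accountable person or role
    override_rationale: str    # required whenever the two diverge

record = AssistedDecision(
    model_recommendation="approve the credit application",
    human_decision="decline",
    decided_by="risk officer on duty",
    override_rationale="income documents inconsistent with stated employment",
)
```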
This leads back to the central thesis.
AI does not automatically improve decisions. It improves decisions only when governance, feedback, roles and accountability are clearly designed. Otherwise, it amplifies the existing architecture, including its weaknesses.
Conclusion
When decisions fail, intelligence is rarely the missing ingredient.
Most organisations already possess the expertise they need. What they lack is an environment in which this expertise is allowed to surface, collide and correct itself before consequences become costly.
Decision quality improves when systems are deliberately designed to support good thinking under pressure through clear feedback loops, low social costs for dissent, disciplined information structures and conscious decision design.
This is the shift in perspective.
Organisations do not need smarter people.
They need systems that stop neutralising the intelligence they already have.
L. A.
-
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated Cognition and the Culture of Learning. Educational Researcher, 18(1), 32–42. https://doi.org/10.3102/0013189X018001032
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Edmondson, A. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Edmondson, A. C., & Lei, Z. (2014). Psychological Safety: The History, Renaissance, and Future of an Interpersonal Construct. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 23–43. https://doi.org/10.1146/annurev-orgpsych-031413-091305
Frazier, M. L., Fainshmidt, S., Klinger, R. L., Pezeshkan, A., & Vracheva, V. (2017). Psychological Safety: A Meta‐Analytic Review and Extension. Personnel Psychology, 70(1), 113–165. https://doi.org/10.1111/peps.12183
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic Decision Making. Annual Review of Psychology, 62(1), 451–482. https://doi.org/10.1146/annurev-psych-120709-145346
Halpern, D. F. (1998). Teaching critical thinking for transfer across domains: Disposition, skills, structure training, and metacognitive monitoring. American Psychologist, 53(4), 449–455. https://doi.org/10.1037/0003-066X.53.4.449
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526. https://doi.org/10.1037/a0016755
Larrick, R. P., & Feiler, D. C. (2015). Expertise in Decision Making. In G. Keren & G. Wu (Eds.), The Wiley Blackwell Handbook of Judgment and Decision Making (1st ed., pp. 696–721). Wiley. https://doi.org/10.1002/9781118468333.ch24
Milkman, K. L., Chugh, D., & Bazerman, M. H. (2009). How Can Decision Making Be Improved? Perspectives on Psychological Science, 4(4), 379–383. https://doi.org/10.1111/j.1745-6924.2009.01142.x
Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C. W., Korris, J. H., & Kassam, K. S. (2015). Debiasing Decisions: Improved Decision Making With a Single Training Intervention. Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140. https://doi.org/10.1177/2372732215600886
Parasuraman, R., & Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
Pronin, E., Lin, D. Y., & Ross, L. (2002). The Bias Blind Spot: Perceptions of Bias in Self Versus Others. Personality and Social Psychology Bulletin, 28(3), 369–381.
Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99. https://doi.org/10.2307/1884852
Stanovich, K. E. (2008). Individual Differences in Reasoning and the Algorithmic/Intentional Level Distinction in Cognitive Science. In J. E. Adler & L. J. Rips (Eds.), Reasoning: Studies of Human Inference and its Foundations (pp. 414–436). Cambridge University Press.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48(6), 1467–1478. https://doi.org/10.1037/0022-3514.48.6.1467
Tversky, A., & Kahneman, D. (1981). The Framing of Decisions and the Psychology of Choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Reflection starts with dialogue.
If you’d like to share a thought or question, you can write to me at contact@lucalbrecht.com