How AI Reshapes Human Judgement
Why algorithmic assistance shifts how and when we think
"We enjoy the comfort of opinion without the discomfort of thought." – John F. Kennedy
In the previous article, we examined why expertise reduces errors without eliminating blind spots. This post shifts the focus to what happens when those blind spots interact with AI assistance.
AI changes the order of thinking
When people worry about AI and decision-making, they often focus on whether its use is, in general, right or wrong. This framing misses a subtler and more important shift. The presence of AI does not primarily change the human ability to reason. It changes the order in which reasoning happens.
When algorithmic advice is available, human judgement reliably drifts toward the suggested answer, even when individuals are fully capable of reaching an independent conclusion. This effect appears across tasks and domains. The key mechanism is not loss of agency but sequencing. An answer is presented first, reasoning follows second, and correction becomes optional.
From a cognitive perspective, algorithmic suggestions function as anchors. An initial reference point is introduced early in the judgement process, and subsequent reasoning adjusts around it. Adjustment requires effort. As a result, people tend to correct insufficiently away from the suggested answer rather than evaluate the problem from scratch.
What makes this particularly important is that the underlying ability to reason remains intact. What declines is the likelihood that reasoning is deployed in a genuinely independent way. The mind treats the algorithmic output as a plausible starting point and reallocates effort accordingly.
Closely related to this is a second shift: error monitoring declines. Once a coherent answer is available, the motivation to actively verify or search for counterarguments decreases. This is not because people become incapable of detecting errors, but because verification feels less necessary. Monitoring is effortful, and effort is conserved when a solution already appears to be on the table.
In other words, AI does not replace human judgement. It quietly reorganises when judgement is exercised and how much scrutiny it receives.
Why pressure strengthens algorithmic influence
The influence of algorithmic advice is not constant. It becomes markedly stronger under time pressure, cognitive load or uncertainty. In such conditions, reliance on algorithmic suggestions increases even when the advice is objectively incorrect.
This pattern is best understood as a resource management problem rather than a failure of critical thinking. Under load, people default to heuristics that reduce effort while preserving acceptable performance. Algorithmic advice becomes an efficient proxy for judgement, especially when alternatives would require sustained analytic work.
From the perspective of dual-process models, control shifts away from deliberate correction and toward acceptance. System 1 processes dominate, and System 2 is engaged selectively or not at all. Importantly, this does not mean that people believe the algorithm is infallible. It means that questioning it becomes comparatively costly.
Uncertainty amplifies this effect. When internal confidence is low, external guidance becomes more attractive. Algorithmic outputs offer structure where the problem itself feels ambiguous. Under these conditions, accepting advice can feel like a rational trade-off rather than a cognitive shortcut.
The consequence is not uniformly worse decisions, but a systematic change in how errors arise. Under pressure, people are less likely to challenge algorithmic outputs, not because they cannot, but because the situational incentives discourage it.
Fluency, trust and the illusion of correctness
One of the most powerful yet underestimated drivers of trust in AI-assisted judgement is linguistic fluency. Outputs that are clear, coherent and internally consistent feel more reliable than fragmented or hesitant ones, even when objective accuracy is unchanged.
Fluency operates as a validity cue. Information that is easy to process feels more familiar, more confident and more credible. In everyday cognition, this heuristic is often useful. Problems arise when fluency is misinterpreted as correctness or expertise.
Highly fluent algorithmic responses create an illusion of correctness. Because the explanation is internally consistent, there is little subjective indication that further scrutiny is required. Error monitoring weakens not because people suspend judgement, but because nothing in the output triggers escalation into a stricter analytic mode.
This effect is not manipulation. It reflects a normal feature of human cognition. Coherence reduces perceived uncertainty. Reduced uncertainty reduces the felt need to check.
The result is a subtle shift in trust calibration. Confidence becomes decoupled from evidence quality and more closely tied to presentation quality. When this happens, errors are not obvious, and correction becomes unlikely unless external constraints force it.
Fluency therefore matters not only for communication but for judgement itself. It shapes when people decide to stop thinking.
Offloading, instability and long-term blind spots
Beyond immediate decisions, AI also affects judgement over time. Repeated cognitive offloading to digital systems preserves short-term efficiency, but it can weaken internal representations that support later evaluation and error detection.
Offloading reduces the need to construct and maintain internal models. When systems reliably provide answers, people invest less effort in encoding relationships, constraints and background structure. In the short term, this is efficient. In the longer term, it leaves thinner internal benchmarks for plausibility checking.
When internal representations are weaker, judgement becomes more dependent on external input. This has two consequences. First, detecting errors becomes harder because there is less internal structure against which outputs can be evaluated. Second, judgements become more unstable when advice varies.
When algorithmic outputs are inconsistent, people tend to adapt their preferences and evaluations to the currently presented answer rather than maintaining a stable internal reference frame. Instead of anchoring to a self-generated standard, they rationalise the output they see. Confidence fluctuates accordingly.
This does not imply loss of agency or control. It reflects context dependence. Judgement becomes more prompt-sensitive and less internally anchored. Over time, this instability can itself become a blind spot, especially when people are unaware that their evaluations are shifting.
Taken together, these effects show that AI does not merely influence individual decisions. It reshapes the cognitive material used to make future ones.
A quiet conclusion
AI systems do not fail at thinking. They change the environment in which thinking takes place.
They reorder cognition by placing answers before reasoning. They reduce monitoring by offering plausible solutions early. They amplify reliance under pressure. They engage the normal fluency cues that signal when thinking can stop. And over time, they alter the internal structures that support judgement and error detection.
None of this requires assuming irrationality, loss of agency or technological dominance. It follows directly from how human cognition manages effort, uncertainty and confidence.
Blind spots in human–AI interaction do not emerge because people stop thinking. They emerge because thinking is redirected, postponed or quietly stabilised around externally provided answers.
Understanding this shift is the first step toward using AI without losing sight of where judgement still matters most.
L.A.
References
Chen, Y., Kirshner, S. N., Ovchinnikov, A., Andiappan, M., & Jenkin, T. (2025). A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do? Manufacturing & Service Operations Management, 27(2), 354–368. https://doi.org/10.1287/msom.2023.0279
Cheung, V., Maier, M., & Lieder, F. (2025). Large language models show amplified cognitive biases in moral decision-making. Proceedings of the National Academy of Sciences, 122(25), e2412015122. https://doi.org/10.1073/pnas.2412015122
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089
Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13(1), 4569. https://doi.org/10.1038/s41598-023-31341-0
Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3706598.3713778
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation Bias: Decision Making and Performance in High-Tech Cockpits. The International Journal of Aviation Psychology, 8(1), 47–63. https://doi.org/10.1207/s15327108ijap0801_3
Prahl, A., & Van Swol, L. (2017). Understanding algorithm aversion: When is advice from automation discounted? Journal of Forecasting, 36(6), 691–702. https://doi.org/10.1002/for.2464
Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity (No. arXiv:2506.06941). arXiv. https://doi.org/10.48550/arXiv.2506.06941
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006. https://doi.org/10.1006/ijhc.1999.0252
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research, 2(2), 140–154. https://doi.org/10.1086/691462
Reflection starts with dialogue.
If you’d like to share a thought or question, you can write to me at contact@lucalbrecht.com
Thinking from Scratch
by Luc Albrecht
Exploring how we think, decide and create clarity