Martin Fowler·April 2, 2026

Managing Software Debt and AI in System Development

This article discusses various forms of 'debt' in software systems—technical, cognitive, and intent debt—and introduces a 'Tri-System theory of cognition' involving humans and AI. It highlights how AI's increasing role in coding shifts the focus from writing code to verification, emphasizing the need for robust testing and a re-organization around validation to ensure system correctness and quality.


Understanding System Health Through Debt Metaphors

The article introduces a framework for understanding system health by categorizing different types of "debt" that accumulate during software development. These debts hinder a system's evolution and a team's ability to reason about and change it effectively. Recognizing these distinct forms of debt is crucial for architects and engineering leaders to develop targeted strategies for mitigation and ensure long-term system maintainability and adaptability.

  • Technical debt: Resides in the code, stemming from implementation choices that compromise future changeability. This directly impacts the system's architectural flexibility.
  • Cognitive debt: Lives within the team, reflecting an erosion of shared understanding of the system. This can lead to knowledge silos and slower decision-making regarding architectural changes.
  • Intent debt: Resides in project artifacts, arising when the system's original goals and constraints are poorly captured or maintained. This limits the system's alignment with business objectives and makes it harder for both human and AI agents to evolve it as intended.

AI as System 3: Cognitive Surrender vs. Offloading

Extending Kahneman's "Thinking, Fast and Slow" model, which describes human System 1 (intuition) and System 2 (deliberation), a new paper introduces AI as "System 3." This concept distinguishes two modes of AI interaction: cognitive offloading (strategic delegation of deliberation to AI) and cognitive surrender (uncritical reliance on AI that bypasses human deliberation). System designers should facilitate cognitive offloading through clear API contracts, structured inputs, and understandable outputs, while guarding against cognitive surrender by integrating validation layers.
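The distinction between offloading and surrender can be made concrete in code. The sketch below is a minimal, hypothetical validation layer: the function names, JSON schema, and confidence threshold are illustrative assumptions, not from the article. The idea is that AI output is never accepted as-is; it either passes explicit checks or is escalated back to a human.

```python
import json

# Fields the validation layer requires in any AI ("System 3") response.
# This schema is an illustrative assumption.
REQUIRED_FIELDS = {"summary", "confidence", "sources"}

def offload_to_ai(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a JSON string."""
    return json.dumps({"summary": "...", "confidence": 0.72, "sources": []})

def validated_offload(prompt: str, min_confidence: float = 0.8):
    """Accept AI output only when it passes structural and quality checks.

    Returning None signals that a human (System 2) must deliberate --
    cognitive offloading with a guardrail, rather than cognitive surrender.
    """
    try:
        result = json.loads(offload_to_ai(prompt))
    except json.JSONDecodeError:
        return None  # malformed output: escalate to a human, never surrender
    if not REQUIRED_FIELDS <= result.keys():
        return None  # missing fields: output is not trustworthy as-is
    if result["confidence"] < min_confidence:
        return None  # low self-reported confidence: route to human review
    return result

print(validated_offload("Summarize the incident report"))  # → None (0.72 < 0.8)
```

The design choice here is that the default path is rejection: the AI's answer must earn acceptance through explicit criteria, which is the opposite of the surrender mode where absence of an obvious error is treated as correctness.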

💡 Designing for AI Integration

When integrating AI-generated components or code, design verification mechanisms to prevent cognitive surrender. This includes robust testing frameworks, clear monitoring dashboards, and human-in-the-loop review processes to ensure AI outputs meet quality and correctness criteria.
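One way to encode a human-in-the-loop review process is as an explicit merge gate that requires both automated checks and human sign-off. The sketch below is a minimal illustration; the `Change` and `ReviewGate` names are hypothetical, not from the article.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    """An AI-generated change awaiting verification."""
    diff: str
    tests_passed: bool
    approved_by: list = field(default_factory=list)

class ReviewGate:
    """A change merges only if automated checks AND human review both pass."""

    def __init__(self, required_approvals: int = 1):
        self.required_approvals = required_approvals

    def can_merge(self, change: Change) -> bool:
        # Neither signal alone is sufficient: passing tests without a
        # reviewer, or a reviewer without passing tests, blocks the merge.
        return (change.tests_passed
                and len(change.approved_by) >= self.required_approvals)

gate = ReviewGate(required_approvals=1)
change = Change(diff="+ new code", tests_passed=True)
print(gate.can_merge(change))           # False: no human has signed off yet
change.approved_by.append("reviewer")
print(gate.can_merge(change))           # True: checks and review both passed
```

Making the gate a first-class object keeps the verification policy inspectable and testable, rather than an informal convention that erodes as AI-generated volume grows.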

The Shift to Verification in AI-Driven Development

With LLMs increasingly generating code, the critical activity in software development shifts from writing code to verification. The challenge lies in defining "correctness," which is often context-dependent and multifaceted, especially in complex, distributed systems. This reorientation requires organizational change: moving from tracking code output to tracking the validation of that output.

  • Redefining roles: Engineering teams may need to reallocate resources, with fewer engineers focused on coding and more on defining acceptance criteria, designing test harnesses, and monitoring outcomes.
  • Architectural implications: This shift necessitates robust testing infrastructure, automated validation pipelines, and comprehensive observability tools that can efficiently verify AI-generated components and system behavior across a microservice architecture.
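Defining acceptance criteria as executable checks is one way to make this shift concrete: the implementation (possibly AI-generated) is verified against criteria the team owns, rather than inspected line by line. The sketch below is illustrative; `normalize_email` and the criteria table are assumed examples, not from the article.

```python
def normalize_email(address: str) -> str:
    """Candidate implementation -- could be human- or AI-written."""
    return address.strip().lower()

# Acceptance criteria as data: (name, input, expected output).
# The team maintains these; the implementation is interchangeable.
ACCEPTANCE_CRITERIA = [
    ("trims whitespace",  "  User@Example.com ", "user@example.com"),
    ("lowercases input",  "USER@EXAMPLE.COM",    "user@example.com"),
    ("idempotent",        "user@example.com",    "user@example.com"),
]

def run_acceptance(fn):
    """Return the names of failed criteria; an empty list means accepted."""
    return [name for name, given, expected in ACCEPTANCE_CRITERIA
            if fn(given) != expected]

failures = run_acceptance(normalize_email)
print("accepted" if not failures else f"rejected: {failures}")  # → accepted
```

In a larger system the same pattern scales up into the automated validation pipelines the article describes: the harness stays stable while generated implementations come and go, and "done" is defined by the criteria passing, not by code having been written.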
Tags: technical debt, cognitive debt, software architecture, LLMs, AI in engineering, verification, testing, system health
