This article posits that code should be considered a 'materialized view' of deeper architectural decisions and invariants, not the ultimate source of truth. It explores how AI accelerates this shift, emphasizing the importance of defining clear intent, constraints, and a 'decision graph' as the primary architectural assets. The core idea is to manage decisions and meaning explicitly, treating code as a computable projection, to improve system comprehension and adaptability.
Read original on Dev.to (#architecture)

The article challenges the traditional view of code as the 'source of truth' for a system. Instead, it proposes that code functions more like a materialized view or a cache, derived from more fundamental architectural elements: intent, invariants, and decisions. This perspective shifts the focus from writing code faster to articulating and managing these underlying principles with greater clarity.
Code is a Materialized View
In this paradigm, the true 'source of truth' for a system lies in its defined invariants, constraints, and the decision graph. Code is merely a projection of these elements, implying it can be rebuilt or regenerated if the foundational definitions are robust. This is analogous to how databases prioritize schema and change logs over a current data snapshot.
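The database analogy can be made concrete with a small sketch. Here the log of decisions is the source of truth, and the current configuration is a materialized view rebuilt by replaying it — discard the view and you lose nothing. All names (`Decision`, `materialize`, the example keys) are illustrative, not from the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    key: str        # what is being decided, e.g. "storage.engine"
    value: str      # the chosen option
    rationale: str  # why, kept alongside the choice

def materialize(log: list[Decision]) -> dict[str, str]:
    """Replay the decision log; later decisions supersede earlier ones."""
    view: dict[str, str] = {}
    for d in log:
        view[d.key] = d.value
    return view

log = [
    Decision("storage.engine", "postgres", "relational invariants required"),
    Decision("cache.policy", "write-through", "consistency over latency"),
    Decision("storage.engine", "postgres+citus", "sharding supersedes the earlier choice"),
]

# The view is disposable: it can always be regenerated from the log.
print(materialize(log))
```

In this framing, regenerating code from well-defined intent is the architectural equivalent of refreshing a materialized view from its change log.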
The advent of AI has accelerated this shift. While AI can write code quickly, it also exposes weaknesses in undefined intent or architectural ambiguities much faster. This moves the primary constraint from speed of writing code to clarity of thinking and the ability to formalize intent and invariants. AI doesn't break architecture; it reveals its weak spots more rapidly.
Instead of treating architectural decisions as mere documents (like ADRs), the article suggests elevating them to system primitives. This means decisions can be versioned, checked, and tied to metrics. Architecture then becomes a system for managing decisions under uncertainty, rather than just a static set of layers. This approach structures development around meaningful context, allowing AI to execute within human-defined boundaries.
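What "versioned, checked, and tied to metrics" might look like can be sketched as a decision primitive carrying an executable predicate over observed metrics, rather than prose in a static ADR. The names (`ArchDecision`, `ADR-007`, the metric keys) are hypothetical illustrations, not an API from the article.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ArchDecision:
    id: str
    version: int
    statement: str
    # Predicate over observed metrics: returns True while the decision holds.
    check: Callable[[dict[str, float]], bool]

latency_budget = ArchDecision(
    id="ADR-007",
    version=2,
    statement="p99 read latency stays under 50 ms",
    check=lambda m: m.get("p99_read_ms", float("inf")) < 50.0,
)

def audit(decisions: list[ArchDecision], metrics: dict[str, float]) -> list[str]:
    """Return the ids of decisions whose invariant no longer holds."""
    return [d.id for d in decisions if not d.check(metrics)]

# A metrics snapshot that violates the budget flags the decision for review.
print(audit([latency_budget], {"p99_read_ms": 72.0}))
```

Because the decision is a value in the system, it can be version-controlled, re-checked in CI, and used as a boundary that AI-generated code must satisfy.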
The concept of 'comprehension debt' highlights the challenge of retaining the understanding generated during development. When knowledge lives only in ephemeral conversations or unpersisted AI interactions, rather than being anchored to structured artifacts, meaning vanishes. In this new model, documentation, defined as the formulation of intent, invariants, and constraints, becomes the primary interface for controlling the system, rather than an afterthought. It's about building a system where navigating meaning is more efficient than recovering it.