This article discusses the evolving role of AI in software development, focusing on frameworks that embed engineering disciplines into AI coding assistants and the architectural considerations of local versus cloud-based AI models. It touches upon the importance of internal quality in AI-generated code and the financial implications of large-scale AI infrastructure investments.
Read the original on Martin Fowler's site.

Rahul Garg's framework, Lattice, aims to reduce friction in AI-assisted programming by embedding battle-tested engineering disciplines such as Clean Architecture, Domain-Driven Design (DDD), and design-first methodologies directly into AI coding assistants. This addresses common AI-assistant failure modes, such as silently making design decisions and forgetting constraints. Lattice introduces a tiered system of composable skills (atoms, molecules, refiners) and a living context layer (a .lattice/ folder) that accumulates project standards and review insights, enabling the system to adapt and apply project-specific rules over time.
Lattice Framework
Lattice uses composable skills (atoms, molecules, refiners) and a living context layer to operationalize engineering patterns in AI-assisted programming. It integrates concepts like Clean Architecture and DDD to improve code quality and align AI output with engineering standards.
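The article does not spell out Lattice's actual API, but the tiered idea can be sketched in miniature. In the hypothetical code below (all names are illustrative, not Lattice's real interfaces), atoms are single focused rules, molecules bundle atoms into a discipline, and refiners post-process the assembled guidance, standing in for project-specific rules accumulated in a .lattice/-style context layer:

```python
# Hypothetical sketch of tiered, composable "skills" (not Lattice's real API).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Atom:
    """A single, focused instruction injected into the assistant's context."""
    name: str
    instruction: str

@dataclass
class Molecule:
    """A named bundle of atoms representing a larger discipline (e.g. DDD)."""
    name: str
    atoms: List[Atom]

    def render(self) -> str:
        return "\n".join(a.instruction for a in self.atoms)

# A refiner rewrites the assembled guidance, e.g. applying overrides
# accumulated in a living context folder over the life of the project.
Refiner = Callable[[str], str]

def assemble(molecules: List[Molecule], refiners: List[Refiner]) -> str:
    """Compose molecules into one guidance block, then apply each refiner."""
    guidance = "\n\n".join(m.render() for m in molecules)
    for refine in refiners:
        guidance = refine(guidance)
    return guidance

ddd = Molecule("ddd", [
    Atom("ubiquitous-language", "Use domain terms from the glossary in all names."),
    Atom("aggregates", "Mutate state only through aggregate roots."),
])

def project_overrides(guidance: str) -> str:
    # Stand-in for a rule loaded from the project's context layer.
    return guidance + "\nProject rule: repositories live in the infrastructure layer."

print(assemble([ddd], [project_overrides]))
```

The point of the sketch is the layering: generic disciplines compose bottom-up, while the context layer gets the last word, which is how project-specific rules can override defaults over time.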
The article highlights a significant architectural debate: whether to rely on powerful cloud-based AI models or leverage "good enough" local models. While cloud models are currently more powerful, they incur substantial costs and require shipping sensitive data. Companies like Apple are investing less in cloud AI, potentially betting on a future where sophisticated local hardware and open-source models become dominant, echoing the shift from mainframes to personal computers. This strategy offers cost savings and enhanced data privacy.
Willem van den Ende's approach to a local agentic development setup demonstrates the viability of using open, local models. Key assumptions include that the quality of the agent's "harness" (coding agent, skills, extensions) can be as crucial as the model itself, and that investing in local setups provides a stable base where engineering effort compounds. This also brings the benefit of a Zero Trust Architecture by sandboxing powerful AI tools.
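One common flavor of such a harness has the agent talk to a locally hosted model over an OpenAI-compatible HTTP API, a pattern several local runtimes support. The sketch below only constructs the request, so it can run without a model installed; the endpoint, port, and model name are assumptions for illustration, not details of van den Ende's actual setup:

```python
# Minimal sketch of a local-agent harness addressing a locally hosted model.
# Endpoint and model name are illustrative assumptions; the key property is
# that the request targets localhost, so no code or data leaves the machine.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local runtime
MODEL = "qwen2.5-coder"  # placeholder for whichever open model is installed

def build_request(prompt: str, system: str) -> urllib.request.Request:
    """Assemble a chat-completion request bound for the local endpoint."""
    body = json.dumps({
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_request("Refactor this function.", "You are a careful coding agent.")
print(req.full_url)
```

Because the endpoint is local, sandboxing the whole harness (for instance, in a container with outbound network access disabled) costs little, which is what makes the Zero Trust posture practical.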
Kent Beck's "Genie Tarpit" metaphor, inspired by Fred Brooks' "The Mythical Man-Month," raises concerns about the internal quality of AI-generated code. AI tools often prioritize immediate functionality over long-term maintainability and evolvability. This creates a kind of plausible deniability: the AI can claim success because the code works now, even if it lacks the internal quality needed for future development. The core question is whether large language models (LLMs) will eventually overcome this quality deficit, making internal quality less critical, or whether well-organized, readable code will remain essential for both human and AI understanding, especially in complex systems.