This article explores the evolving role of software engineers in an AI-driven world, introducing concepts like "supervisory engineering" and "the middle loop" where engineers direct and evaluate AI-generated code. It emphasizes architectural principles, particularly the design of systems with easily replaceable components, to facilitate the integration and continuous regeneration of AI-generated code. The discussion highlights a shift from code creation to verification and the need for new frameworks to manage AI in engineering workflows.
Read the original on Martin Fowler's site.

As AI tools become more prevalent in software development, the role of engineers is shifting. Instead of purely creation-oriented tasks, engineers are increasingly engaged in what's termed supervisory engineering work. This involves directing AI, evaluating its output for correctness, and making necessary corrections. This change introduces a "middle loop" in the development process, situated between the traditional inner loop (coding, testing, debugging) and the outer loop (CI/CD, deployment, observation). The middle loop is where human engineers supervise AI performing tasks they once did manually.
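The middle loop can be pictured as a direct-evaluate-correct cycle. The sketch below is a minimal illustration of that idea, not an implementation from the article; the names (`middle_loop`, `Verdict`, the `generate`/`evaluate`/`revise` callables) are hypothetical stand-ins for an AI coding agent and the supervisory checks around it.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical types; the names are illustrative, not from the article.
@dataclass
class Verdict:
    accepted: bool
    feedback: str = ""

def middle_loop(task: str,
                generate: Callable[[str], str],
                evaluate: Callable[[str], Verdict],
                revise: Callable[[str, str], str],
                max_rounds: int = 3) -> str:
    """Supervise an AI worker: direct it, evaluate the output, correct or escalate."""
    patch = generate(task)                       # AI does the former inner-loop work
    for _ in range(max_rounds):
        verdict = evaluate(patch)                # supervisory check of correctness
        if verdict.accepted:
            return patch                         # hand off to the outer loop (CI/CD)
        patch = revise(patch, verdict.feedback)  # direct a correction
    raise RuntimeError("escalate to a human engineer")
```

The engineer's judgment lives in `evaluate` and in the decision to escalate; the creation work itself has moved inside `generate` and `revise`.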
A crucial implication of AI-generated code is the need for regenerative software. This paradigm shifts the focus from producing code to safely replacing it. For this to be effective, systems must be architected as networks of easily replaceable components, rather than monolithic applications. This echoes long-standing goals in software architecture but becomes even more critical with the rapid iteration capabilities of AI.
Key Architectural Constraints for Replaceable Components
To enable regenerative software and maximize the benefits of AI-generated components, consider these architectural characteristics:

* Limited Communication Patterns: Standardized, and few, interaction mechanisms between components.
* Clear Data Ownership: Each dataset has a single component with exclusive mutation authority.
* Defined Evaluation Surfaces: Components must have clear, independently verifiable behaviors.
* Natural Component Granularity: Components should be sized by data ownership and evaluation boundaries, not arbitrary lines of code.
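These constraints can be made concrete with a small sketch. The component and check below are hypothetical examples of my own, assuming a single message-passing entry point (the limited communication pattern), private owned data (clear data ownership), and a behavioral contract that any regenerated replacement must pass (the defined evaluation surface).

```python
from typing import Any

class InventoryComponent:
    """Illustrative component: sole mutation authority over its dataset,
    reachable only through one standardized message interface."""

    def __init__(self) -> None:
        self._stock: dict[str, int] = {}  # owned data: no other component mutates it

    def handle(self, message: dict[str, Any]) -> dict[str, Any]:
        # Limited communication pattern: one request/response entry point.
        if message["op"] == "add":
            sku, qty = message["sku"], message["qty"]
            self._stock[sku] = self._stock.get(sku, 0) + qty
            return {"ok": True}
        if message["op"] == "count":
            return {"ok": True, "qty": self._stock.get(message["sku"], 0)}
        return {"ok": False, "error": "unknown op"}

def evaluation_surface(component: InventoryComponent) -> bool:
    """Independently verifiable behavior: adding stock must round-trip through
    a count. A regenerated replacement is acceptable iff it passes this check."""
    component.handle({"op": "add", "sku": "A1", "qty": 3})
    return component.handle({"op": "count", "sku": "A1"})["qty"] == 3
```

Because the contract is expressed against the message interface rather than the implementation, the component body can be regenerated freely as long as `evaluation_surface` still passes.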
The article references several maturity models (Bassim Eledath's 8 levels of Agentic Engineering and Steve Yegge's 8 levels in 'Welcome to Gas Town') that describe the progression of AI integration in engineering workflows. These models illustrate a spectrum from basic AI-assisted tools like tab completion to highly autonomous multi-agent systems. Understanding these stages can help organizations plan their adoption strategy and identify the skills needed for each level.
The discussion also touches on the shift in code review, suggesting a move from human-centric reviews to layered evaluation filters: AI can compare options, enforce guardrails, and assist in adversarial verification, while humans primarily define the acceptance criteria.