Sunday 8 February 2026 · Afternoon Edition

ZOTPAPER

News without the noise


Tech

The Dark Factory Era: Companies Now Building Software Without Human Code Review

StrongDM reveals AI agents writing and shipping code with zero human oversight, spending $1,000+ per engineer daily on tokens

Nonepaper Staff · 3 min read · 3 sources
A growing movement in software development is pushing the boundaries of AI automation, with some companies now running what they call "Software Factories" where AI agents write, test, and ship code without any human ever reviewing it.

StrongDM has publicly revealed its approach to what industry observers are calling "Dark Factory" development—a paradigm where specifications and test scenarios drive AI agents that write code, run harnesses, and converge on solutions without human intervention.

The company's rules are stark: "Code must not be written by humans. Code must not be reviewed by humans." Their metric for success? If engineers aren't spending at least $1,000 on AI tokens daily, there's room for improvement.
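StrongDM has not published its internal tooling, but the spec-and-test driven loop it describes can be sketched in a few dozen lines: an agent proposes code, an automated harness runs the tests, and failures are fed back until the suite passes. Everything below is illustrative rather than the company's actual pipeline; in particular, call_model is a hypothetical placeholder for whatever model backend a team plugs in.

```python
import subprocess
import tempfile
from pathlib import Path

# Illustrative sketch of a spec-and-test driven "dark factory" loop.
# call_model is a hypothetical stand-in, not any specific vendor API.

MAX_ITERATIONS = 10


def call_model(spec: str, previous_failures: str) -> str:
    """Hypothetical model call: return candidate source code for the spec.

    A real pipeline would send the spec plus the latest test output to an
    LLM; here it is left as a placeholder to be wired up by the reader.
    """
    raise NotImplementedError("plug in a model backend here")


def run_harness(code: str, test_file: Path) -> subprocess.CompletedProcess:
    """Write the candidate next to the tests and run pytest on the folder."""
    workdir = Path(tempfile.mkdtemp())
    (workdir / "candidate.py").write_text(code)
    (workdir / test_file.name).write_text(test_file.read_text())
    return subprocess.run(
        ["pytest", "-q", str(workdir)], capture_output=True, text=True
    )


def converge(spec: str, test_file: Path) -> str:
    """Iterate until the generated code passes the test suite or we give up."""
    failures = ""
    for _ in range(MAX_ITERATIONS):
        code = call_model(spec, failures)
        result = run_harness(code, test_file)
        if result.returncode == 0:
            return code  # tests are green: in this model, the code ships
        failures = result.stdout + result.stderr  # feed failures back in
    raise RuntimeError("agent failed to converge on a passing solution")
```

The notable design choice is that the test suite, not a human reviewer, is the gate: the loop only ever exits successfully when the harness reports a clean run.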

This approach is gaining traction across the industry. Developer Alain Di Chiappari reports that coding agents have "replaced every framework I used," fundamentally changing how software gets built.

Meanwhile, AI researcher Vishal Sikka offers a counterpoint, warning that LLMs should never run alone. His solution: companion bots that verify the primary agent's work, creating a system of AI checks and balances.
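Sikka has not published a reference design, but the checks-and-balances idea can be sketched as a primary agent whose output only ships if an independent companion agent signs off. The function names and the Verdict structure below are hypothetical placeholders for whatever agents a team actually runs.

```python
# Illustrative sketch of the "companion bot" pattern: a second agent reviews
# the primary agent's output and can veto it. Both agent functions are
# hypothetical placeholders, not any specific product or API.

from dataclasses import dataclass


@dataclass
class Verdict:
    approved: bool
    reasons: list[str]


def primary_agent(task: str) -> str:
    """Hypothetical: the code-writing agent."""
    raise NotImplementedError


def companion_agent(task: str, candidate: str) -> Verdict:
    """Hypothetical: an independent agent that checks the candidate against
    the task, e.g. for missed edge cases or unsafe calls."""
    raise NotImplementedError


def checked_run(task: str, max_rounds: int = 3) -> str:
    """Only accept output the companion agent approves; otherwise retry
    with the reviewer's objections appended to the task."""
    for _ in range(max_rounds):
        candidate = primary_agent(task)
        verdict = companion_agent(task, candidate)
        if verdict.approved:
            return candidate
        task = task + "\n\nReviewer objections:\n" + "\n".join(verdict.reasons)
    raise RuntimeError("no candidate passed the companion review")
```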

Analysis

Why This Matters

This represents a fundamental shift in how software gets built. If successful at scale, it could dramatically reduce development costs while raising questions about code quality, security, and the role of human developers.

Background

The "Dark Factory" concept draws from manufacturing, where lights-out factories operate with minimal human presence. Applied to software, it suggests a future where human engineers become architects of AI systems rather than writers of code.

Key Perspectives

Proponents argue AI-written code, combined with comprehensive test suites, can be more reliable than human code. Critics worry about accountability, security vulnerabilities, and the risk of AI systems making "inhuman mistakes" that humans would catch.

What to Watch

Whether this approach spreads beyond early adopters, and how traditional software companies respond to the competitive pressure of dramatically lower development costs.
