Advanced · 15 min read · AI Development · Chris Tansey

The Synthetic Developer: Working with AI Coding Agents

Learn how to work with coding agents as synthetic developers—fast, tireless, and dangerous. Master spec-driven development and the new workflow loop.

Published December 1, 2024

synthetic developer, coding agents, GitHub Copilot, spec-driven development, AI coding

It's 11 PM. Your coffee went cold an hour ago. You're reviewing a pull request from your coding agent.

The code compiles. The tests pass. The function does exactly what it was told to do. But something's off—it's solving a problem you didn't ask it to solve. The authentication flow works perfectly, but it implements OAuth when you needed SAML.

The agent isn't broken. Your spec was.

This is the fundamental reality of working with synthetic developers. They're fast. They're tireless. They're shockingly capable. And they will build exactly what you ask for—which is dangerous when what you ask for isn't what you need.

Treat a coding agent like a zealous apprentice. It is fast, cheap, and dangerous. It will build the wall in record time—but it might build it two inches to the left. Your job is not to hold the hammer. Your job is to check the plumb line.

The 55% Reality

The productivity gains are no longer theoretical. GitHub ran a controlled experiment with 95 professional developers building an HTTP server in JavaScript:

| Metric | With Copilot | Without Copilot |
| --- | --- | --- |
| Average time | 1h 11min | 2h 41min |
| Completion rate | 78% | 70% |
| Speed improvement | 55% faster | — |

But the speed isn't the full story. What developers reported:

  • 73% said Copilot helped them stay in flow
  • 87% said it preserved mental effort on repetitive tasks

"I have to think less," one developer reported, "and when I have to think it's the fun stuff."

The market has voted: GitHub Copilot alone has crossed $300 million ARR with 51% enterprise adoption.

The Spec Bottleneck

Here's what the productivity stats don't tell you: faster code generation makes spec quality more important, not less.

If your spec is wrong and your developer writes code slowly, you have time to catch the mistake. If your spec is wrong and your synthetic developer builds in minutes what used to take days, you've multiplied the wrong output.

Bad spec + fast AI = wrong code faster.

Your Role Changed

You're not writing code anymore—or at least, not primarily. You're defining what code should do.

For decades, the core skill of software development was translating requirements into working code. Developers were valued for their ability to take a vague idea and turn it into something that compiled and ran.

That model is obsolete. Your synthetic developer can translate requirements into code. What it cannot do is:

  • Decide what the requirements should be
  • Resolve ambiguity
  • Ask clarifying questions

It will build whatever you specify, whether that specification makes sense or not.

The spec IS the work. The code is execution.

The Anatomy of a Good Spec

A spec that works with synthetic developers has five components. Miss any one, and you're gambling on interpretation.

1. Intent

What outcome are we trying to achieve?

Not what to build—why to build it. The agent needs to understand the purpose, not just the task.

| Bad | Good |
| --- | --- |
| "Build a password reset function" | "Enable users to recover account access when they forget their password, reducing support tickets and improving security" |

Intent lets the agent make reasonable choices when the spec is silent. Should the reset link expire? Intent suggests yes—security matters.

2. Constraints

What boundaries must be respected?

Every spec has explicit requirements and implicit assumptions. Make them explicit:

  • "Must complete in under 3 seconds"
  • "No SMS—email only"
  • "Must work with existing authentication service"
  • "Cannot store plaintext passwords under any circumstances"

Include non-goals: "This is not a full account management system—only password reset."

3. Tests

How do we know it works?

Acceptance criteria that can be verified:

  • Reset link expires after 24 hours
  • Works on mobile browsers
  • Fails gracefully if email service is unavailable
  • Generates audit log entry for every reset attempt

The tests aren't an afterthought—they're part of the spec. They define what success looks like in verifiable terms.
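
To make that concrete, here is a minimal sketch (in TypeScript) of turning one criterion, the 24-hour expiry, into a check you can actually run. The `createResetToken` and `isTokenValid` functions are hypothetical stand-ins for whatever the agent builds, not part of any real library.

```typescript
// Sketch: one acceptance criterion ("reset link expires after 24 hours")
// expressed as a verifiable check. createResetToken and isTokenValid are
// hypothetical stand-ins for the real implementation the agent would produce.
import assert from "node:assert";
import { randomBytes } from "node:crypto";

const TOKEN_TTL_MS = 24 * 60 * 60 * 1000; // from the spec: expires after 24 hours

interface ResetToken {
  value: string;
  issuedAt: number; // epoch milliseconds
}

function createResetToken(now: number = Date.now()): ResetToken {
  return { value: randomBytes(32).toString("hex"), issuedAt: now };
}

function isTokenValid(token: ResetToken, now: number = Date.now()): boolean {
  return now - token.issuedAt < TOKEN_TTL_MS;
}

// A freshly issued link works...
const token = createResetToken();
assert.ok(isTokenValid(token));

// ...and the same link is rejected one minute past the 24-hour window.
assert.ok(!isTokenValid(token, token.issuedAt + TOKEN_TTL_MS + 60_000));

console.log("Expiry acceptance criterion holds");
```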

4. Done

What does complete look like?

Not just "code written"—deployment state and approval requirements:

  • Deployed to staging environment
  • QA approved
  • Security review passed
  • Documentation updated
  • Monitoring alerts configured

5. Context

What does the agent need to know?

Existing systems, established patterns, technology constraints:

  • "Uses the existing AuthService module"
  • "React frontend with Tailwind CSS"
  • "PostgreSQL database with existing users table"
  • "Follows company error-handling conventions documented in /docs/standards"

The Spec Template

| Component | Example |
| --- | --- |
| Intent | User can reset password via email |
| Constraints | Must complete in under 3 seconds; email only; no new dependencies |
| Tests | Link expires after 24h; works on mobile; audit log generated |
| Done | Deployed to staging, QA approved, docs updated |
| Context | Uses existing AuthService; React frontend; PostgreSQL |
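
If it helps to keep the spec versioned next to the code it governs, here is a minimal sketch of the five components as a typed artifact. The `AgentSpec` shape and field names are illustrative assumptions, not a standard format.

```typescript
// Sketch: the five-component spec as a typed, versionable artifact.
// The AgentSpec shape is an illustrative assumption, not a standard.
interface AgentSpec {
  intent: string;        // the why, not just the what
  constraints: string[]; // boundaries, including non-goals
  tests: string[];       // verifiable acceptance criteria
  done: string[];        // deployment state and approvals
  context: string[];     // existing systems, patterns, tech constraints
}

const passwordResetSpec: AgentSpec = {
  intent:
    "Enable users to recover account access when they forget their password, " +
    "reducing support tickets and improving security.",
  constraints: [
    "Must complete in under 3 seconds",
    "Email only, no SMS",
    "No new dependencies",
    "Non-goal: full account management. Password reset only.",
  ],
  tests: [
    "Reset link expires after 24 hours",
    "Works on mobile browsers",
    "Audit log entry generated for every reset attempt",
  ],
  done: ["Deployed to staging", "QA approved", "Docs updated"],
  context: ["Uses existing AuthService", "React frontend", "PostgreSQL"],
};

console.log(JSON.stringify(passwordResetSpec, null, 2));
```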

When You Must Still Code

The zealous apprentice is capable, but not infinitely capable. There are situations where synthetic developers should not be trusted:

Novel Architecture

AI agents learn from patterns. When you're pioneering new architecture, there are no patterns to follow. First-of-its-kind systems need humans at the keyboard.

Security-Critical Paths

Authentication. Authorization. Financial transactions. The stakes are too high for probabilistic output. Security requires code that always works.

Integration Debugging

The agent sees one system. Humans see the seams. Cross-system debugging requires cross-system knowledge.

Ambiguous Requirements

When the spec is unclear, AI will guess. Humans should ask.

The master craftsman still cuts the crown molding. Some work requires judgment, not just speed.

The New Development Loop

1. Specify (Human Work)

Write the spec. Intent, constraints, tests, done, context. Every component. This is the artifact that matters.

2. Generate (Agent Work)

Let the agent build. Don't hover. Don't intervene at every line. Trust the speed.

3. Review (Quality Assurance)

Check the plumb line, not every nail. Does this code match the spec? Are the constraints respected? Do the tests pass?

4. Iterate (Refinement)

The spec evolves based on what you learned. Refine the spec. The agent re-executes.
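
Here is a rough sketch of how the four steps fit together as a loop. The `generateWithAgent` and `reviewAgainstSpec` functions are hypothetical placeholders for your agent integration and review process, not a real API.

```typescript
// Sketch: the Specify -> Generate -> Review -> Iterate loop.
// generateWithAgent and reviewAgainstSpec are hypothetical placeholders.
interface Spec {
  intent: string;
  constraints: string[];
  tests: string[];
}

interface ReviewResult {
  approved: boolean;
  feedback: string[]; // what the review taught you; feeds the next spec revision
}

async function generateWithAgent(spec: Spec): Promise<string> {
  // Placeholder: hand the spec to the agent, get back a diff or PR.
  return `// generated for: ${spec.intent}`;
}

async function reviewAgainstSpec(spec: Spec, code: string): Promise<ReviewResult> {
  // Placeholder: check the plumb line, not every nail.
  const hasTests = spec.tests.length > 0;
  return {
    approved: hasTests && code.length > 0,
    feedback: hasTests ? [] : ["Add acceptance tests"],
  };
}

async function developmentLoop(initial: Spec, maxRounds = 3): Promise<boolean> {
  let spec = initial;                                    // 1. Specify (human work)
  for (let round = 1; round <= maxRounds; round++) {
    const code = await generateWithAgent(spec);          // 2. Generate (agent work)
    const review = await reviewAgainstSpec(spec, code);  // 3. Review (quality assurance)
    if (review.approved) return true;
    spec = {                                             // 4. Iterate: refine the spec
      ...spec,
      constraints: [...spec.constraints, ...review.feedback],
    };
  }
  return false;
}

developmentLoop({
  intent: "Users can reset their password via email",
  constraints: ["Email only, no SMS"],
  tests: ["Reset link expires after 24 hours"],
}).then((ok) => console.log(ok ? "Spec satisfied" : "Needs another round"));
```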

Three Rules for Synthetic Developers

1. Blame the drawing, not the hammer. If the output is wrong, the spec was wrong. The agent built what you asked for.

2. The spec IS the work. Your human effort goes into specification, not implementation.

3. Trust the speed; verify the spec. Let the agent build fast. Focus your attention on whether the spec was right.


Key Takeaways

  • Coding agents are fast, cheap, and dangerous—they'll build exactly what you ask for
  • 55% productivity gains are real, but speed amplifies the consequences of spec quality, good or bad
  • A good spec has 5 components: Intent, Constraints, Tests, Done, Context
  • Know when to code manually: Novel architecture, security-critical paths, integration debugging
  • The new loop: Specify → Generate → Review → Iterate

This framework is from Chapter 3 of Scaling Digital Capital: The Architect's Blueprint by Chris Tansey. Get the full blueprint for building AI-augmented organizations.