TDD with AI: Using Tests to Guide Intelligent Coding Agents
Overview
Software development is shifting toward agentic coding, but relying solely on AI to design your architecture often leads to a "good enough" trap. When you let an agent write both the implementation and the tests, you lose control over the developer experience and API design. Writing tests first, the classic test-driven development (TDD) practice, restores that control: the test pins down the API you want before the agent writes a single line of implementation.
Prerequisites
To follow this workflow, you should be comfortable with basic PHP, the Laravel framework, and writing tests with Pest.
Key Libraries & Tools
- Laravel: The primary PHP framework used for the application.
- Pest PHP: A testing framework focused on simplicity and readability.
- OpenCode: An AI-powered development platform that uses models like Codex to generate code.
- Laravel Boost: A package providing specific coding guidelines and skills to the AI agent.
Code Walkthrough: Defining the API
Instead of asking the AI to "create an exporter," we write a failing test that specifies the exact API we want the exporter to expose:
it('exports data', function () {
    // Arrange
    User::factory()->create(['name' => 'Christoph']);

    // Act
    $csv = Exporter::export(User::class)
        ->columns([
            'name' => 'Name',
            'email' => 'Email',
        ])
        ->toCsv();

    // Assert
    expect($csv)->toContain('Name', 'Christoph');
});
In this snippet, we define a fluent interface. We decide that export() should accept a class name and columns() should take an associative array for renaming headers. By handing this test to the AI, we force it to follow our specific API design.
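The test dictates the shape of the implementation without prescribing its internals. As a rough illustration, here is a minimal, framework-free sketch of an Exporter that would satisfy that shape. It is hypothetical: rows are passed in as plain arrays rather than queried from an Eloquent model, so the fluent surface matches the test while the data source is simplified.

```php
<?php

// Hypothetical sketch of the fluent Exporter the test implies.
// Framework-free: rows are passed in directly instead of being
// resolved from a model class, to keep the example self-contained.
final class Exporter
{
    /**
     * @param array<int, array<string, mixed>> $rows
     * @param array<string, string> $columns attribute => header label
     */
    private function __construct(
        private array $rows,
        private array $columns = [],
    ) {}

    public static function export(array $rows): self
    {
        return new self($rows);
    }

    public function columns(array $columns): self
    {
        $this->columns = $columns;

        return $this; // returning $this enables chaining
    }

    public function toCsv(): string
    {
        // Header row comes from the column labels.
        $lines = [implode(',', array_values($this->columns))];

        // One data row per input record, keyed by the column attributes.
        foreach ($this->rows as $row) {
            $cells = [];
            foreach (array_keys($this->columns) as $attribute) {
                $cells[] = (string) ($row[$attribute] ?? '');
            }
            $lines[] = implode(',', $cells);
        }

        return implode("\n", $lines);
    }
}
```

In the real Laravel version, export() would accept a model class and query it internally; the point of the sketch is only that the test, not the agent, has already fixed the method names and call order.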
Syntax Notes
Notice the use of fluent methods. Each method in the Exporter class should return $this to allow chaining. The test also utilizes the Arrange-Act-Assert (AAA) pattern. This structure helps the AI understand the sequence of operations and what the final output should look like.
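The fluent pattern itself is worth spelling out, since it is what the test silently demands. A minimal sketch, using a hypothetical Query class unrelated to the exporter: every intermediate method returns $this, so calls can be strung together and a terminal method produces the final value.

```php
<?php

// Minimal fluent-chaining sketch (hypothetical Query class).
// Each builder method returns $this so calls can be chained.
final class Query
{
    /** @var array<int, string> */
    private array $parts = [];

    public function where(string $clause): self
    {
        $this->parts[] = "WHERE $clause";

        return $this; // keep the chain alive
    }

    public function limit(int $n): self
    {
        $this->parts[] = "LIMIT $n";

        return $this;
    }

    // Terminal method: ends the chain and returns a plain value.
    public function toSql(): string
    {
        return implode(' ', $this->parts);
    }
}
```

A chain like (new Query())->where('id = 1')->limit(5)->toSql() works only because where() and limit() return $this; forgetting that return is the most common reason an AI-generated fluent class fails the handed-over test.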
Tips & Gotchas
Generic prompts tend to produce code you are only about 80% happy with. You might settle for naming conventions like for() when you actually prefer from(). To avoid this, never start from a blank prompt: always provide the test file first. This prevents "code drift," where your application slowly fills with AI-generated patterns that don't match your style.
