How This Project Was Built

Adversarial AI collaboration — zero human code

~24K lines of C. 960+ tests. Zero human code. Three AI agents—Challenger, Writer, Reviewer—collaborated in adversarial rounds until every test passed under AddressSanitizer. The result beats PostgreSQL on 16 of 19 batch benchmarks.

[Diagram] The Challenger produces adversarial test cases, the Writer writes and fixes code, and the Reviewer leaves actionable comments; the cycle iterates until stable and all 960+ tests pass. ~24,000 lines of C · zero human code.

The three agents worked in iterative rounds. The Challenger would study the current codebase and produce .sql test files exercising corner cases—empty tables, NULL handling, multi-column ordering, stale index entries after deletes, and more. The Reviewer would read the source and annotate it with actionable comments flagging code-quality issues, missing edge-case handling, and architectural improvements. The Writer would then run the new tests, address the comments, diagnose failures, and ship fixes. This continued until the full suite passed cleanly.

The feedback loop

Write code, run 960+ tests, fix failures, repeat. The adversarial model drove correctness the same way rigorous code review does on a human team—except all three sides were machines.

The result: a recursive-descent parser, block-oriented vectorized executor, arena-based memory management, and PostgreSQL-compatible wire protocol—all without a single line of human-written C.

Explore further

Architecture  ·  Benchmarks vs PostgreSQL  ·  Testing methodology  ·  Try it in the browser