Wilson Lin on FastRender: a browser built by thousands of parallel agents

Simon Willison
2026.01.25
· Web · by web-ghost
#Agent #LLM #AI #Browser #ParallelComputing

Key Points

  1. FastRender is an experimental web browser built by thousands of parallel AI agents (GPT-5.1/5.2, Claude Opus 4.5), primarily as a research objective to study large-scale multi-agent coordination.
  2. The project coordinated up to 2,000 agents concurrently using a hierarchical structure, dividing tasks to minimize merge conflicts and demonstrating autonomous operation for weeks.
  3. Agents utilized specifications, visual feedback, and independently selected and modified dependencies, embracing a strategy of tolerating temporary errors to maintain high development throughput.

FastRender is a research project developed by Cursor, initiated as a personal endeavor by Wilson Lin, to explore the capabilities of large-scale, autonomous coding agents, specifically leveraging frontier models like Claude Opus 4.5, GPT-5.1, and GPT-5.2. The project's primary objective was not to build a production-ready browser, but rather to serve as a complex, well-specified, and visually verifiable task for observing and iterating on the behavior of multi-agent coordination harnesses.

The core methodology revolves around deploying a massive swarm of autonomous agents in parallel to build a web browser from scratch. At its peak, the FastRender system operated with approximately 2,000 agents concurrently, generating thousands of commits per hour, accumulating nearly 30,000 commits over a few weeks. The infrastructure scaled by running multiple multi-agent harnesses on "large machines," with each machine supporting around 300 concurrent agents. This high concurrency was feasible because agents spend a significant portion of their operational time in "thinking" states rather than computationally intensive execution.
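Because each agent spends most of its wall-clock time waiting on model inference rather than on local compute, packing ~300 agents onto one machine is essentially an I/O-concurrency problem. A minimal Python sketch of that pattern (the function names and timings are illustrative, not Cursor's actual harness):

```python
import asyncio
import random

async def run_agent(agent_id: int) -> str:
    """One agent's work cycle: most wall-clock time is spent
    waiting on model inference ("thinking"), not local compute."""
    await asyncio.sleep(random.uniform(0.01, 0.05))  # stand-in for a model API call
    return f"agent-{agent_id}: commit ready"

async def run_harness(n_agents: int) -> list[str]:
    # Hundreds of agents interleave cheaply on one machine because
    # each task is I/O-bound while the model "thinks".
    return await asyncio.gather(*(run_agent(i) for i in range(n_agents)))

results = asyncio.run(run_harness(300))
print(len(results))  # prints 300: all agents completed concurrently
```

The same structure scales to hundreds of agents per process precisely because each coroutine is suspended, not burning CPU, while awaiting its (simulated) API call.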

The agents are organized in a hierarchical, tree-like structure: "planning agents" are responsible for breaking down high-level objectives into granular tasks, which are then distributed to "worker agents" for execution. To maximize throughput, the entire browser project was segmented into multiple independent "work streams" or "instructions," with each stream managed by its own multi-agent harness running on a dedicated machine. A critical aspect of this parallelization strategy is the harness's ability to intelligently partition and scope tasks to minimize overlapping work, thereby significantly reducing merge conflicts despite thousands of parallel contributions to a single codebase.
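The partitioning idea can be sketched as a two-stage pipeline: a planner groups granular tasks into disjoint work streams, and each worker is scoped to exactly one stream, so workers in different streams never touch the same files. This is a hypothetical illustration of the scoping strategy, not harness code:

```python
from collections import defaultdict

# Hypothetical granular tasks, each tagged with the subtree it belongs to.
TASKS = [
    ("css", "implement flexbox gap"),
    ("css", "parse media queries"),
    ("dom", "event bubbling"),
    ("js", "Promise.allSettled"),
]

def plan(tasks):
    """Planning agent: group tasks into work streams keyed by codebase area."""
    streams = defaultdict(list)
    for area, desc in tasks:
        streams[area].append(desc)
    return streams

def dispatch(streams):
    # Worker agents in one stream are scoped to one subtree, which is
    # what keeps merge conflicts rare even at high parallelism.
    return {area: [f"worker[{area}]: {t}" for t in todo]
            for area, todo in streams.items()}

assignments = dispatch(plan(TASKS))
print(sorted(assignments))  # prints ['css', 'dom', 'js']
```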

The choice of general-purpose models like GPT-5.1 and GPT-5.2 over specialized coding models (e.g., GPT-5.1-Codex) was deliberate, as the agents required broad operational understanding beyond mere code generation, including interaction within the harness environment and autonomous decision-making. The system operates entirely autonomously once an instruction is given, with no human intervention or steering during execution, demonstrating runs lasting up to a week.


Robust feedback loops are integral to long-term autonomous operation. Agents are given extensive context through git submodules containing official web specifications (e.g., csswg-drafts, tc39-ecma262, whatwg-dom, whatwg-html), which they explicitly reference in their generated code. Visual feedback comes from GPT-5.2's vision capabilities: screenshots of rendering results are fed back to the models, letting them compare output against "golden samples" and track progress. Finally, Rust, the project's implementation language, contributes a compile-time feedback loop through the strictness of its compiler.
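The screenshot-versus-golden-sample loop can be approximated with a plain pixel diff; in FastRender the comparison reportedly goes through GPT-5.2's vision input, but a numeric diff conveys the shape of the feedback signal. The function names and tolerance threshold here are assumptions for illustration:

```python
def pixel_diff_ratio(rendered, golden):
    """Fraction of pixels that differ between a rendered screenshot
    and its golden sample (both as flat lists of RGB tuples)."""
    assert len(rendered) == len(golden)
    mismatched = sum(1 for a, b in zip(rendered, golden) if a != b)
    return mismatched / len(golden)

def feedback(rendered, golden, tolerance=0.01):
    # The harness would feed a signal like this (plus the screenshot
    # itself) back to the agent to decide whether to keep iterating.
    ratio = pixel_diff_ratio(rendered, golden)
    return "pass" if ratio <= tolerance else f"retry (diff={ratio:.2%})"

golden = [(255, 255, 255)] * 100
print(feedback(list(golden), golden))       # prints "pass"
print(feedback([(0, 0, 0)] * 100, golden))  # prints "retry (diff=100.00%)"
```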

Agent autonomy extends to dependency management; agents independently selected and integrated third-party libraries (e.g., Skia for 2D graphics, HarfBuzz for text shaping). In some cases, like Taffy (for CSS Flexbox/Grid), agents "vendored" the library and then applied their own modifications to the vendored copy. A notable instance of autonomous decision-making involved an agent pulling in QuickJS to unblock progress on the JavaScript engine, with an explicit intention to replace it with a homegrown solution (ecma-rs) once the latter was mature.
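In Cargo terms, vendoring a crate like Taffy and modifying it locally typically looks like the following hypothetical `Cargo.toml` excerpt; the version number and `vendor/` path are placeholders, not taken from the FastRender repository:

```toml
[dependencies]
# Placeholder version; in practice this would match the vendored fork.
taffy = "0.5"

# Redirect every reference to the crate to the in-tree copy, which the
# agents can then modify directly (e.g., to adjust Flexbox/Grid behavior).
[patch.crates-io]
taffy = { path = "vendor/taffy" }
```

The `[patch]` mechanism keeps the crates.io name in dependency declarations while resolving builds against the locally modified source.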

A surprising strategic choice was to tolerate intermittent, small errors (e.g., API breakages, syntax errors) in the codebase. This "slack" in the system optimizes for overall throughput by avoiding the synchronization bottlenecks that enforcing perfect correctness at every commit would create. Minor errors are quickly rectified in subsequent commits, so the count of open errors stays roughly stable rather than accumulating, a trade-off deemed worthwhile for achieving high-velocity progress with a large parallel agent swarm.
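The stable-error-rate claim follows from a simple balance argument: if commits introduce errors at a roughly constant rate and a fixed fraction of open errors is fixed each interval, the open-error count converges to an equilibrium instead of growing without bound. A toy simulation, with all rates invented for illustration:

```python
import random

random.seed(42)

def simulate(steps=1000, agents=2000, p_break=0.002, fix_rate=0.3):
    """Toy model of the 'slack' trade-off: each step a few commits
    introduce small errors, while a fraction of open errors gets fixed.
    Open errors reach a steady state instead of accumulating."""
    open_errors = 0
    history = []
    for _ in range(steps):
        open_errors += sum(random.random() < p_break for _ in range(agents))
        open_errors -= round(open_errors * fix_rate)  # quick follow-up fixes
        history.append(open_errors)
    return history

h = simulate()
# Equilibrium E satisfies E = (1 - fix_rate) * (E + agents * p_break),
# i.e. roughly 9-10 open errors here: bounded, not accumulating.
print(max(h[500:]) < 50)  # prints True
```

Raising `fix_rate` or lowering `p_break` shifts the equilibrium down but does not change the qualitative result: the trade-off buys throughput at the cost of a bounded, not growing, pool of open errors.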