AI & Agentic Coding

Apex, finally fast enough for AI agents.

AI coding agents work in write-test-fix loops. That loop requires fast feedback. Deploying to a Salesforce org takes 5–10 minutes per iteration - longer than most agents are willing to wait. Nimbus runs tests in under a second.

Why Apex fell behind

Every modern language has a sub-second test loop. Python, TypeScript, Go, Rust - an AI agent can generate code, run tests, read the failure, fix the code, and rerun in under 30 seconds per iteration. That tight loop is what makes agentic coding work.

Apex didn't have that. Running tests meant authenticating to a Salesforce org, deploying code, waiting for the platform to execute, and downloading results - 5 to 10 minutes per iteration, if nothing goes wrong. An agent writing Apex either skipped testing entirely or burned through tool calls waiting for the org to respond.

Nimbus closes that gap. Tests run locally, in-process, against an embedded database. The same iteration that takes 10 minutes with an org takes under a second with Nimbus. Apex is now a first-class language for agentic workflows.

Without Nimbus

| Step | Time |
| --- | --- |
| Agent writes Apex | ~5s |
| Authenticate to org | ~10s |
| Deploy code | ~3 min |
| Run tests on platform | ~2 min |
| Download results | ~5s |
| Agent reads result, fixes | ~5s |
| Per iteration | ~5–10 min |

With Nimbus

| Step | Time |
| --- | --- |
| Agent writes Apex | ~5s |
| `nimbus test` | ~200ms |
| Agent reads result, fixes | ~5s |
| Per iteration | ~10s |

Works with any AI coding tool

Nimbus is a CLI. If your AI tool can run a shell command, it can run Nimbus. No API key, no plugin, no configuration beyond pointing at your project.

Claude Code

Runs in your terminal, reads your codebase, writes and runs code autonomously. With Nimbus, it tests every change without leaving your machine or waiting for a deploy.

Cursor

AI-native editor with inline code generation. Nimbus watch mode closes the loop - Cursor writes, the file saves, tests run, Cursor sees the result.

GitHub Copilot

Generates Apex inline in VS Code. The Nimbus VS Code extension runs tests immediately after Copilot completes a suggestion - no manual trigger required.

Any agent with shell access

If it can run a terminal command, it can run nimbus test. No credentials, no browser, no org. Just a binary and a project directory.
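That contract can be wrapped in a few lines. This is a sketch of a generic agent hook, not an official integration; it assumes nimbus prints failure details to stdout/stderr and exits nonzero when any test fails:

```shell
#!/usr/bin/env sh
# Generic hook an agent framework could call after each edit.
# Assumption: nonzero exit status means at least one test failed.
output=$(nimbus test "*" 2>&1)
status=$?
if [ "$status" -eq 0 ]; then
  echo "TESTS PASSED"
else
  # Relay the failure text back to the model as its next prompt context.
  echo "TESTS FAILED (exit $status)"
  echo "$output"
fi
```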

Claude Code example

Claude Code operates as an autonomous agent in your terminal. Give it a task - "add a before-insert trigger that enforces uniqueness on Account.Name" - and it writes the trigger, writes the test, runs Nimbus, reads the failure, fixes the code, and reruns until tests pass.

Without Nimbus, that loop requires Claude Code to either skip testing or deploy to a Salesforce org - which requires credentials, authentication, and 5+ minutes of platform time per iteration. Most agentic Apex workflows just skipped the test step entirely.

With Nimbus in the project, Claude Code discovers it via the CLAUDE.md file and uses it automatically - the same way it uses Jest for JavaScript or pytest for Python.

````markdown
# CLAUDE.md

## Running tests

Use Nimbus to run Apex tests locally - no org required.

```bash
# Run all tests
nimbus test "*"

# Run a specific class
nimbus test "MyTriggerTest.*"

# Run and watch for changes
nimbus test:watch
```

Tests run against an embedded local database.
Each test is isolated in its own transaction.
Results appear immediately - no deployment needed.
````
Add a CLAUDE.md to your project root and Claude Code will use Nimbus automatically for all Apex test runs.

Watch mode for human + agent pairing

When you're working alongside an AI agent - reviewing its output, guiding its direction - watch mode keeps the feedback loop running without either of you having to trigger it manually.

The agent writes code, saves the file, and the test result appears immediately - in the terminal, in VS Code, or in the Dev UI in your browser. You see what passed, what failed, and what the agent should fix next, without switching context.

```bash
# Start watch mode before handing off to the agent
nimbus test:watch

# Agent writes AccountTrigger.cls, saves it.
# [14:23:01] AccountTrigger.cls changed
# Running AccountTriggerTest... ✗ 1/3 (44ms)
#   testUniqueNameEnforcement: Expected exception, got none

# Agent reads the failure, fixes the trigger, saves again.
# [14:23:09] AccountTrigger.cls changed
# Running AccountTriggerTest... ✓ 3/3 (41ms)

# No commands needed between iterations.
# The loop runs itself.
```

Agents in CI

AI-generated code still needs a quality gate before it merges. Nimbus in CI gives you that gate without a connected org - the same test run the agent used locally, now enforced on every PR.

JUnit XML output integrates with GitHub Actions PR annotations. Cobertura coverage integrates with Codecov. An agent that writes undertested code will fail the coverage gate, same as a human would.

```yaml
# .github/workflows/apex.yml
- name: Test agent-generated Apex
  run: nimbus test "*" --results-xml results.xml

- name: Upload test results
  uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: results.xml

# PR annotations show exactly which tests failed
# and which lines the agent forgot to cover.
```
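Because the gate is just a CLI call, an agent can run the identical check locally before opening the PR. A sketch reusing only the flag shown in the workflow, with an assumed nonzero-exit-on-failure convention:

```shell
#!/usr/bin/env sh
# Pre-push sketch: same command CI runs, so failures surface before the PR.
# Assumption: nimbus exits nonzero when the gate fails.
if nimbus test "*" --results-xml results.xml; then
  echo "gate passed - safe to push"
else
  echo "gate failed - fix before pushing" >&2
fi
```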

Apex at the speed of AI.

One command. No org. No credentials. Your AI agent can test Apex the same way it tests every other language.