Native async/await across TypeScript, Python, and Rust. Self-host one binary. Pay for the box, not the workflow.
Apache 2.0 · Server, SDKs, and tools
Live demo
Claude Haiku 4.5 plays black, one durable step per move.
Native async/await. Write normal Python, TypeScript, or Rust. No DSLs, no workflow definitions, no new mental models.
Formally modeled. Deterministically tested.
One binary. SQLite locally, Postgres in production. Free and open source, Apache 2.0 — server, SDKs, and tools. No per-execution pricing. No surprise bills.
$ brew install resonatehq/tap/resonate && resonate dev
Serverless-native. Minimal state transitions. No per-execution pricing. At the extreme end, a workload that costs ~$80K/year on hosted alternatives costs under $100/year here.
Agent-native
Distributed systems are hard for humans and harder for agents who have to hold every moving part in a single context window. Resonate collapses the surface area — one programming model, one protocol, one binary — so an agent can design, write, test, and operate a distributed system end to end.
An agent's context is finite. Resonate asks for none of it — distributed async/await is already in the pre-training. No DSL, no workflow grammar, no bespoke language to page in before the real work starts.
The SDK is formally modeled and deterministically tested. An agent can prove the distributed system it just wrote is correct without spinning up real infrastructure. The result reproduces across runs.
AGENT.md in every repo. First-class Skill files. Reference examples organized by pattern, not by tutorial order. The ground truth is written for agents first, humans second.
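The "deterministically tested" claim above can be illustrated with a toy version of deterministic simulation testing. This is not Resonate's actual test harness, just a sketch of the technique: concurrent tasks are modeled as generators, and a seeded scheduler decides which one runs next, so the same seed always reproduces the same interleaving.

```python
import random

def worker(name, steps):
    # A "task" is a generator; each yield is one observable step.
    for i in range(steps):
        yield f"{name}:{i}"

def simulate(seed):
    # A seeded scheduler picks the next task to run. Same seed,
    # same sequence of choices, same trace -- every time.
    rng = random.Random(seed)
    tasks = [worker("a", 3), worker("b", 3)]
    trace = []
    while tasks:
        task = rng.choice(tasks)
        try:
            trace.append(next(task))
        except StopIteration:
            tasks.remove(task)
    return trace

# Any interleaving bug found under seed 42 reproduces exactly on re-run.
assert simulate(42) == simulate(42)
```

Because the scheduler is the only source of nondeterminism and it is seeded, a failing run is a repeatable artifact rather than a flake.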
Built-in
Ordinary async/await on the outside. Every distributed-systems primitive an agent would otherwise stand up from scratch, on the inside.
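The core idea can be sketched in a few lines of plain Python. This is a toy illustration, not the Resonate SDK API: each step's result is journaled before the function moves on, so a crashed-and-restarted run replays completed steps from the journal instead of re-executing their side effects.

```python
import sqlite3

class Journal:
    # Toy durable-step journal (NOT the Resonate SDK): persists each
    # step's result so a restarted run can resume where it left off.
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS steps (id TEXT PRIMARY KEY, result TEXT)"
        )

    def step(self, step_id, fn):
        row = self.db.execute(
            "SELECT result FROM steps WHERE id = ?", (step_id,)
        ).fetchone()
        if row:
            return row[0]          # already ran: replay the stored result
        result = fn()              # first run: execute, then persist
        self.db.execute("INSERT INTO steps VALUES (?, ?)", (step_id, result))
        self.db.commit()
        return result

calls = []

def charge():
    calls.append("charge")
    return "charged"

journal = Journal()
journal.step("order-1/charge", charge)  # executes the step
journal.step("order-1/charge", charge)  # a retry after a crash: replayed
assert calls == ["charge"]              # the side effect ran exactly once
```

The SDKs hide this machinery behind ordinary async/await; the sketch only shows why a retried workflow does not double-execute its steps.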
Self-hosted by default
Managed durable execution platforms make you provision a tenant, wire up auth, learn a console, and wait on a support queue when something breaks. Resonate is a single binary. A coding agent ships it to any cloud unattended. A human does the same in a few clicks.
Getting started
Local · SQLite
$ brew install resonatehq/tap/resonate
$ resonate dev
# server on :8001 — SDK connects to localhost
Zero config. Zero account. resonate dev runs the server with embedded SQLite, so you iterate entirely on your laptop.
Production
Any cloud · Postgres
$ export DATABASE_URL=postgres://...
$ resonate serve
# same binary, durable Postgres backend
The binary you ran locally is the binary that ships. Set DATABASE_URL, run resonate serve, and Resonate is in production.
Open source · Deploy anywhere
The server, the SDKs, and the tools are Apache 2.0. If a platform can run a statically-linked binary or a container, it can run Resonate. Pick your favorite and point it at a Postgres URL.
Fly.io · fly launch
Railway · railway up
Render · render deploy
Fargate / ECS · docker run
Cloud Run · gcloud run deploy
Bare metal · systemd unit
Same binary. Same command. Different box.
The same statically-linked binary runs both the dev loop and production. Dev mode embeds SQLite; serve mode points at Postgres. Nothing else changes.
Nothing to stand up before Resonate can run. No tenant to provision, no auth broker to configure, no managed queue to attach. A single process is the whole system.
No API keys, no quotas, no support ticket when something breaks. You own the binary, the data, and the uptime. Nothing calls home.
The cost story · Greenfield
The expensive part of durable execution is not CPU — it is the state transitions and the pricing model built around them. We changed both.
This is the math for greenfield projects — projects that get to choose their durable-execution stack from scratch. Why pay for a managed license when the open-source binary runs on the cloud you already have?
Other durable execution platforms bill per state transition — every step, signal, timer, and retry ticks the meter. Resonate is designed so the meter rarely moves. A workflow that logs hundreds of events elsewhere logs a handful here.
The server is a single statically-linked binary with Postgres. No sidecars, no task queues to provision, no control plane to keep warm. One process on one machine runs production workloads for most teams.
You pay for the box Resonate runs on. That is the entire invoice. Scale up and the marginal cost is CPU, not per-workflow fees.
At the extreme end, a workload that costs ~$80K/year on hosted alternatives costs under $100/year on Resonate. Orders of magnitude lower cost.
Based on internal benchmarks of Resonate running on Railway ($7/mo) against per-transition pricing on managed orchestration platforms for high-throughput workloads.
See pricing
Custom components · Brownfield
Durable execution is a protocol, not a product. If Kafka, Postgres, gRPC, NATS, or something in-house is already in production, that is the substrate.
For brownfield projects — the systems already in production that can't be rewritten — Resonate crafts custom components for your stack.
The protocol, the SDK, and the programming model stay the same. Your existing infrastructure becomes the durable execution engine.
Tell us your stack →
Your existing stack
Custom-built server, same protocol, same SDK. Licensed per-server. You own the source.
Community
Discord threads, journal posts, the repos. All public.
“I have my agent waiting for all sorts of tool calls, and this is really interesting — it is indeed a natural use case for Resonate.”
thebookworm2851
Building with agents
“Fully serverless deployment for a countdown and a deep research agent. Cloud Run hosting the server, function definitions that run on demand and scale infinitely.”
tomasperez
Resonate on GCP Cloud Run
Notes from the engineering team.
Apache 2.0. Star the ones you'd build with.