We're putting a small, opinionated stack into the world and calling it Plumb.
It does four things. The centerpiece is an OpenAI-compatible gateway at /v1/chat/completions that signs every response with ed25519 and settles batches on Base; requests against the HubRegistry are keyed by (modelHash, inputHash) and answered with a signed callback.

Why bother? Existing AI APIs are black boxes. You send prompts, you get completions, and you trust the operator. If that operator silently swaps models, rate-limits you, or loses your data, you have no artifact to point at.
Plumb's design bet is that every completion should leave an artifact — a signed receipt with request/response hashes, model id, cost, and an ed25519 signature over a canonical payload. You can verify that receipt against a public key registry without trusting the operator, and you can find it on a public block explorer.
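To make the verification story concrete, here is a minimal sketch in TypeScript using Node's built-in ed25519 support. The receipt field names and the canonicalization (sorted keys, compact JSON) are assumptions for illustration; Plumb's actual schema and canonical form may differ.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical receipt shape; the real field names may differ.
interface Receipt {
  requestHash: string;  // sha256 of the request body
  responseHash: string; // sha256 of the response body
  modelId: string;
  costPLMB: string;
  signature: string;    // hex ed25519 signature over the canonical payload
}

// One plausible canonicalization: sorted keys, compact JSON.
// The exact canonical form the gateway signs is an assumption here.
function canonicalPayload(r: Omit<Receipt, "signature">): Buffer {
  const ordered: Record<string, string> = {};
  for (const k of Object.keys(r).sort()) ordered[k] = (r as Record<string, string>)[k];
  return Buffer.from(JSON.stringify(ordered));
}

// Check a receipt against a public key (PEM) fetched from the key registry.
function verifyReceipt(r: Receipt, publicKeyPem: string): boolean {
  const { signature, ...unsigned } = r;
  return verify(null, canonicalPayload(unsigned), publicKeyPem, Buffer.from(signature, "hex"));
}
```

The point is that verification needs nothing from the operator at runtime: the receipt plus a public key is enough, and tampering with any field (say, the model id) breaks the signature.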
Plumb runs natively on a VPS. No Kubernetes, no Docker unless you want it, no managed PaaS. Node 22, Postgres 16 + pgvector, Redis, Foundry for contracts, Next.js for the frontends, Caddy for TLS, systemd for orchestration. pnpm install && pnpm -r build && systemctl enable --now.
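For the systemd piece, a minimal unit sketch for one of the services. The unit name, install path, user, and entry point are all assumptions for illustration, not the repo's actual layout:

```ini
# /etc/systemd/system/plumb-gateway.service (illustrative)
[Unit]
Description=Plumb gateway
After=network-online.target postgresql.service redis.service
Wants=network-online.target

[Service]
User=plumb
WorkingDirectory=/srv/plumb
ExecStart=/usr/bin/node apps/gateway/dist/index.js
Environment=NODE_ENV=production
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then systemctl enable --now plumb-gateway brings it up and keeps it up across reboots, which is the whole orchestration story on a single VPS.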
If you want a hosted instance, plumbtech.xyz is the one we run. Credits are PLMB; top-ups go through Settlement.deposit() on Base Sepolia during preview.
The next phase is the public marketing and docs site you're reading now. After that, the console gets its full per-route build-out: API keys, receipts tab, hub upload wizard, memory inspector. The SDK grows hub and memory clients, and we ship an OpenAPI spec auto-generated from the gateway routes.
And longer-term: TEE-backed attestation quotes for the receipt pipeline; zkML proofs for small-enough models; streaming Pipe fulfillments for consumer contracts that want partial results.
Thanks for reading. The code is open, the receipts are signed, the explorer is public.
— Dustin · 2026-04-23