Each backend is a simple DADL file - a small YAML file that declares the REST API of the service to ToolMesh, which then exposes those tools to Claude. Most of the publicly available DADLs (currently 20, with 1,833 tools in total) were drafted by an LLM in minutes and tuned from there. The registry is public.
Here is the HN API as DADL - the API behind this very page:
    tools:
      get_top_stories:
        method: GET
        path: /topstories.json
        access: read
        description: "Up to 500 top story IDs, ordered by HN ranking"
      get_item:
        method: GET
        path: /item/{id}.json
        access: read
        description: "Get story, comment, job, poll, or pollopt by ID"
        params:
          id: { type: integer, in: path, required: true }
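To make the mapping concrete, here is a minimal sketch (illustration only, not ToolMesh code) of how a DADL entry like get_item expands into a concrete request URL. The base URL is the HN API's public Firebase host; the dict mirrors the DADL above, and everything else is assumed:

```python
# Illustrative only: expand a DADL-style path template into a request URL.
# BASE_URL is the public Hacker News API host; TOOLS mirrors the DADL above.

BASE_URL = "https://hacker-news.firebaseio.com/v0"

TOOLS = {
    "get_top_stories": {"method": "GET", "path": "/topstories.json", "params": {}},
    "get_item": {
        "method": "GET",
        "path": "/item/{id}.json",
        "params": {"id": {"type": "integer", "in": "path", "required": True}},
    },
}

def build_url(tool: str, **args) -> str:
    """Check required path params, then fill the path template."""
    spec = TOOLS[tool]
    for name, p in spec["params"].items():
        if p.get("required") and name not in args:
            raise ValueError(f"missing required param: {name}")
    return BASE_URL + spec["path"].format(**args)

print(build_url("get_item", id=8863))
# -> https://hacker-news.firebaseio.com/v0/item/8863.json
```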
How can a single agent access so many backends without overflowing its context? Code Mode. Naively, every tool and schema goes into context - 50,000+ tokens before the agent does anything useful. ToolMesh compresses that to ~1,000 by giving the model a typed API surface and letting it request endpoint details only when it needs them. That is the difference between "doesn't scale" and "please add 10 more, it's fine!". ToolMesh can also connect to other MCP servers, making them Code Mode capable as well.

Security is built in: credentials never reach the model - they are injected at runtime. ToolMesh runs a fail-closed pipeline: auth -> authz -> credential injection -> exec -> output gate -> audit. CallerClass lets the same API carry a different policy per client type (local dev assistant vs hosted agent vs CI bot). Every call lands in a SQLite-queryable audit log - "what did the agent do on Tuesday?" becomes a SQL query, not a shrug.
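The compression claim above boils down to progressive disclosure: the model gets a one-line index of tools, and full schemas are handed out only on demand. A hypothetical sketch - the registry contents, sizes, and function names here are invented for illustration, not ToolMesh internals:

```python
import json

# Hypothetical full registry: every tool with its complete JSON schema.
# The count roughly matches the registry size quoted above.
FULL_REGISTRY = {
    f"tool_{i}": {
        "description": f"Does operation {i}",
        "input_schema": {"type": "object", "properties": {"arg": {"type": "string"}}},
        "output_schema": {"type": "object", "properties": {"result": {"type": "string"}}},
    }
    for i in range(1833)
}

def full_context() -> str:
    """Naive approach: dump every schema into the prompt up front."""
    return json.dumps(FULL_REGISTRY)

def compact_index() -> str:
    """Code Mode approach: names plus one-line descriptions only."""
    return "\n".join(f"{name}: {t['description']}" for name, t in FULL_REGISTRY.items())

def describe(tool: str) -> dict:
    """Full schema, fetched only when the model decides to call the tool."""
    return FULL_REGISTRY[tool]

# Schemas dominate the naive dump; the index is a small fraction of it.
print(len(compact_index()), "vs", len(full_context()))
```

The exact ratio depends on schema size, but the shape of the win is the same: context cost scales with the tools actually used, not with the tools available.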
ToolMesh is not magic. APIs with stateful flows or weird auth still need care, and an LLM with a great tool surface can still pick the wrong tool. You still need sane policy.
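On the audit point above: because the log is plain SQLite, "what did the agent do on Tuesday?" really is just SQL. A toy sketch - the table name and columns are invented for illustration and are not ToolMesh's actual schema:

```python
import sqlite3

# Invented schema for illustration; ToolMesh's real audit tables may differ.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE audit_log (
        ts TEXT,          -- ISO-8601 timestamp of the call
        caller TEXT,      -- CallerClass of the client
        tool TEXT,        -- tool invoked
        decision TEXT     -- allow / deny from the policy gate
    )
""")
db.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
    [
        ("2024-06-04T09:12:00", "local-dev", "get_item", "allow"),
        ("2024-06-04T09:13:40", "ci-bot", "get_top_stories", "allow"),
        ("2024-06-05T11:02:11", "hosted-agent", "get_item", "deny"),
    ],
)

# 2024-06-04 was a Tuesday: the question becomes a WHERE clause.
rows = db.execute(
    "SELECT caller, tool, decision FROM audit_log WHERE date(ts) = ?",
    ("2024-06-04",),
).fetchall()
print(rows)  # the two Tuesday calls
```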
Try before cloning: https://demo.toolmesh.io is a public instance with the HN API loaded (login dadl/toolmesh). Connect Claude Desktop, Claude Code, or ChatGPT in 30 seconds: https://toolmesh.io/demo
GitHub: https://github.com/DunkelCloud/ToolMesh

Docs: https://toolmesh.io

DADL Spec + Registry: https://dadl.ai
Apache 2.0, single Go binary or Docker, no SaaS dependency.
Thinking of your full ops stack: what DADLs would you want available to your LLM?