OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context.
This is not a swiss army knife. It’s built to match my exact needs. Fork it and make it yours.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
I don't mind it if I have good reason to believe the author actually read the docs, but that's hard to know from someone I don't know on the internet. So I actually really appreciate that you are editing the docs to make them sound more human-written.
As I said in my comment, no shade for writing the code with Claude. I do it too, every day.
I wasn’t “irked” by the readme, and I did read it. But it didn’t give me a sense that you had put in “time and effort” because it felt deeply LLM-authored, and my comment was trying to explore that and how it made me feel. I had little meaningful data on whether you put in that effort because the readme - the only thing I could really judge the project by - sounded vibe coded too. And if I can’t tell whether care has been put into something like the readme, how can I tell if there’s been care put into any part of the project? If there has, and if that matters - say, I put care into this and that’s why I’m doing a Show HN about it - then it should be evident and not hidden behind a wall of LLM-speak! Or at least, that’s what I think. As I said in a sibling comment, maybe I’m already a dinosaur and this entire topic won’t matter in a few years anyway.
I get using AI - I use it all day every day, it feels like - but this comes off as not having respect for others' time.
Just something that screams "I don't care about my product/readme page, so why should you?"
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
Before, the proof of work of code in a repo was by default a signal of a lot of thought going into something. Now this flood of code in vibe-coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes.
Not one line of code I wrote 20 years ago has the same economic value as East German currency.
All code is social ephemera. Ethno objects. It lacks the intrinsic value of something like indoor plumbing.
It's electrical state in a machine. Our only real goal was to convince people the symbols on the screen were coupled to some real-world value while it is 100% decoupled from whatever real physical quantity we are tracking.
We've all been Frank from Always Sunny: we make money, line go up. We don't define truth. The churn of physics does that.
So long as this is commonplace, I'd be extremely sceptical of anything with LLM-style readmes and docs.
The caveats to this are that LLMs can be trained to fool people with human-sounding, imperfectly written readmes, and that although humans can quickly verify that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes.
Track record is going to see its importance redoubled.
It isn’t “have it your way”; he graciously made the code available - use it or leave it.
Don't worry, bro. If enough people are like you, there will be a fully automatic workflow to add typos to AI writing.
Assuming the written/generated text is well written/generated, of course.
(I'm a human btw)
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?
The point is to recognise that certain patterns have a cost in the form of risk, and that cost can be massively outsized relative to the benefits.
Just like the risk of giving a poorly vetted employee unfettered access to the company vault.
In the case of employees, businesses invest a tremendous amount of money in mitigating the insider risks. Nobody is saying you should take no risks with AI, but that you should be aware of how serious the risks are, and how to mitigate them or manage them in other ways.
Exactly as we do with employees.
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendors recommendations?
Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.
What if a thermonuclear war breaks out? What's your backup plan for this scenario?
I genuinely can't tell which is more likely to happen in the next decade. If I have to guess I'll say war.
- Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay-as-you-go SIM in an old Android handset connected via ADB for SMS reading, and again proceeded to get itself banned by hammering the crap out of the docs API
- Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.
Unless you can budget $1k a week, this thing is next to useless. Once these free offers on models end, a lot of people will stop using it; it's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.
Wouldn't a crypto wallet with a small amount deposited be smarter?
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
they paid $0, it's all VC money printing for now
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
Apple containers have been great, especially since each of them maps 1:1 to a dedicated lightweight VM. Aside from a bug or two in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it yet.
A general code-execution sandbox for AI code (or otherwise) that uses Apple containers is https://github.com/instavm/coderunner. It can be hooked up to Claude Code and others.
Is this materially different from giving all files on your system 777 permissions?
It's more (exactly?) like pulling a .sh file hosted on someone else's website and running it as root, except the contents of the file are generated by a LLM, no one reads them, and the owner of the website can change them without your knowledge.
Yes, because I can't read or modify your files over the internet just because you chmod'ed them to 777. But with Clawdbot, I can!
Lesson - never trust a sophomore who can’t even trust themselves (to get overly excited and throw caution to the wind).
Clawdbot is 100 sophomores knocking on your door asking for the keys.
I think most people fail to estimate the real threat that malicious prompts pose because it is not yet common. It's like when credit cards were launched: fraud, and the various ways it could be perpetrated, followed not long after. The real threats aren't visible yet, but rest assured there are actors working to take advantage, and many unfortunate examples will be seen before general awareness and precaution prevail.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted to mean that you can’t use your Claude Code subscription with the Agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
> Third-party harnesses using Claude subscriptions create problems for users and are prohibited by our Terms of Service.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine so I went with a hacky way of injecting the oauth token into the container environment. It still should be above board for TOS but it's the one security flaw that I know about (malicious person in a WhatsApp group with you can prompt inject the agent to share the oauth key).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
$70 or whatever to check if there's milk... just use your Claude Max subscription.
Last time I checked, having a continuously running background process is considered a daemon. Using SQLite as a back-end for storing the jobs doesn't make it queueless either.
/nit
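To see why the distinction barely matters, here is a minimal sketch of the claim-oldest-pending pattern that a SQLite job store amounts to. All names are hypothetical, and the table is simulated as an in-memory array to keep the snippet self-contained; the real project presumably issues the equivalent SQL against SQLite.

```typescript
// Hypothetical sketch of a SQLite-backed job store behaving as a queue.
// In SQL this claim step would be roughly:
//   UPDATE jobs SET status = 'running'
//   WHERE id = (SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1);
// Here the table is simulated with an in-memory array.

type Job = { id: number; payload: string; status: "pending" | "running" | "done" };

const jobs: Job[] = [
  { id: 1, payload: "check milk", status: "pending" },
  { id: 2, payload: "send summary", status: "pending" },
];

// Claim the oldest pending job - exactly what a queue consumer does.
function claimNext(table: Job[]): Job | undefined {
  const next = table.find((j) => j.status === "pending");
  if (next) next.status = "running";
  return next;
}

const first = claimNext(jobs);
console.log(first?.id); // 1 - FIFO semantics, i.e. a queue in all but name
```

A polling loop over such a table gives you ordering, persistence, and at-most-once claiming, which is why "SQLite instead of a queue" is mostly a relabeling.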
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
https://maordayanofficial.medium.com/the-sovereign-ai-securi...
At least 42,665 instances are publicly exposed on the internet, with 5,194 instances actively verified as vulnerable through systematic scanning. The narrative that “running AI locally = security and privacy” is significantly undermined when 93% of deployments are critically vulnerable. Users may lose faith in self-hosted alternatives. Governments and regulators already scrutinizing AI may use this incident to justify restrictions on self-hosted AI agents, citing security externalities.

I’m confused as to what these claw agents actually offer.
WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
So they basically put a wrapper around Claude in a container, which lets you send messages from WhatsApp to Claude and acts somewhat like a Siri on steroids.
The scheduled tasks seem like the major functional difference. Pretty cool.
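The pipeline above can be sketched as a plain polling loop. Every name here is an illustrative stand-in, not the project's actual API: the real integrations (baileys, SQLite, the Apple container running the Claude Agent SDK) are replaced with trivial stubs.

```typescript
// Illustrative sketch of the WhatsApp -> SQLite -> polling loop -> container pipeline.
// None of these functions come from the actual codebase; they are stand-ins.

type Message = { chatId: string; text: string };

// Stand-in for the SQLite message store the WhatsApp handler writes into.
const inbox: Message[] = [];
function enqueue(msg: Message) { inbox.push(msg); }
function dequeue(): Message | undefined { return inbox.shift(); }

// Stand-in for invoking the Claude Agent SDK inside an Apple container.
async function runInContainer(prompt: string): Promise<string> {
  return `echo: ${prompt}`;
}

// One iteration of the polling loop: take a message, run the agent, return the reply.
async function pollOnce(): Promise<string | undefined> {
  const msg = dequeue();
  if (!msg) return undefined;
  return runInContainer(msg.text); // reply goes back to the WhatsApp chat in the real flow
}

enqueue({ chatId: "family", text: "Andy: is there milk?" });
pollOnce().then((reply) => console.log(reply));
```

The scheduling feature would just be a second producer writing into the same store on a timer instead of on an incoming message.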
Has anyone tried Anthropic’s “Cowork”? How does that compare?
This project violates Claude Code's Terms of Service by automating Claude to create an unattended chatbot service that responds on third-party messaging platforms (WhatsApp, and whatever you add ...).
The exact issues:
1. Automated, unattended usage - The system runs as a background service (launchd) that automatically responds to WhatsApp messages without human intervention (src/index.ts:549-574)
2. Building a bot service - This creates a persistent bot that monitors messages and responds automatically, which violates restrictions on building derivative services on top of Claude
3. Third-party platform integration - Connecting Claude to WhatsApp (or other messaging platforms) to create an automated assistant service isn't an authorized use case.
The README itself reveals awareness of this issue at line 41:
**No ToS gray areas.** Because it uses Claude Agent SDK natively with no hacks or workarounds, using your subscription with your auth token is completely legitimate (I think). No risk of being shut down for terms of service violations (I am not a lawyer).
The defensive tone ("I think", "I am not a lawyer") indicates uncertainty about legitimacy. Using your own credentials doesn't automatically make an automated bot service compliant: Anthropic's TOS restricts using their products to build automated chatbot services, regardless of authentication method.
The core violation: transforming Claude Code into an automated bot service that operates without human intervention, which is explicitly prohibited.

1. Usage is not automated and unattended - it only responds to messages that are sent to it with a specific prefix "Andy:"
2. This is not a bot service. It is not crawling Twitter and responding to posts. It's hard to see how sending it messages through WhatsApp is any different than through ssh via the terminal
3. I don't think a custom piece of software running on my computer that pipes data from a program into the Agents SDK is a third party "platform" integration.
How is this different from running Agents SDK as part of a CI process?
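The prefix gate described in point 1 takes only a few lines. This is a hypothetical sketch, not the project's actual code; the trigger string is the "Andy:" prefix mentioned above.

```typescript
// Hypothetical trigger gate: ignore everything except messages explicitly
// addressed to the agent, so nothing runs without a human asking for it.
const TRIGGER = "Andy:"; // the prefix mentioned in the comment above

function extractPrompt(text: string): string | null {
  const trimmed = text.trimStart();
  if (!trimmed.startsWith(TRIGGER)) return null; // not for the agent: drop it
  return trimmed.slice(TRIGGER.length).trim();   // strip prefix before handing to the model
}

console.log(extractPrompt("Andy: is there milk?")); // "is there milk?"
console.log(extractPrompt("what's for dinner?"));   // null
```

Whether a human-typed prefix is enough "human intervention" for the TOS is exactly the question the thread is debating.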
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns Linux VMs.
There doesn't appear to be any way to spawn a native macOS container, which is a pity; it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/GUI access that'd make it not-lightweight anyway).
FYI: it's easy enough to install GNU tools with Homebrew (technically there's a risk of problems if applications spawn command-line tools and expect the BSD args/output, but I've not run into any issues in the several years I've been doing it).
If you've got an exploit for docker / linux containers, please share it with the class.
What I'm saying is that in practice, containers and VMs have both been quite secure.
Also, you can configure docker to run microvms too https://github.com/firecracker-microvm/firecracker-container...
Quick Start
git clone https://github.com/anthropics/nanoclaw.git
Is this an official Anthropic project? Because that repo doesn't exist. Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'être is security and the subtle implication that OpenClaw is an insecure, unreviewed pile of slop.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a +350k LOC project that I can't even begin to wrap my head around how it works.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
> This is the anti-[OpenClaw](https://github.com/anthropics/openclaw).
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
If I want to add additional capabilities for myself, I'll contribute them to the project as skills for claude code to modify the code base, rather than directly to the source. I actually want to reduce the size of the base implementation and have a PR open to strip out 300-400 LOC
1. You can live in the future, and be at the bleeding edge of the latest AI tech, reaping the benefits. Be part of the solution.
2. You can stay in the past and get left behind, at the mercy of those who took the risks.
Unfortunately, all those solutions are shaky and could lead to a ban on your account.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
I realize you used the Claude Agent SDK on purpose, but I'd really like this to be agent agnostic. Maybe I'll figure that out...