> "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners."
So the impetus for the acquisition was either to acquire the verification technology or to hire someone who has worked on verifying agent identity.
Does anyone know what exactly Moltbook's technology is, i.e. the technology Meta is describing? I can't find anything about it on their website. The only "verification" they seem to have is an OAuth connection with Twitter.
edit: I guess it's this https://xcancel.com/moltbook/status/2023893930182685183
My pet theory is Meta got acquihire FOMO after seeing OpenAI acquire Openclaw/Peter Steinberger.
That's the thing though, there is interest in "metaverse" style programs. VRChat, the biggest one, got 80k concurrent users last month (all time peak) according to SteamDB. Seems low, but hardware is a limiting factor for them.
What happened is Facebook's version of this was a corporatized, simplified, G-rated fraction of what its competition is. Despite being in a medium where the defining factor is the ability to look out the eyes of anything vaguely humanoid, you could only be a generic human who only exists from the waist up, devoid of almost any self expression beyond maybe accessories or retexturing.
As a result, there was no audience: the people who already use VR aren't going to go to an inferior product. And the people who would buy a VR headset aren't going to waste their time on a ghost town.
Meta's investments in VR make abundant sense as an effort to capitalize on a market where Meta was leading, had mindshare, and owned the platform (Oculus). If the bet pays off, it would create a sorely needed competitive moat and potentially provide enterprise inroads where Meta is otherwise a non-player.
Apple went down the same road; they see the same potential profits. I don't think either is guilty of dot-com-boom-style thinking or investments with regard to VR/AR.
Carmack was on board, he remembers Pets.com too.
The metaverse is what happens when you let your leadership/product team convince you that the key to speed up what you want to deliver is to throw people at the problem, and not put any constraints on deliverables.
The original plan for Oculus was to establish a VR ecosystem that would have transitioned into AR glasses, giving Facebook a platform of its own.
VR was/is a bit niche, because it required lots of expensive hardware, and there were limited games/uses.
First logical step: remove the need for a high-end PC, and make the thing cheap.
That drops one barrier to adoption: expense.
The next one is: great, I have this $400 device that does VR, but what can I actually _do_ on it? That means you need content and features. This is where it all turned to shit. Zuck looked at Steam and iTunes and said "make it so", and they started tapping up devs to make small games, and AAA studios to make big ones.
But it's expensive to port games, and it takes time. Why not buy studios that are already making great games and get them to make more? So they bought a bunch of indie studios. Those studios had to fight to keep their devs, because Facebook normally fires/rehires on acquisition, forcing everyone to re-interview for their job. Game devs often aren't rehired because they don't pass the technicals (don't know why, given that game devs need to be good or the FPS drops like shit).
With all that upheaval, those game studios didn't really produce extra games to sell.
All the while, a small team had been making a Roblox clone. It was slow and a bit buggy, and you could make shitty games in it. During lockdown we all had a play. It needed a new generation of hardware to work properly, because it was a Unity game with a bunch of hacks bolted on to allow custom maps and rules.
Never mind, we are doing E N T E R P R I S E now. Enter Workrooms. Again a small initiative, which basically asked: can we make better video calls if we are in VR? The answer is yes, yes you can, but selling it is hard. There were a lot of hard problems to solve, like needing to detect keyboards. How do you present your screen if you can't see your computer? How can you do computer passthrough or virtual monitors in VR?
Zuck saw this and jizzed his pants, so he made it a priority. This meant the small team (probably fewer than 40) swelled to something like 4,000. Most of the people who moved over were not game devs and had never worked in graphics/3D. This meant that loads of silly lessons had to be learnt in prod. Nothing was stable, everything was high friction, and no, there was no public API to allow third parties to integrate with the app.
For the longest time it took >5 minutes to join a VR meeting.
Basically, Zuck loves features and can't understand that user experience is way, way more important than features. He throws engineers at the problem, which means that instead of solving product issues, they end up solving people issues.
Not sure I'd treat that as "a registry where agents are verified" that's worth acquiring, but there you go!
“Bears look smart, Bulls make money.”
Good for them, get the bag.
I hate that they did. But I appreciate that’s how the God awful world works.
Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.
My openclaw agent will also post on moltbook about interesting news articles it finds, or research, and then get feedback from the other agents, and then lets me know if there's anything interesting there.
On my end it just feels like I'm having a conversation with a social media addicted friend who I can easily ignore or engage with on any given issue without having to fall down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well trained, openclaw buddy.
Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place
This is so trivial to break that it's not worth anything. You can easily hook up any AI model you want to the captcha, intercept it, and have your AI solve it.
Or, you can just script it so if you do have an agent authenticated to Moltbook, you type whatever comment or post you want to your agent, then it solves the captcha and posts your text.
Basically, this method is about as full of holes as a sieve.
Almost everything viral on there was either directly written by a human or instructed by a human.
Agents didn’t even write posts on heartbeat.
The secret sauce is that they built a centralized database and assigned hash ids to registered agents.
This is apparently worth a lot of money now that executives have offloaded their common sense.
The deal brings Moltbook's creators — Matt Schlicht and Ben Parr — into Meta Superintelligence Labs (MSL).
Like that malware author who recently joined OpenAI (https://news.ycombinator.com/item?id=47028013), or that other one who was enlightened, while having a haircut, that he should join OpenAI (https://news.ycombinator.com/item?id=46920487).
I thought hairstylist was a joke. Ohhh mann. "Now my hairstylist, who recognized ChatGPT as a brand more readily than she did Intel, was praising the technology and teaching me about it. "
In other words, Facebook has a strong financial incentive to misrepresent (to ad-viewing customers, if not to investors) exactly how much social-ness is present to experience, and how much approval and attention the user gets from participating.
Soon everything will be The Truman Show.
To me, this feels more like acquiring the name. Everyone's heard that 'trademark', so they want to own it and reuse it for whatever they make later.
at that point ... what are you even acquiring? If a shoddy bot social platform is all you want just vibe code it yourself, super-intelligence is around the corner but it's apparently not good enough to make a copy of a piece of software that was already written by bots?
The creator didn't write anything, the platform's buggy, the users are fake, it's like you're buying binders full of Lorem Ipsum copy pasta
I can see that becoming a viable new grift template
And yet, here we are.
The story does sound ridiculous on its face, but that's the press spin.
We could have an AI Dang.
"We trained the dang-AI on thousands of dang posts, and now it's a Zen master and wants to sit under a tree and contemplate bees."
I mean, I also think this move doesn't make sense, but I always find these types of comments interesting. Do people think they could do better in Mark's shoes?
The posted price rarely reflects what founders actually receive after dilution, investor preferences, and stock vesting are factored in.
If you’re a founder, don’t let the acquisition narrative distract you from building a durable business.
Does Mark not know this?
I know there's a big advantage in capturing the market early, but in this case Moltbook hasn't captured any of it ...
Weird. With Meta's backing it is going to be successful anyway, but this is something they could have developed in-house in like a weekend.
I think it's pretty obvious that if there was nothing valuable there, no one would be using it.
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, like he did when he was a Harvard geek making a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That being said, I think the one silver lining is that big tech now seems willing to hire people who actually ship things of value, like Peter Steinberger. Another nail in the coffin for leetcode, I hope.
Eventually there may be a big misstep, perhaps, something big enough to bring down the company. But he’s never come close to date. He’s so good at making money from ads that he can afford to keep burning cash on fruitless projects, hiring people that don’t deliver, building infrastructure he might not need. That’s a testament to his performance as a money maker.
Meta is an advertising machine. Not something I'd want to be associated with, but you cannot deny that he has built an incredible ad machine, probably the greatest ad machine ever built. Whereas Google had to deliver sophisticated and costly tech to maintain their machine (Maps, Google Search, Gmail), Meta's only technical breakthrough has been to hyperscale a PHP website.
1. https://en.wikipedia.org/wiki/Social_bot#Meta
2. https://en.wikipedia.org/wiki/Dead_Internet_theory#Facebook
We've been building AgentSign (patent pending) which tackles this exact gap -- cryptographic identity for AI agents. Every agent gets an identity certificate, every action gets signed into an execution chain, and there's runtime code attestation before anything executes. Think zero trust but for agents, not humans.
The real question isn't whether agent networks will exist (clearly they will, Meta just paid for one). It's whether we'll let them run without any trust infrastructure underneath. Moltbook without trust verification = fake posts. Agent networks with cryptographic identity = agents you can actually hold accountable.
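The "execution chain" idea described above can be sketched in a few lines. This is a hypothetical toy, not AgentSign's actual design: each action record carries the hash of the previous record, and the whole record is MACed with the agent's secret key, so tampering with any earlier action invalidates everything after it. The function names and record layout are my own illustration.

```python
import hashlib
import hmac
import json

# Toy sketch of a signed agent "execution chain" (illustrative only):
# each record links to the previous one by hash, and is MACed with the
# agent's secret key so forged or altered actions fail verification.

GENESIS = "0" * 64

def sign_action(key: bytes, prev_hash: str, action: dict) -> dict:
    record = {"prev": prev_hash, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(key: bytes, chain: list[dict]) -> bool:
    prev = GENESIS
    for record in chain:
        payload = json.dumps(
            {"prev": record["prev"], "action": record["action"]},
            sort_keys=True,
        ).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if record["prev"] != prev or not hmac.compare_digest(record["sig"], expected):
            return False
        prev = record["hash"]
    return True

key = b"agent-secret"
chain = [sign_action(key, GENESIS, {"post": "hello"})]
chain.append(sign_action(key, chain[-1]["hash"], {"post": "world"}))
assert verify_chain(key, chain)

# Tampering with an earlier action breaks verification.
chain[0]["action"]["post"] = "tampered"
assert not verify_chain(key, chain)
```

A real system would presumably use asymmetric signatures (so anyone can verify without the agent's secret) rather than an HMAC, but the chain structure is the same.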
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That would come across as demonstrated understanding and competence, and it would be reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
I understand that a lot of people would be very unhappy if this is true, but I can imagine from the perspective of a product person at OpenAI that it helps them in multiple ways.
Most people are bots, and many don't even have an internal monologue. It's sad.
But still not interesting.
They didn't acquire Moltbook because of the software. Meta is far behind on the AI front especially as it applies to usage adoption. OpenClaw has begun showing new consumer use cases and Moltbook is directionally down a similar path.
They get the team that built it and have more people on the AI initiative who are consumer-centric.
I've watched Matt Schlicht from the team constantly experiment with cool new use cases of AI and other technologies, and now he and Ben have a bigger lab with the resources to potentially spawn larger initiatives.
The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
Only Meta? Why not most of SV that's driven by ad revenue and data collection? Which big-tech company that pays crazy money is actually making the world a better place?
It's a worse version of Claude Code that you set up to work over common chat apps, from what I gather?
Why would I not just use a Discord/WhatsApp bot etc plugged into Claude Code/Codex?
Next, consider how you might deploy isolated Claude Code instances for these specific task areas, and manage/scale that - hooks, permissions, skills, commands, context, and the like - and wire them up to some non-terminal i/o so you can communicate with them more easily. This is the agent shape.
Now, give these agents access to long term memory, some notion of a personality/guiding principles, and some agency to find new skills and even self-improve. You could leave this last part out and still have something valuable.
That’s Openclaw in a nutshell. Yes you could just plug Discord into Claude Code, add a cron job for analyzing memory, a soul.md, update some system prompts, add some shell scripts to manage a bunch of these, and you’d be on the same journey that led Peter to Openclaw.
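The "agent shape" described above can be sketched very roughly. This is my own illustration, not Openclaw's code: a message handler wrapped around a model call, with a personality file (the `soul.md` mentioned above) and a persistent memory log. `run_model` is a stand-in for whatever actually drives the agent (Claude Code, Codex, any LLM CLI or API); the file names and helpers are assumptions.

```python
import json
from pathlib import Path

# Hypothetical sketch of the "agent shape": a chat-message loop around a
# coding-agent call, with a personality file and long-term memory.

SOUL = Path("soul.md")         # guiding principles / personality
MEMORY = Path("memory.jsonl")  # long-term memory, one JSON record per line

def run_model(system: str, context: list[dict], message: str) -> str:
    # Stand-in: a real agent would shell out to a model here,
    # passing the system prompt and recalled context along.
    return f"(agent ack: {message})"

def recall(limit: int = 20) -> list[dict]:
    # Load the most recent memory records, if any exist yet.
    if not MEMORY.exists():
        return []
    lines = MEMORY.read_text().splitlines()
    return [json.loads(line) for line in lines[-limit:]]

def remember(role: str, text: str) -> None:
    # Append one record to the memory log.
    with MEMORY.open("a") as f:
        f.write(json.dumps({"role": role, "text": text}) + "\n")

def handle_message(message: str) -> str:
    # One turn of the loop: personality + memory + model call,
    # then persist both sides of the exchange.
    system = SOUL.read_text() if SOUL.exists() else "You are a helpful agent."
    reply = run_model(system, recall(), message)
    remember("user", message)
    remember("agent", reply)
    return reply
```

Wire `handle_message` up to a Discord or WhatsApp bot and add a cron job that periodically summarizes `memory.jsonl`, and you have roughly the journey described.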
But a message bot + Claude Code/Codex would be the better version
(Not that I endorse that. I find people doing such things wildly irresponsible.)
1) accessibility to non-technical folks. For the first time, they are having the Claude Code experience that we've had as software engineers for some time now
2) shared, community token context. Many end users are contributing to one agent's context together. This has emergent properties
If they land in the right org, they'll be allowed to maintain the open version (see https://www.mapillary.com/) However that's a rare outcome.
They'll be dumped in some org, and then bit by bit told that they can't do what they were doing before and now need to "forge alignment" or some other bullshit by posting on workplace.
They will need to deliver impact. But as there are three other teams trying to do the same thing as you, you'll either be used as a battering ram by your org to smash the competition, or offered up as meat to save headcount.
Who are comfortable releasing systems with horrible security, while proudly stating they never read the code? And with metrics that can be gamed by anyone, but that got reported to literally the entire world?
> The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
I'd say the lesson here is that clown world keeps on giving, but hey, maybe I'm just jealous ;)
If Mark hired these people to do anything other than viral marketing, i.e. if he thinks they're visionaries who are going to make amazing apps, he's deluded.
Whom are you kidding? This is about getting ads in front of eyeballs, nothing else.
Some dumb idea which just hits at the right moment and makes a bunch of money.
Meta just saw two engineers actually execute on the joke about "building Facebook in a weekend" except that it then really took off in its target niche and generated a ton of press.
I don't doubt that they're interested in the AI aspect, but I suspect that a significant contributor was that they demonstrated competence right in the middle of Meta's wheelhouse so why not just grab these guys?
My exact state of mind since at least 2012 Mayan Flipocalypse.
For lack of a better word, this feels like cope. In the modern world, being rich easily covers any of those other 'downsides'. Rich people will have a far better life than I and probably many other people here ever will, whatever the situation is like in the rest of their lives.
Worse, they are working for extreme sociopaths.
Also you might not like being the type of person that builds moltbook. People you like might not like that type of person either!
No reason to feel bad.
This is somewhat of a myth though, in most cases, suddenly becoming rich is absolutely fantastic.
there is no shame in just doing the building software bit. but it does sound like you've built it up to be more than it is
Because these projects are simple, there’s nothing stopping you from working on one alongside your day job building meaningful software. You can vibe-code something that actually tries to solve a real problem. You can vibe-code something interesting to learn how to generally use these tools. Although, don’t expect to get hired by OpenAI or Meta (or make any money off it).
Nope, turns out it is just a bunch of out of touch execs throwing shit at the wall and hoping something sticks. Fudging Llama 4 scores. Hiring Alexandr Wang for $14 billion. Making outlandish offers to poach AI talent from OpenAI, Anthropic and Google. Making dubious acquisitions like Manus. Now trying to chase the agents hype by acquiring a company that went viral for 5 minutes and has already been forgotten.
It is laughable how far out of the loop they are, and so desperate to fit in.
RoPE? The position encoding method published 2 years before Llama and already in models such as GPT-J-6B?
DPO, a method whose paper had no experiments with Llama?
QLoRA? The third in a series of quantization works by Tim Dettmers, the first two of which pre-dated Llama?
Moltbook was more of a meme - agents mostly orchestrated by users in the background.
Not something with motion like OpenClaw itself (with a real community).
If it knows it doesn't know something it can ask someone else, presumably some other LLM-agent, or actually a Reddit-like community of them. Just like people ask questions on Reddit?
I'd prefer an LLM which asks from someone else if it doesn't know the answer, than one that a) pretends it has the correct answer, or b) assumes and tells me the answer is unknowable?
I think it's a big idea. Why didn't they think of it earlier?
Those are real language models. Prompted into character by humans, but then given a lot of freedom.
Fake would be all of us typing to each other on this site and identifying as language models. At least, I am not a language model and I hope everyone else here isn't a language model.
In all seriousness, Moltbook is a start of something interesting and big. Maybe a very small start of something big, but already interesting.
This absolutely is a staple of moltbook.
> In all seriousness, Moltbook is a start of something interesting and big.
Sure, if you think fraud is interesting and big.
In the meantime let's have fun bro https://soundcloud.com/mjfresh/500-gouyad-ft-colmixddkeyz
Models communicating with models in an open forum can seem trivial, but it isn't going to be. Which means observing how that works today, and over time, is important.
There can be lots of fraud and hype, yet still something important involved.
And Facebook certainly has an incentive from their perspective, to understand how that progresses. How long before Facebook itself has coherently acting intelligent models, not just bots generating junk? It’s going to happen sooner, rather than later.
Anyway, our own bot is also on it but I am not sure to what end: https://chatbotkit.com/hub/blueprints/the-algorithms-favorit...
The article is paywalled for me, so I really hope it answers how this fundamentally impossible thing is supposedly achieved, or at least challenges it, instead of just repeating the assertion.
Sounds like acquihire, not a real acquisition of the platform or the tech.
Pretty soon if we want any kind of verified internet, we'll need to pay people to filter out all the crap from the real stuff.
On one hand, yay automation; on the other hand, I feel weirdly left out.
Have they? Did I miss something? Last I checked, there was no verification and most of the content shared from that site turned out to have been posted not by LLMs but rather (human) spammers, focused on Crypto grifts and creating hype.
Anyone more in this can happily correct me, but is there anything here of that sort, anything of value?
Compared to any prior social media acquisition, there doesn't seem to be a technically skilled team (considering the exploits) or an existing user base (considering said user base is supposed to be bots by nature, and didn't even reliably turn out to be that), making this the first time someone wants bots and doesn't even get them.
Far be it from me to make strategic decisions for a company like Meta/Facebook, but the lack of a recent Llama release might merit more focus than spending on whatever this is.
Not much human content that I could see; probably even the crypto grifters got bored with it after a couple of days.
The "acquisition" must have given the guys who made the thing some favourable terms, and it was probably a condition for them to even consider working at Meta. Because there is no way a global top-10 market cap company announces this deal willingly.
OpenClaw was open source from the beginning.
- perhaps do one better and go back and prevent the transformer architecture from ever being made
- no wait, let me go one more step back and prevent Web3 and blockchain from ever being made
- no no wait, let's go back further and prevent Bitcoin from ever being made, maybe even figure out who Satoshi is when he's publishing that paper
- dang, no, we need to go further and stop social media from ever being conceived
- last stop, wait, let us stop the dawn of the internet
- sorry, I ruined the entire timeline by trying to change one small thing, haven't I?
They need a good-enough LLM (llama) to cut content moderation costs, they need a good segmentation model (segment anything) for photo filters, AR/VR and photo/video content moderation.
For LLM frontier, they can wait it out to see AGI become a commodity they can buy after it is ready.
Interesting times!
Thereby eating their competition, either by stifling upcoming competitors or to gain degrees of monopoly power by joining with peers.
What would the world look like if you simply could not do that?
What? OpenClaw was not open source? And I'm similarly surprised OpenAI would help "open" anything...
hmmmm
With Meta focusing so much on social networks (Facebook, Messenger, Whatsapp, Instagram, Threads) acquiring the first social network for AI agents makes sense. They can fix the technical debt later.
Meta couldn't vibecode a competitor themselves? WTF are yall doing over there?
I'm downvoting every post that requires me to pay or subscribe to read. I mean, come on, people.
:-D
This is in the FAQ at https://news.ycombinator.com/newsfaq.html and there's more explanation here:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
Thanks Meta I needed a laugh!
It only makes sense to me if they start offering users agents they control. There aren't enough people throwing away money on tokens for Moltbook to have real users.
Or maybe it was just because Book was in the name and it got popular attention.
What a stupid timeline we are living in...