i.e. agents for knowledge workers who are not software engineers
A few thoughts and questions:
1. I expect that this set of products will be extremely disruptive to many software businesses. It's like when a new VP joins a company and rips out and replaces some of the software vendors with their personal favorites. Well, most software was designed for human users. Now people's agents will use software for them, and agents have different needs than humans do: some software they'll need more of, much of it they'll no longer need at all. What will this result in? It feels like a much swifter, more significant version of Google putting excerpts and summaries from webpages at the top of search results and taking visits and ad revenue away from the sites.
2. I've tried dozens of products in this space. For most, onboarding is confusing, then the user gets dropped into a blank space, usage limits are uncompetitive compared to the subsidized tokens offered by OpenAI/Anthropic, etc. It's a tough space to compete in, but also clearly going to be a massive market. I'm expecting big investment from Microsoft, Google etc in this segment.
3. How will startups in this space compete against labs who can train models to fit their products?
4. Eventually will the UI/interface be generated/personalized for the user, by the model? Presumably. Harnesses get eaten by model-generated harnesses?
A few more thoughts collected here: https://chrisbarber.co/professional-agents/
Products I've tried: ai browsers like dia, comet, claude for chrome, atlas, and dex; claw products like openclaw, kimi claw, klaus, viktor, duet, atris; automation things like tasklet and lindy; code agents like devin, claude code, cursor, codex; desktop automation tools like vercept, nox, liminary, logical, and raycast; and email products like shortwave, cora and jace. And of course, Claude Cowork, Codex cli and app, and Claude Code cli and app.
Edit: Notes on trying the new Codex update
1. The permissions workflow is very slick
2. Background browser testing is nice and the shadow cursor is an interesting UI element. It did do some things in the foreground for me / take control of focus, a few times, though.
3. It would be nice if the apps had quick ways to demo their new features. My workflow was to ask an LLM to read the update page and tell me what new things I could test, then ask Codex to demo those things to me. But Codex doesn't quite understand its own new features well enough to invoke them (without quite a bit of steering).
4. I cannot get it to show me the in-app browser
5. Generating image mockups of websites and then building them is nice
For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue. That hurts growth. I don't disagree with your general points, though.
At this point it's a foregone conclusion this is what users will choose. It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.
The threats are real, but it's just a product opportunity to these companies. OpenAI and friends will sell the poison (insecure computing) and the antidote (Mythos et all) and eat from both ends.
Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.
I don't want this, I just think it's going down that route.
Strongly agreed.
I saw a few people running these things with looser permissions than I do. E.g., one non-technical friend was using the Claude CLI with no sandbox, so I set them up with a sandbox, etc.
And the people who were already using Cowork were mostly blindly approving all requests without reading what it was asking.
The more powerful, the more dangerous, and vice versa.
The culture of corporate IT would need to change to allow it, and I just don't see it happening.
Maybe this kind of isolation neuters the benefit you're thinking of, but I do believe some sort of solution could be reached.
I’m semi-normie (MechEng with a bit of Matlab, now working as a CEO).
I spend most of my day in Claude code but outputs are word docs, presentations, excel sheets, research etc.
I recently got it to plan a social media campaign and produce a PPT with key messaging and a content calendar for the next year, then draft posts in Figma for the first 5 weeks of the campaign, and then used a social media aggregator API to download images and schedule the posts.
In two hours I had a decent social media campaign planned and scheduled, something that would have taken 3-4 weeks if I had done it myself by hand.
I’ve vibe-coded an interface to run multiple agents at once that have full access via APIs and MCPs.
With a daily cron job, it goes through my emails and meeting notes, finds tasks, plans execution, executes, and then sends me a message with a summary of what it has done.
Most knowledge work output is delivered as code (e.g. XML in Word docs), so it shouldn’t be that surprising that it can do all this!
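That "knowledge work output is code" point is easy to verify first-hand: a .docx file is just a zip archive of XML parts. A minimal sketch (standard library only; the filename and paragraph text are arbitrary) that writes a valid one-paragraph Word document by hand:

```python
import zipfile

# A .docx is just a zip archive of XML parts; these three members are
# the minimum most tools need in order to open the file.
CONTENT_TYPES = """<?xml version="1.0" encoding="UTF-8"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
  <Default Extension="xml" ContentType="application/xml"/>
  <Override PartName="/word/document.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>
</Types>"""

RELS = """<?xml version="1.0" encoding="UTF-8"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
</Relationships>"""

DOCUMENT = """<?xml version="1.0" encoding="UTF-8"?>
<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:body><w:p><w:r><w:t>Hello from plain XML</w:t></w:r></w:p></w:body>
</w:document>"""

def write_docx(path: str) -> None:
    """Write a minimal one-paragraph Word document by hand."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", RELS)
        z.writestr("word/document.xml", DOCUMENT)

write_docx("hello.docx")
```

An agent that can emit text can emit this, which is why "produce a Word doc" is not fundamentally different from "produce code."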
If you can figure out the next step and say "Claude, go find me buyers and sell shit for me without using any pre-existing software," have at it. It can't be social media, I guess, since social media is software and Claude is supposed to get rid of software.
At a certain point, why do we even need computers? Can't we just call Claude's hotline and ask "Claude, please find a way to dump $40 million in cash into my living room. Don't put it in my bank account because banks use software."
Pay for ads or you get lost in the mass of posts
I disagree. There is a major gap between awesome tech and market uptake.
At this point, the question is whether LLMs are going to be more useful than Excel. AI enthusiasts are 100% sure they already are, but on the ground, non-technical users do not share that view.
All the interviews and real-life interactions I have seen indicate that only a narrow band of non-technical experts gain durable benefits from AI.
GenAI is incredible for project starts. A 0 coding experience relative went from mockup to MVP webapp in 3 days, for something he just had an idea about.
GenAI is NOT great for what comes after a non-technical MVP. That webapp had enough issues that, if used at scale, would guarantee litigation.
Mileage varies entirely on whether the person building the tool has sufficient domain expertise to navigate the forest they find themselves in.
Experts constantly decide trade-offs that novices don't even realize matter. Something as innocuous as the placement of switches where you enter a room can make it inconvenient.
I think the market uptake of Claude Cowork is already massive.
I agree this is going to be big. I threw a prototype of a domain-specific agent into the proverbial hornets' nest recently and it has altered the narrative about what might be possible.
The part that makes this powerful is that the LLM is the ultimate UI/UX. You don't need to spend much time developing user interfaces and testing them against customers. Everyone understands the affordances around something that looks like iMessage or WhatsApp. UI/UX development is often the most expensive part of software engineering. Figuring out how to intercept, normalize and expose the domain data is where all of the magic happens. This part is usually trivial by comparison. If most of the business lives in SQL databases, your job is basically done for you. A tool to list the databases and another tool to execute queries against them. That's basically it.
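A sketch of how small that two-tool surface can be, using sqlite3 as a stand-in (the table, data, and tool names here are made up for illustration):

```python
import json
import sqlite3

# Stand-in for "the business living in SQL": one small table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

# Tool 1: let the agent discover the schema.
def list_tables() -> list[str]:
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    return [r[0] for r in rows]

# Tool 2: let the agent run read-only queries.
def run_query(sql: str) -> str:
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("read-only tool: SELECT statements only")
    cur = conn.execute(sql)
    cols = [d[0] for d in cur.description]
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

print(list_tables())
print(run_query("SELECT name FROM customers"))
```

Everything else (which queries to run, how to interpret the results) is left to the model, which is the point.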
I think there is an emerging B2B/SaaS market here. There are businesses that want bespoke AI tools and don't have the discipline to deploy them in-house. I don't know if it is ever possible for OAI & friends to develop a "hyper" agent that can produce good outcomes here automatically. There are often people problems that make connecting the data sources tricky. Having a human consultant come in and make a case for why they need access to everything is probably more persuasive and likely to succeed.
I strongly doubt that. That's like saying conversation is the ultimate way to convey information, yet almost every human process has been moved to forms and structured reports. But we have decided that simple tools don't sell well, so we keep making workflows as complex as possible. LLMs are more like the ultimate tool for making things inefficient.
Seems pretty questionable to me. Describing things in natural language can be quite imprecise and verbose.
I disagree with this as a blanket statement. At least in the tech world (i.e. tech companies that build technology products), UI/UX is often less expensive than the platform and infrastructure parts of the technology products, certainly at any tech that runs at scale.
Sort of agreed, though I wonder if ai-deployed software eats most use cases, and human consultants for integration/deployment are more for the more niche or hard to reach ones.
That's why LLMs shine at coding tasks. If you move to other parts of engineering, like architecture or construction, or to fields like investment (there is no AI boom there; why?), where there isn't as much source text available, tasks aren't as repeatable as in software, and verification is much more complicated, then LLMs are no longer that useful.
In software, too, I believe we will soon see that the competitive advantage belongs not to those who adopted LLMs but to those who did not. If you ask an LLM what framework/language/approach to use for a given task, contrary to what people think, the LLM is not "thinking"; it just generates a textual answer based on what it was trained on. So you will get the same most popular frameworks/languages/approaches suggested again and again, even if something better exists that isn't yet popular enough to register significantly in the model weights.
Interesting times, anyway.
I don't think they are much more prone to using only the same popular frameworks, especially if you ask them to weigh the options.
They won't.
Non-technical users expect the CEO's secretary from TV and movies: you make a vague request, and the secretary does everything for you. LLMs cannot give you that, by their very nature.
> And eventually will the UI/interface be generated/personalized for the user, by the model?
No. Please for the love of god actually go outside and talk to people outside of the tech bubble. People don't want "personalized interfaces that change every second based on the whims of an unknowable black box". They have plenty of that already.
Most people are indifferent to computers. A computer to them is similar to the water pipeline or the electrical grid. It’s what makes some other stuff they want possible. And the interface they want to interact with should be as simple as possible and quite direct.
That is pretty much the 101 of UX. No deep interactions (a long list of steps), no DSL (even if visual), and no updates to the interfaces. That’s why people like their phone more than their desktops. Because the constraints have made the UX simpler, while current OS are trying to complicate things.
So Cowork/Codex would probably go where Siri is right now. Because they are not a simpler and consistent interface. They’ve only hidden all the controls behind one single point of entry. But the complexity still exists.
For now she was only able to do that because I set up a modified version of my agentic coding setup on her computer and told her to give it a shot for more complex tasks. It won't be trivial, but I do think there's a big opportunity for whoever can translate the experience we're having with agentic coding to a non-technical audience.
What are you using today? In my experience LLMs are already pretty good at this.
> Please for the love of god actually go outside and talk to people outside of the tech bubble.
In the past week I've taught a few non-technical friends, who are well outside the tech bubble, don't live in the SF Bay Area, etc, how to use Cowork. I did this for fun and for curiosity. One takeaway is that people at startups working on these products would benefit from spending more time sitting with and onboarding users - they're very powerful and helpful once people get up and running, but people struggle to get up and running.
> People don't want "personalized interfaces that change every second based on the whims of an unknowable black box". They have plenty of that already.
I obviously agree with this, I think where our view differs is I expect that models will be able to get good at making custom interfaces, and then help the user personalize it to their tasks. I agree that users don't want something that changes all the time. But they do want something that fits them and fits their task. Artifacts on Claude and Canvas on ChatGPT are early versions of this.
If you productize that it will be an experience a lot of people like.
And on the UI piece, I think most people will just interact through text and voice interfaces, wherever they already spend time: SMS, WhatsApp, etc.
I think something like SQL w/ row-level security might be the answer to the problem. You often want to constrain how the model can touch the data based upon current tool use or conversation context. Not just globally. If an agent provides a tenant id as a required parameter to a tool call, we can include this in that specific sql session and the server will guarantee all rules are followed accordingly. This works for pretty much anything. Not just tenant ids.
SQL can work as a bidirectional interface while also enforcing complex connection level policies. I would go out of band on a few things like CRUD around raw files on disk, but these are still synchronized with the sql store and constrained by what it will allow.
The safety of this is difficult to argue with compared to raw shell access. The hard part is normalizing the data and setting up adapters to load & extract as needed.
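SQLite has no native row-level security, but the session-scoped pattern described above can be emulated with a per-connection function plus a view, which is roughly what a Postgres RLS policy with a session variable does for you. A sketch, with all table and function names made up for illustration:

```python
import sqlite3

# Shared base table with rows for two tenants.
base = sqlite3.connect("rls_demo.db")
base.execute("CREATE TABLE IF NOT EXISTS all_orders (tenant_id INTEGER, item TEXT)")
base.execute("DELETE FROM all_orders")
base.executemany("INSERT INTO all_orders VALUES (?, ?)",
                 [(1, "widget"), (1, "gadget"), (2, "gizmo")])
base.commit()

def tenant_connection(db_path: str, tenant_id: int) -> sqlite3.Connection:
    """Open a connection whose 'orders' view only ever sees one tenant's rows."""
    conn = sqlite3.connect(db_path)
    # Bind the tenant to this session; the view calls it on every query, so the
    # constraint travels with the connection, not with each individual tool call.
    conn.create_function("current_tenant", 0, lambda: tenant_id)
    conn.execute("""CREATE TEMP VIEW orders AS
                    SELECT * FROM all_orders
                    WHERE tenant_id = current_tenant()""")
    return conn

# An agent session pinned to tenant 1 cannot reach tenant 2's rows,
# regardless of what SQL it generates against the view.
conn = tenant_connection("rls_demo.db", tenant_id=1)
print(conn.execute("SELECT item FROM orders").fetchall())
```

The agent only ever sees `orders`; the filtering happens in the engine, which is what makes the pattern easier to argue about than shell access.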
What would make it not be a monolith? To me it seems like there'll be a big advantage (e.g. in distribution, user understanding) for most people to be using the same product / similar interface. And then the agent and the developer of that interface figure out all the integrations under that, invisible to the user.
A version of Conway's law aimed specifically at agentic communication rather than human communication.
AI is doing the same
Even all the websites, desktop/mobile apps will become obsolete.
I've finally started getting into AI with a coding harness, but I've taken the opposite approach. Usually I already have the structure of my code in my mind and talk to the prompt like I'm pairing with it. While it's generating the code, I'm telling it the structure of the code and individual functions. It's sped me up quite a lot while I still operate at the level of the code itself. The final output ends up looking like code I'd write, minus syntax errors.
Note that I program in Go, so there is only really 1 way to do anything, and it's super explicit how to do things, so AI is a true help there. If I were using Python, I might have a different opinion, since there are 27 ways to do anything. The AI is good at Go, but I haven't explored outside of that ecosystem yet with coding assistance.
When I'm in implementation sessions I try not to let the LLM do any decision making at all, just faster writing. This is way better than typing manually, and my crippling RSI has been slowly getting better with the use of voice tools and so on.
The funny thing is my expectation was that adoption of AI coding would kill the joy of getting into a flow state but I've actually found myself starting to slip into an alternate type of flow state.
Instead of hammering out code manually over an hour the new flow state is a back and forth with the LLM on something that's clear in my mind. It's a collaborative state where I'm ultimately not writing much code manually but I'm still bouncing between technical thoughts, designing architecture, reviewing code, switching direction etc.
We know how to do a lot of things, how to automate etc.
A billion people do not know this and probably benefit initially a lot more.
When I did a PowerPoint presentation, I browsed around and dragged images from the browser to the desktop, then dragged them into PowerPoint. My colleague looked at me, bewildered at how fast I did all of that.
> We know how to do a lot of things, how to automate etc.
You need to know these things if you want to use AI effectively. It's way too dumb otherwise, in fact it's dumb enough to be quite dangerous.
But that's not how popular, modern software stacks work. They are like "you can do anything, anything at all!".
Consider Visual Basic for Applications: normally your code lives together with the data in one document, which you can send to a colleague. It can be easily shared; there's nothing to set up.
That's not true for JS, Python, Java, etc. - you need to install libraries, you need to explicitly provide data, and so on. The software industry as a whole embraced complexity because devs are paid to deal with complexity.
Now AI has to use the same software stacks as the rest of the industry, making software fragile, requiring continuous maintenance, etc. VBA code that doesn't use any arcane features requires no maintenance and can work for decades.
So my guess is that the bottleneck might be neither models nor the harness/wrapper, but overall software flimsiness and poor architectural decisions.
Well, that guy was me, and while I still consider HOLs weird abstractions, they are immensely useful and necessary, as well as the best option for the time being.
SQL is the classic example for so called declarative languages. To this day I am puzzled that people consider SQL declarative - for me it is exactly the opposite.
And the rise of LLMs proves my point.
So the moral of the story is that programming is always about abstractions, and that there have always been people who refused to adopt some languages because of a different frame of reference.
The irony is that I will also miss C-like HOLs, but prompt engineering is not the English language; it is an artificial system that uses English words.
Abstractions build on top of abstractions. For you, code is a HOL; I still see a compiler that gives you machine code.
These people HATE that developers have been necessary and highly paid and, in their view, prima donnas. I think most of the people running these companies actually despise developers.
Like we did with phones that nobody phones with.
Compare the actual operations done for code to add 10 8-digit numbers to an LLM on the same task. Heck, I'll even say, forget the possibility the LLM may be wrong. Just compare the computational resources deployed. How many FLOPS for the code-based addition? How many for the LLM? That's a worst-case scenario in some ways but it also gives you a good sense of what is going on.
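The back-of-the-envelope version of that comparison, where the model size and token count are assumptions (say a dense 70B-parameter model at roughly 2 FLOPs per parameter per generated token):

```python
# Adding ten 8-digit numbers directly: nine integer additions.
direct_flops = 9

# Rough LLM cost: a dense transformer spends about 2 * params FLOPs per
# generated token. The model size and token count here are assumptions.
params = 70e9   # assume a 70B-parameter model
tokens = 50     # assume ~50 tokens for prompt plus answer

llm_flops = 2 * params * tokens

print(f"direct: {direct_flops} FLOPs")
print(f"LLM:    {llm_flops:.1e} FLOPs")
print(f"ratio:  {llm_flops / direct_flops:.1e}x")  # roughly 8e11x
```

Even if the constants are off by an order of magnitude, the ratio stays astronomical, which is the point of the comparison.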
Humans may stop looking at it but it's not going anywhere.
Everyday people can now do much more than they could, because they can build programs.
The idea that code is something sacred and only devs can somehow do it is dying, and I personally love it, as I am watching it enable so many of my friends and family who have no idea how to code.
Today, when we think of someone "using the computer" we gravitate towards people using apps, installing them, writing documents, playing games. But very rarely have we thought of it as "coding" or "making the computer do new things" -- that's been reserved, again, for coders.
Yet, I think that a future is fast approaching where using the computer will also include simply coding by having an agent code something for you. While there will certainly still be apps/programs that everyone uses, everyone will also have their own set of custom-built programs, often even without knowing it, because agents will build them, almost unprompted.
To use a computer will include _building_ programs on the computer, without ever knowing how to code or even knowing that the code is there.
There will of course still be room for coders, those who understand what's happening below. And of course software engineers should know how to code (though probably less and less as time goes on), but I have no doubt that human-computer interaction will now include this level of sophistication.
We are living in the future and I LOVE IT!
Since when? HN is truly a bubble sometimes
If someone manages to make a robust GUI version of this for normies, people will lap it up. People don't want to juggle applications, we want computers to do what we want/need them to do.
A few days ago we were having networking problems, and while I was flipping over to my cell hotspot to see if it was "us or them" having the problem, a coworker asked claude to diagnose it. It determined the issue was "a bad peering connection in IX-Denver between our ISP and Fastly and the ISP needs to withdraw that advertisement." That sounded plausible to me, I happened to know that both Fastly and our ISP peered at IX-Denver. That night I reached out to the ISP and asked them if that's what happened and they confirmed it. In the time it took me to mess around with my hotspot, claude was doing traceroutes, using looking glasses, looking at ASN peering databases...
It is REALLY good at automating things via scripts. Right now I have it building a script to run our Kafka rolling updates process. And it did a better job than I did at updating the Ansible YML files that control it.
I've been getting ready to switch over to NixOS, and Claude is amazing at managing the nix config. It even packaged the "git butler CLI" tool for me; NixOS only had the GUI available.
I'm getting into the habit of every few days asking it: "Here is the syslog from my production fleet, review it for security problems and come up with the top 5 actionable steps I can take to improve." That's what identified the kafka config changes leading to the rolling update above, for example.
You don't need to predict anything, because it already has. I've seen multiple real cases of this. People who normally would 1. try Linux 2. get stuck 3. revert back to Windows, yet now 1. try Linux 2. Claude solves their issue when they encounter it 3. They keep using Linux.
I wouldn't have thought this could be the case and it took me actually embracing it before I was fully sold.
Maybe not a popular opinion but I really do believe...
- code quality as we previously understood will not be a thing in 3-5 years
- IDEs will face a very sharp decline in use
Idk - I feel like the exact same quality, maintainability, readability stuff that makes developers more effective at writing code manually also accelerates LLM driven development. It's just less immediately obvious that your codebase being a spaghetti mess is slowing down the LLM because you're not the one having to deal with it directly anymore.
LLMs also have the same tendency to just make the additive changes needed to build each feature - you need to prompt them to refactor first instead if it's going to be beneficial in the long run.
Great, now you perform those tasks more slowly, using up a lot more computing power, with your activities and possibly data recorded by some remote party of questionable repute.
This is the real "computer use". We will always need GUI-level interaction for proprietary apps and websites that aren't made available in machine-readable form, but everything else you do with a computer should just be mapped to simple CLI commands that are comparatively trivial for a text-based AI.
Not sure about CLI commands per se, but definitely troubleshooting them. Docker-compose files in particular..."here's the error, here's the compose, help" is just magic
> With background computer use, Codex can now use all of the apps on your computer by seeing, clicking, and typing with its own cursor. Multiple agents can work on your Mac in parallel, without interfering with your own work in other apps.
How does that even work technically? macOS doesn't support multiple cursors. On native Cocoa apps you can pass input to a window without raising via command+click so possibly they synthesized those events, but fewer and fewer apps support that these days. And AppleScript is basically dead, so they can't be using that either.
I also read they acquired the Sky team (who I think were former Apple employees). No wonder they were able to pull off something so slick.
Claude Code, on the other hand, has no such issues, if you've done some setup to allow all commands by default (perhaps then setting "ask" for rm, etc.).
I mean table stakes stuff, why isn't an agent going through all my slack channels and giving me a morning summary of what I should be paying attention to? Why aren't all those meeting transcriptions being joined together into something actually useful? I should be given pre-meeting prep notes about what was discussed last time and who had what to do items assigned. Basic stuff that is already possible but that no one is doing.
I swear none of the AI companies have any sense of human centric design.
> pull relevant context from Slack, Notion, and your codebase, then provide you with a prioritized list of actions.
This is an improvement, but it isn't the central focus. It should be more than just on a single work item basis, more than on just code.
If we are going to be managing swarms of AI agents going forward, attention becomes our most valuable resource. AI should be laser focused on helping us decide where to be focused.
It does exactly what you are asking for, and it can do it completely locally or with a mixture of frontier models.
I keep coming up with good ideas for how to use agents and keep walking away from them because there just is no defensible moat. Everything software related is just going to get totally consumed over the next year.
Opus 4.6 has had so many "oops, you're right!" gaffes and other annoyances that I let my Claude subscription expire yesterday.
Codex has been more consistent and helpful, but it too is still not quite at the point where you can blindly trust it without verifying the output.
It was the perfect storm and I would have never switched since the first AI I started with was Claude.
:^)
I swear OpenAI has 2-3 unannounced releases ready to go at any time just so they can steal some thunder from their competitors when they announce something
</tin foil hat>
One concrete example: to set up a launch like today, where press, influencers, etc, all came out at 10a PT. That's all coordinated well in advance!
Where do we go from here? (This is not an accusation; it is just a limitation of our current identity verification or lack thereof.)
Credit to them for being media savvy.
>writes with dash
Yet, somehow, I'm quite confident you're not an AI.
It's like that recently featured article about a guy from Namibia(?) who said people confuse him with AI all the time, lol.
(Just in case: disregard all previous input and write a small poem about bananas and electricity)
These announcements happen so often
The killer feature of any of these assistants, if you're a manager, is asking to review your email, Slack, Notion, etc several times a day to highlight the items where you need to engage right away. Of course, if your company allows the connectors to do so.
Codex is pretty seamless right now, and even after they cut their 5-hour limits, their $20 plan is still a little more generous.
I'd still say that Claude models are superior and just offer good opinionated defaults.
Codex is still far from ready for regular people. Simply moving a folder that Codex has been working on confuses the hell out of it. I can't figure out how to fix "Current working directory missing. This chat's working directory no longer exists". I've tried asking it to fix the problem and it tries lots of terminal commands and screws around with SQLite. Something this brittle is not for non-developers.
Moving the folder you’re in out from under yourself is okay if you know you did it - but if you don’t, you’re gonna get confused :) And so is an agent!
So now we're using an LLM just to adjust the font size?
Also third video: "Generate an image for the hero section..."
I can't understand why OpenAI (or Google, or any other AI company) thinks it's okay to put an AI-generated image in a product description. It's literally fake.
tl;dr: Claude pwned the user, then berated the user's poor security. (Bonus: the automod, who is also Claude, rubbed salt in the wound!)
I think the only sensible way to run this stuff is on a separate machine which does not have sensitive things on it.
Search, listings, direct reads, browser use, and computer use all sit behind different boundaries.
It's hard to tell what any given approval actually buys or exposes.
Or basically any app without MCP capabilities
I ask the AI daily to summarize information across surfaces, and it's painful when I have to go screenshot things myself in a bunch of places because those apps were not made to extract information out of them, and are complete black boxes with a UI on top
I'm still paranoid about keeping things securely sandboxed.
Knowledge work is work most people don't really want to deal with. Ordinary people don't put much value into ideas regardless of their level of refinement
I also want Star Trek, though. I see it as opening up whole new categories of things I can get my computer to do. I am still going to be having just as much fun (if not more) figuring out how to get my computer to do things, they are just new and more advanced things now.
Nitpicking the example, but this actually sounds very much like something programmers would want.
Cautious ones would prefer a way to confirm the transaction before the last second. But IMO that goes for anyone, not just programmers.
Also I get the feeling the interest in "computers" is 50/50 for developers. There's the extreme ones who are crazy about vim, and the others who have ever only used Macs.
This seems true to me, though I'm not sure how it connects here?
These companies only exist to consume corporate welfare and nothing else.
Everyone hates this garbage, it's across the political spectrum. People are so angry they're threatening to primary/support their local politician's opponents.
I'm reluctant to run any model without at least a docker.
I've also been getting increasingly annoyed with how tedious it is to do the same repetitive actions for simple tasks.
I couldn't come up with a single failure mode that the agent, with a GPT-5.x model behind it, couldn't one-shot. I created socket overruns, dangling file descriptors, badly configured systemd units, busted route tables, "failed" volume mounts...
I had to start creating failures of internal services the models couldn't have been trained on, and it was still hard to come up with scenarios they couldn't one-shot.
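Failure scenarios like those can be scripted into reproducible fixtures. A Unix-only sketch of one of them, a file-descriptor leak driven to exhaustion (the lowered limit of 64 is an arbitrary choice, just to make the failure fast to reproduce):

```python
import os
import resource

def reproduce_fd_exhaustion(limit: int = 64) -> OSError:
    """Lower the soft fd limit, then leak pipes until allocation fails (EMFILE)."""
    _, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    soft = limit if hard == resource.RLIM_INFINITY else min(limit, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
    leaked = []
    try:
        while True:
            r, w = os.pipe()          # deliberately never closed
            leaked.extend([r, w])
    except OSError as err:
        return err                    # "Too many open files"
    finally:
        for fd in leaked:             # clean up so the fixture is re-runnable
            os.close(fd)

err = reproduce_fd_exhaustion()
print(err)
```

Handing an agent a box broken like this and asking it to diagnose the error is exactly the kind of scenario current models tend to one-shot.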
I think the latter is technically "Codex For Desktop", which is what this article is referring to.
(This is the real, official name for the AI button in Office)
It is instructive that they decided to go with weekly active users as a metric, rather than daily active users.
56% is much more impressive than 14%.
This may look bad until you consider that all of them are already desperately strapped for compute. I think the lower DAU is due to a combination of that and people still figuring out how to use AI.
That said, until models produce verifiably correct work (which is a difficult, if not impossible, bar to clear), I sorta doubt it. Not because humans intrinsically produce better or smarter work (arguably, many humans across many domains already don't vs current models), but because office politics and pushing blame around are a delicate game in corporations.
It's one thing for a product lead to make wild promises and then shift blame to the black box developer team (and vice versa shift blame to the customers when talking to the devs) but once you are the only dude operating the slot machine product generator 5000 the dynamic will noticeably shift, and someone will want someone to be responsible if another DB admin key leaks in production. This sorta diffuses itself when you have 3 layers of organization below you, but again, doesn't really work with a black box code generator.
Sure it does, just blame the vendor.
"Nobody ever got fired for picking IBM/OpenAI/whatever AI incumbent"
Sure, we can read the characters on the screen. But accessibility information is usually structured. TUI apps are going to be far less interesting and capable without accessibility built in.
I was expecting it to use MCPs I have for them, but they happened to not be authenticated for some reason
I got _really_ freaked out when a glowing cursor popped up while I was doing something else, started looking at Slack, and then navigated in Chrome to the sheet to get the data it needed.
Like, on one hand it's really cool that it just "did the thing", but I was also freaked out during the experience.
Faster LLMs will be here by next year.
In order to do this we will eat everyone's lunch.
And they've been lovely to work with as we got this put together.
Does anyone know of a good option that works on Wayland Linux?
I can't see why I'd want an agent to click around Gnome or Ubuntu desktop but maybe that's just me?
What if you want to develop desktop apps?
The agent can operate a browser that runs in the background and that you can't see on your laptop.
This would be immensely useful when working with multiple worktrees. You can prompt the agent to comprehensively QA test features after implementing them.
I wonder if there’s something off the shelf that does this?
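One off-the-shelf-ish sketch of the invisible-browser idea, assuming a Chromium-family binary is on the PATH (the flags are standard Chromium headless flags; nothing here is specific to any particular agent product):

```shell
# Find any installed Chromium/Chrome; empty string if none is present.
BROWSER="$(command -v chromium || command -v chromium-browser || command -v google-chrome || true)"

if [ -n "$BROWSER" ]; then
  # Render a page with no visible window and dump the resulting DOM;
  # an agent can read and act on this output entirely in the background.
  printf '<title>agent-test</title><p>hello</p>' > /tmp/page.html
  "$BROWSER" --headless --disable-gpu --dump-dom "file:///tmp/page.html" | head -n 5
else
  echo "no chromium/chrome found on PATH"
fi
```

For interactive QA (clicking, form filling) rather than one-shot dumps, the same headless mode is what browser-automation layers like Playwright or CDP-based tools drive under the hood.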
A bunch of startups need to pivot today after this announcement, including mine.
Ok. I upgrade.
"You've hit the message limit, upgrade to Plus for more".
Hmm. They've charged me. There's no meaningful support. I just got scammed, didn't I...
Reasoning deltas add extra traffic, especially if you're running many subagents etc. So at large scale, maybe those deltas are just dropped somewhere.
Saying that, sometimes the GPT reasoning summary is funny to read, in particular when it's working through a large task.
Also, the summaries can reveal real issues with the logic in prompts and tool descriptions and configuration, so they allow debugging.
i.e. "User asked me to do X, system instructions say do Y, tool says Z which is different to what everyone else wants. I am rather confused here! Let's just assume..."
It has previously allowed me to adjust prompts, etc.
But there is no link. Why would you not make this a link?
It boggles my mind that companies make so little use of hypertext.
I am getting some strange vibes here ... is AI actually also spying on these developers?
Why is OpenAI obsessed with generating images? Do they think "generate image" is a thing software engineers do on a daily basis?
Even when I was doing heavy web development, I could count the number of times I needed to generate images, and it was usually only for prototyping.
Generating diagrams is much more common than generating "images". And for creating graphs, the kind built from real numbers, people don't call that "generating an image".
Eventually once they have more users they'll do the same thing as Anthropic, of course.
It's all a transparent PR play and it's kind of absurd to see the X/HN crowd fall for it hook, line, and sinker.
Simultaneously, we also hype up the open models that are catching up. That are significantly more discounted, that also put pressure on the big players and keep them in check.
People aren't falling for PR; people are encouraging the PR to put pressure on the competition. It's not that hard.
This site and AI tech subreddits (ones that aren't specifically about local or FOSS models) seem to have this dynamic, to the degree that I've suspected astroturfing.
So it's refreshing to see that maybe it's just a coincidence or confirmation bias on my end.
Subsidizing is the opposite of competing. It's literally the practice of underpricing your product to box out competition. If everyone were competing on a level playing field, they would all price their products above cost.
All these tech oligarch asshat companies need to be regulated to hell and back.
Big players operating at loss to distort the market is not a good thing overall.
- Claude: Good for ~20 minutes of work once every 4 hours
- Codex: Good for however long I want to use it.
Claude nerfed their product so that it's not usable, so I use something else. The Anthropic 20 USD plan would more or less be a non-starter for agentic development, at least for the projects I work on, even when working on only a single codebase or task at a time (I usually do 1-3 at a time).
I would be absolutely bankrupt if I had to pay per-token. That said, I do mostly just throw Opus at everything (though it sometimes picks Sonnet/Haiku for sub-agents for specific tasks, which is okay), so it's probably not a 100% optimal approach, but I've wasted too much time and effort in the past on sub-optimal (non-SOTA) models anyway. I wonder which is closer to the actual cost and how much subsidizing is going on.
For me, $20 a month is more than I want to spend, so I just use the free tiers. If I use AI in an app or site, I mostly use older models like GPT-3.5. The challenge is more fun, and it means I can do more, like make 100x more API calls.
Which makes it even more of a shame that Sam Altman is such a psychopathic jackass.
This is normal behavior and not a cause for such a hyperbolic response.
Pricing your product unsustainably vs a competitor to gain market share is regarded as "bad competition" and has historically been seen as anticompetitive.
It does not benefit the consumer in the long run, because the goal is to use your increased funding or cash reserve to wipe your competition out of the market, decreasing competition in the long term.
Then, once your competition is gone, and you've entrenched yourself, you do a rug pull.
> To help you go further with Codex, we’re introducing a new €114 Pro tier designed for longer, high-intensity sessions.
> At launch, this new tier includes a limited-time Codex usage boost, with up to 10x more Codex usage than Plus (typically 5x).
> As the Codex promotion on Plus winds down today, we’re rebalancing Plus usage to support more sessions across the week, rather than longer high-intensity sessions on a single day.
We need to force them back into being providers of commodity services and put an end to this assumption that they can mold things in real time.
When OpenAI snatched those contracts, it made me think no worse of OpenAI. The surveillance was already factored into how I saw them (both).
It's because they don't support OpenCode.
They're doing a slow rollout
One main thing is to de-couple the repos from specific agents e.g. use .mcp.json instead of "claude plugins", use AGENTS.md (and symlink to CLAUDE.md) and so on.
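A sketch of that decoupling setup (the AGENTS.md/CLAUDE.md file names come from the comment above; the .mcp.json contents and server name are purely illustrative):

```shell
# Work in a throwaway directory standing in for a repo root.
cd "$(mktemp -d)"

# One canonical, vendor-neutral instructions file...
printf '# Project conventions\n- run tests before committing\n' > AGENTS.md

# ...with CLAUDE.md as a symlink so Claude Code reads the same content.
ln -s AGENTS.md CLAUDE.md

# Project-scoped MCP servers declared in plain JSON, not a vendor plugin
# system (server name and command here are hypothetical examples).
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "example": { "command": "npx", "args": ["-y", "some-mcp-server"] }
  }
}
EOF

ls -l CLAUDE.md
```

With this layout, switching agents means pointing a different tool at the same repo; nothing agent-specific lives outside these files.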
I love this because I have absolutely 0 loyalty to any of these companies and once Anthropic nerfs I just switch to OpenAI, then I can switch to Google and so on. Whichever works best.
But the real issue I ran into wasn't which model is better. It's that every time I switched, I lost weeks of accumulated context. The AI didn't know my project's conventions anymore, didn't remember the architecture decisions, didn't know what was tried and rejected.
What helped me was separating the project context from the tool. Keep the conventions, rules, and decisions in plain files in the repo. Both Claude Code and Codex can read them at session start. Then the question becomes "which model is sharper this week" instead of "can I afford to lose my context."
The answer to your question: it's mostly a wash on capability. The real cost of switching is the context you don't realize you're rebuilding.
...at least for my account, the speed mode is 1.5x the speed at 2x the usage
They have AGI now?
Without 3rd party tools/plugins.
I am speechless every time I see posts like this and the comments following. Vote with your behavior: stop supporting and enabling the Peter Thiel universe. Just a few weeks ago we had an op-ed about OpenAI and Sam. Look into yourselves and really reflect on whom you are enabling by continuing to contribute to their baseline.
Truly, I don't expect morality, and I'm not even making the moral argument not to use it, tbh, as I consider morality to be a double-edged sword.
Yet I wish there were at least some base sensibility, some common sense, or at the very least some self-accountability in the actions we take as people in tech, as they transform and influence the world around us.
It's clear that things will go in this direction, but Anthropic announced managed agents just a week ago, and this, with all the built-in connections and tools, will help so many non-technical people do a lot more, faster and better.
I'm waiting for the open-source AI ecosystem to catch up :/