My subscription is only $10 a month, and it has unlimited inline suggestions. I just wonder if I’m missing anything.
Also means you don't have to deal with Cursor's busted VS Code plugins due to licensing or forking drift (e.g. Python IntelliSense, etc.)
I think you misunderstand "swarms of agents", based on what you say above. An agent swarm, in my understanding (and checked via a Google search), does not imply working on multiple features at one time.
It is working on one feature with multiple agents taking different roles on that task. Like maybe a Python expert, a code simplifier, a UI/UX expert, a QA tester, and a devil's advocate working together to implement a feature.
How can multiple parallel agents, some local and some in the cloud, be working on a single task?
How can:
> All local and cloud agents appear in the sidebar, including the ones you kick off from mobile, web, desktop, Slack, GitHub, and Linear. (From the announcement, under “Run many agents in parallel”)
…be working on the same task?
Subagents are different, but the OP is not confused about what Cursor is pushing, and it is not what you describe.
I think the people doing multiple brain threads at once are doing that because the damn tools are so fucking slow. Give it a little while and I’m sure these things will take significantly less time to generate tokens. So much so that brand new bottlenecks will open up…
I bring it up not to be pedantic, but because if you think it implies multi-tasking and dismiss it, you are missing out on its ability to help in single-tasking.
It's become the "no u r" argument of the AI age... :/
Good for you! Personally waiting for one agent to do something while I shove my thumb up my butt just waiting around for it to generate code that I'll have to fix anyway is peak opposite of flow state, so I've eagerly adopted agents (how much free will I had in that decision is for philosophers to decide) so there's just more going on and I don't get bored. (Cue the inevitable accusations of me astroturfing or that this was written by AI. Ima delve into that one and tell you there was not. Not unless you count me having stonks in the US stock market as being paid off by Big AI.)
Curious to know more about your work:
Are your agents working on tangential problems? If so, how do you ensure you're still thinking at a sufficient level of depth and capacity about each problem each agent is working on?
Or are they working on different threads of the same problem? If so, how do you keep them from stepping on each other's toes? People mention git worktrees, but that doesn't solve the conflict problem for multiple agents touching the same areas of functionality (i.e. you just move the conflict problem to the PR merge stage)
It's easier when I have 10 simple problems as a part of one larger initiative/project. Think like "we had these 10 minor bugs/tweaks we wanted to make after a demo review". I can keep that straight. A bunch of agents working in parallel makes me notably faster there, though actually reviewing all the output is still the bottleneck.
It's basically impossible when I'm working on multiple separate tasks that each require a lot of mental context. Two separate projects/products my team owns, two really hard technical problems, etc. This has been true before and after AI - big mental context switches are really expensive and people can't multitask despite how good we are at convincing ourselves we can.
I expect a lot of folks' experience here depends heavily on how much of their work is the former vs. the latter. I also expect that there's a lot of feeling busy while not actually moving much faster.
Also, I have noticed, strangely, that Claude is noticeably less compliant than GPT. If you ask a question, it will answer and then try to immediately make changes (which may not be related). If you say something isn't working, it will challenge you and insist it was tested (it wasn't). For a company that seems to focus so much on ethics, they have produced an LLM that displays a clear disregard for users (perhaps that isn't a surprise). Either way, it is a very bad model for "agent swarm" style coding. I have been through this extensively: it will write bad code that doesn't work in a subtle way, it will tell you that it works and that the issues relate to the way you are using the program, and then it will do the same thing five minutes later.
The tooling in this area is very good. The problem is that the AI cannot be trusted to write complex code. Imo, the future is something like Cerebras Code, which offers a speed-up for single-threaded work. In most cases, I am just being lazy... I know what I want to write, I don't need the AI to do it, and I am seeing that I am faster if I just single-thread it.
Only counterpoint to this is that swarms are good for long-running admin, housekeeping, etc. Nowhere near what has been promised but not terrible.
It's far closer to being a project manager than it is to being a solo developer.
Unrelated problems simultaneously in the same git tree. Worktrees are unnecessary overhead if the areas they're working in are disjoint. My Agents.md has instructions to commit early and often instead of making one giant commit at the end; otherwise it wouldn't work.
> how do you ensure you're still thinking at a sufficient level of depth and capacity about each problem each agent is working on?
The context switching is hell and I have to force myself to dig deep into the MD file and understand things and not just rubber stamp the LLM output. It would be dishonest of me to say that I'm always 100% successful at that though.
This is in no way comparable to the "flow" state that programmers sometimes achieve, which is reached when the person has a clear mental model of the program, understands all relevant context and APIs, and is able to easily translate their thoughts and program requirements into functional code. The reason why interrupting someone in this state is so disruptive is because it can take quite a while to reach it again.
Working with LLMs is the complete opposite of this.
I see how people think it's more productive, but honestly I iterate on my code like 10-15 times before it goes into production, to make sure it logs the right things, communicates intent clearly, the types are shared and defined where they should be, it's stored in the right folder, and so on.
Whilst the laziness to just pass it to CC is there, I feel more productive writing it on my own, because I go in small iterations. Especially when I need to test stuff.
Let’s say I have to build an automated workflow, and for step 1 alone I need to test error handling, max concurrency, idempotency, proper logging, and proper intent communication to my future self. Once I’m done I never have to worry about this specific code again (OK, some errors can be tricky, to be fair); often this function is practically just my thought, available whenever I need it. This only works with good variable naming and good spacing of a function. Nobody really talks about it, but if a very unimportant part takes up a lot of space in a service, it should probably be refactored into a smaller service.
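A minimal sketch of what such a step-1 unit might look like (hedged: the payload handling, retry count, and in-memory idempotency store are all illustrative stand-ins, and real max-concurrency control is omitted for brevity):

```python
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

# In-memory stand-in for a real idempotency store (e.g. a DB table keyed on hash).
_processed: set[str] = set()


def idempotency_key(payload: str) -> str:
    """Derive a stable key so retries of the same input become no-ops."""
    return hashlib.sha256(payload.encode()).hexdigest()


def run_step(payload: str, max_retries: int = 3) -> str:
    """One workflow step: idempotent, retried with backoff, and logged."""
    key = idempotency_key(payload)
    if key in _processed:
        log.info("skipping duplicate payload %s", key[:8])
        return "skipped"
    for attempt in range(1, max_retries + 1):
        try:
            result = payload.upper()  # stand-in for the real work
            _processed.add(key)
            log.info("step ok on attempt %d", attempt)
            return result
        except Exception:
            log.warning("attempt %d failed, retrying", attempt)
            time.sleep(0.01 * attempt)  # tiny backoff for the sketch
    raise RuntimeError("step failed after retries")
```

The point is that each of these concerns (the key derivation, the retry loop, the log lines) is a decision your future self can read back directly from the unit.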
The goal is to have a function that I probably never have to look at again, and if I do, it answers as fast as possible all the questions my future self would ask when he’s forgotten what decisions needed to be made or how the external parts work. When it breaks I know what went wrong, and when I run it in an orchestration I have the right amount of feedback.
Like others, I could go on at length about this, and I’m aware of the other side of the coin, over-engineering, but I just feel that having solid composable units actually enables you to later build features and functionality that might be a moat.
Slow, flaky units are far less likely to become an asset.
And even if I let AI draft the initial flow, honestly the review will never be as good as the step by step stuff I built.
I have to say AI is great for improving you as a developer: double-checking you, answering broad questions before things get too detailed and you need to experiment or read docs. It helps cover all the basics.
I have followed the usual autocomplete > VS Code sidebar copilot > Cursor > Claude Code > some orchestrator of multiple Codex/Claude Codes.
I haven’t experienced the flow state once in this new world of LLMs. To be honest it’s been so long that I can’t even remember what it felt like.
I spend that time watching it think and then contemplating the problem further since often, as deep and elaborate as my prompts are, I've forgotten something. I suspect it might be different if you are building something like a CRUD app, but if you are building a very complicated piece of software, context switching to a new topic while it is working is pretty tough. It is pretty fast anyway and can write the amount of code I would normally write in half a day in like 15 minutes.
I don't know how you'd manage a "swarm" of agents without pre-approving them all. When one has a diff, do you review it, and then another one comes in with an unrelated diff, and you context switch and approve that, then a third one comes in with a tool use it wants to do... That sounds absolutely exhausting.
The only way I can imagine needing to run multiple agents in parallel for code gen is if I’m just not reviewing the output. I’ve done some throwaway projects where I can work like that, but I’ve reviewed so much LLM generated code that there is no way I’m going to be having LLMs generate code and just merge it with a quick review on projects that matter. I treat it like pair programming where my pair programmer doesn’t care when I throw away their work.
I’m guessing it was downvoted by the masses, but at the same time I’d like the choice to be able to read it; I’m not that into what the general public thinks about something.
I’m getting into downmaxxing at this point. I love that you have to earn being negative on this site. Give it to me.
I've been trying to use Claude Code seriously for over a month, but every time I do it, I get the impression that it would take me less work to do with Cursor.
I'm on the enterprise plan, so it can get pricey. This is why I used to stick mostly to auto mode.
Now Composer 2 has taken over as my default model. It is not as intelligent as OpenAI's or Anthropic's flagship models, but I feel it has as good as or better intuition. With way better pricing. It can get stuck in more complex tasks though.
Being able to get in the loop, stop and instruct or change models makes all the difference. And that is why I've stayed in the editor mode until now. Let's see if 3.0 changes that.
Anyway, as a result, I switched to Claude Code Max and I am equally as prolific while paying 1/10th the price. I get to have my cake and eat it, too. *Note there’s a Cursor Ultra, which at a quick glance seems akin to Claude Code Max. Notice that both are individual plans; I believe I’m correct that you benefit token-wise from choosing those over a team or enterprise plan?
Anyway, you’re right: Claude Code is less ergonomic and generally slower. I was losing my mind over Opus in Cursor spinning up subagents. I don’t notice that happening nearly as frequently in Claude Code itself; I think it has to do with my relatively basic configuration. CC keeps getting better the more context I feed it, though, which is stuff like homegrown linters to enforce architecture.
All to say, Cursor’s pricing model is problematic and left a bad taste in my mouth. Claude Code seems to need a bunch of hand-holding at first to be magical. Pick your poison.
The secret in my experience is parallelization - Cursor might be faster or have better ergo for a single task, but Claude Code really shines when you have 6 tasks that are fairly independent.
If you treat CC as just another terminal tool and heavily use git worktrees, overall productivity shoots through the roof. I've been using a tool called Ouijit[1] for this (disclosure: the dev is an old colleague of mine), and I genuinely do not think I could go back to using Cursor or any other traditional IDE+agent. I barely even open the code in an editor anymore, primarily interacting through the terminal, with Vim when I need to pull the wires out.
Claude Code with Opus/Sonnet is the L7 senior engineer still gunning for promotion. Hasn't hit burnout, hasn't been ground down by terrible teams yet. Capable and willing to get their hands dirty. Codex (the harness) with GPT-5.4 or 5.3-codex is fantastic but terse. Some of the UX frustrates me. I want a visual task list. That said, Codex as a harness is phenomenal. Think of it as the senior EM / CTO-in-waiting who won't write new code without complaining and nitpicking for hours. But they'll thoroughly tear your code apart and produce a plan you can execute yourself or pass to Claude Code.
Both are great, and so is Factory Droid. Also worth checking out Compound Engineering from Every.to if you haven't.
most tasks I can do better and faster with composer 2
A fellow engineer reported a bug in code I had written a few months back. I used his report as the prompt for composer 2, gpt-5.4-high, and claude-4.6-opus-max-thinking. Composer found the issue spot on. GPT found another possible vector a couple of minutes later, but a far less likely one, and one that would eventually self-heal (thus not actually reproducing what we observed in production). Claude had barely started when the other two had finished.
Also, I don't have a budget per se, but it is expected that I over-deliver if I'm overspending.
I'm now switching between Claude and Codex for less than 1/4 of what I was spending in December.
Which sucks because Cursor is clearly better than Anthropic at building UIs. CC desktop is buggy af.
Codex is nearly Opus level though. Anyone know if OpenAI permits Max subs to be used in Cursor?
I feel like this design direction is leaning more towards a chat interface as a first class citizen and the code itself as a secondary concern.
I really don't like that.
Even when I'm using AI agents to write code, I still find myself spending most of my time reading and reasoning about code. Showing me little snippets of my repo in a chat window and changes made by the agent in a PR-style view does not help with this. If anything, it makes it more confusing to keep the context of the code in my head.
It's why I use Cursor over Claude Code, I still want to _code_ not just vibe my way through tickets.
It's a very tough spot they're in. They have a great product in the code-first philosophy, but it may turn out it's too small a market where the margins will just be competed away to zero by open source, leaving only opportunity for the first-party model companies essentially.
They've obviously had a go at being a first-party model company to address this, but that didn't work.
I think the next best chance they see is going in the vibe-first direction and trying to claim a segment of that market, which they're obviously betting could be significantly bigger. It's faster changing and (a bit) newer and so the scope of opportunity is more unknown. There's maybe more chances to carve out success there, though honestly I think the likeliest outcome is it just ends up the same way.
Since the beginning people have been saying that Cursor only had a certain window of time to capitalise on. While everyone was scrambling to figure out how to build tools to take advantage of AI in coding, they were one of the fastest and best and made a superb product that has been hugely influential. But this might be what it looks like to see that window starting to close for them.
Sometimes u need the beef of Opus, but 80% of the time Composer is plenty.
Now you’ve got me thinking I should give Composer another go, because speed can be pretty darn great for more generic, basic tasks.
It's a very tough spot they put themselves into. If the goal wasn't to get filthy rich quick it would probably be possible to make a good product without that tough spot.
Nothing lives forever. The life of a product is short and over in the blink of an eye.
They're playing this game optimally for their present station.
Slow coding an IDE? We might not even have IDEs in six years.
It’s the “why can’t Facebook just show me a chronological feed of people I follow”. Because it’s not in their interests to do so.
God forbid you make a great product in a specific niche and be happy with the money flowing in.
Nope, has to be more.
I thought there was an entire initiative to build their own coding model, and the fine-tunes in Composer 1.5 and Composer 2 were just buying them time and training data.
A company makes a popular product customers like, but to satisfy the VCs the company must make a product the customers don’t like but could make the VCs more money.
Not sure this is the “invisible hand” Adam Smith had in mind.
The problems that Cursor is facing are a direct result of the choices its founders freely made previously.
This is why Zed's direction felt pretty strong to me. Unfortunately their agentic features are kind of stagnating and the ACP extensions are riddled with issues.
edit: https://devswarm.ai
I'm hoping in this new UI in v3 I can still get that experience (maybe it's just hidden behind a toggle somewhere for power users / not shown off in the marketing materials).
> I wish they'd keep the old philosophy of letting the developer drive and the agent assist. Even when I'm using AI agents to write code, I still find myself spending most of my time reading and reasoning about code.
We very much still believe this, which is why even in this new interface, you can still view/edit files, do remote SSH, go to definition and use LSPs, etc. It's hard to drive and ship real changes without those things in our opinion, even as agents continue to get better at writing code.
> I'm hoping in this new UI in v3 I can still get that experience (maybe it's just hidden behind a toggle somewhere for power users / not shown off in the marketing materials).
This new interface is a separate window, so if you prefer the Cursor 2 style, that continues to exist (and is also getting better).
That's good to hear, I might have jumped a little too quickly in my opinion. It's a bit of a Pavlovian response at this point seeing a product I very much love embrace a giant chat window as a UX redesign haha.
I would love to see more features on the roadmap that are more aligned with users like us that really embrace the Cursor 2 style with the code itself being the focal point. I'm sure there's a lot you can do there to help preserve code mental models when working with agents that don't hide the code behind a chat interface.
I don't think there is an in-between. It's really hard to 'keep an eye' on code by casually reading diffs. Eventually it will become vibe coding.
Software engineers are deluding themselves with spec-driven development, plans, PRDs, whatever nonsense, and thinking it's not vibe coding.
Reading diffs is an inescapable skill, needed for evaluating any kind of PR. This just makes it more interactive.
I just use Copilot with VS Code, but my flow is to just ask Claude to make a change across whatever files it needs to touch, then either accept the changes, edit the changes directly, or clarify whatever was different from my expectations.
Reading diffs is central to how I work with these agents.
I thought it was primarily a user of Anthropic and OpenAI APIs, so the fewer tokens you use to accomplish a task, the higher their margin.
That's basically it. You can review changes afterwards, but that's not the main point of Claude Code. It's a different workflow. It's built on the premise: given a tight and verifiable plan, AI will execute the actual coding correctly. This will work, mostly, if you use the very best models with a very good and very specific harness.
Cursor, same as Copilot, has been used by people who are basically pair programming with the AI. So, one abstraction down.
I have no idea what is better, or faster. I suspect it depends at least on the problem, the AI, and the person.
This is not really true anymore.
Cursor has better cloud agents than Claude. The multi-agent experience is better, the worktree management is better. Tagging specific code or files in chat is better.
It's hard for me to express the level of pain and frustration I feel going from Cursor to Claude / Conductor+Claude / Claude Extension for VS Code, Claude in Zed, etc.
Really hoping Claude puts more energy into Cowork as a competitor for Cursor and Codex.
Then when you go back to Cursor it will still support all of those things in the settings.
Using Cursor you tend to not think about those as much since Cursor does a lot of it for you as part of the IDE integration. But it's good to refine it your own way.
But for the most part there isn't much difference.
Zed plus Claude feels more like using isolated browser extensions instead of something that's part of the browser (unless you pay for Zed's AI thing, in which case the integration is marginally better).
And management everywhere is convinced that that's what they are paying for. My company is replacing job titles with "builder". Apparently these tools will make builders out of paper pushers hiding in corporate bureaucracy. I am suddenly the same as them now, per my company's management.
These models are infinitely more effective when piloted by a seasoned software engineer and that will always be the case so long as these models require some level of prompting to function.
Better prompts come from more knowledgeable users, and I don't think we can just make a better model to change that.
The idea we're going to completely replace software engineers with agents has always been delusional, so anchoring their roadmap to that future just seems silly from a product design perspective.
It's just frustrating Cursor had a good attitude towards AI coding agents then is seemingly abandoning that for what's likely a play to appease investors who are drunk on AI psychosis.
Edit: This comment might have come off more callous than I intended. I just really love Cursor as a product and don't want to see it get eaten by the "AI is going to replace everything!" crowd.
The truth is absolutely nobody knows how this will all shake out.
Now we have 3 ways of coding:
* vim / emacs - full manual
* VSCode / IntelliJ - semi-automatic
* ClaudeCode/Codex/OpenCode/... - fully automated
Cursor can't stay in between
Are you saying they can’t compete with VS Code in the semi-automatic space?
VSCode is open source and ahead, and getting lots of contributions from different companies.
On the other hand, you have JetBrains with a specific expertise in JVM based dev environments, it's possible to compete with them, but very time consuming
They had better focus on one thing and win over the developers; otherwise they will lose (and are losing) to Claude Code and Codex on one side, and to JetBrains and VSCode on the other.
Better to focus
In both cases, it's your processes (automated testing, review, manual QA) that are the bulwark against bugs and issues.
With AI, you can set up great processes like having it check every PR against the source code of your dependencies or having it generate tests for what's an intermediate step or ephemeral solution that you would never write tests for if you had to do it yourself.
There's this idea on HN that if you delegate too much to AI, you get worse code. Presumably not appreciating all the code-improving processes you can delegate to AI, particularly processes you were never doing for hand-written code.
Compiled code is not perfect either. But who hand-writes assembler anymore? Yes, an LLM is another layer; the output may be ugly and slower, but it's much faster to use.
If you've dug in sufficiently on plan mode, then what the agent is executing is not a surprise and shouldn't need input. If it does, the plan was insufficient and/or the context around the request (agents.md, lessons.md, or whatever tools and documents you use) wasn't sufficient.
EDIT: Maybe it doesn't work in cursor, but I continue to use vscode to review diffs and dig in on changes.
You and I want this. My EMs and HoEs and execs do not. I weep for the future of our industry.
I use Cursor because agents are not ready to be the ones driving. I need to drive. I still need to understand all the code (and easily browse it) and keep a close watch over what the AI is doing.
Ignoring the fact that software will just keep getting more and more complex and interconnected... there will always be a new frontier of code and UX.
That's because that's exactly where we're headed, and it's fine.
We needed that jump, there were still floppy disk icons
With Claude Code, I use GitLab for reviewing code, and then I let Claude pull the comments.
It looks like the new UI has a big focus on multiple agents. While it feels wrong, the more you split up your work into smaller merge requests, the easier it is to review the work.
Chat first is the way to go, since you want the agent busy making its code better. Let it first make plans and come up with different ideas, then after coding let it make sure it fully tests that everything works. I can keep an agent occupied for over an hour with e2e tests, and it’s only a couple hundred lines of code in the end.
Maybe I'm not a 10x developer, I'm fine with that.
Cursor shoving Agents down my throat made me abandon and cancel it once this year. I jumped around between Sublime, Zed, VS Code, and alas none of them has a Tab completion experience that even remotely compares with Cursor, so I had to switch back.
If possible, I'll probably stay on v2 until it's deprecated. Hope Zed catches up by that time.
The crazy thing is that it's a diffusion-based LLM. That makes it very fast, like Cursor Tab, and the outputs seem very accurate in my limited testing (although I find Cursor Tab to still feel "like 10% better")
---
That said, you should really give agentic coding a la Claude Code a try. It's gotten incredibly good. I still need to check the outputs of course, but after using it for 2-3 days, I learned to "think" about how to tackle a problem with it, much like I had to learn when first picking up programming.
Once I did, suddenly it didn't feel risky and weird anymore, because it's doing what I would've done manually anyways. Step by step. It might not be as blackboxy as you think it is.
Define the interface and functions and let the AI fill in the blanks.
Eg: I want XYZ class with methodFoo and methodBar which connects to APIabcd and fetch details. Define classes for response types based on API documentation at ...., use localLibraryXYZ for ABCD.
This is the way I found to work well for me. I maintain a tight grip over the architecture, even the low level architecture, and LLM writes code I can't be bothered to write.
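As a hedged illustration of that pattern (all the names here, XYZClient, method_foo, Detail, etc., are placeholders echoing the commenter's example, not a real API), the human-written part can be just the contract, with the bodies left for the LLM to fill in:

```python
from dataclasses import dataclass


# Human-defined contract: the types and signatures pin down the architecture.
@dataclass
class Detail:
    """Response type, shaped to match the (hypothetical) API documentation."""
    id: int
    name: str


class XYZClient:
    """Connects to a hypothetical API and fetches details."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def method_foo(self, item_id: int) -> Detail:
        """Fetch one detail record. (Body left for the LLM to fill in.)"""
        raise NotImplementedError

    def method_bar(self) -> list[Detail]:
        """List all detail records. (Body left for the LLM to fill in.)"""
        raise NotImplementedError
```

With the skeleton fixed, the prompt becomes "implement these two methods against the documented API", which constrains the LLM to your architecture rather than letting it invent one.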
I find tab completions very irritating. They're "almost" correct but miss some detail. I'd rather review all of that at once rather than when writing code.
Just as it is faster to take notes on a laptop than to write them by hand, it is faster and cheaper to have an agent give you the code you want than to actually type it out manually.
Are we, as an industry, really limited by "not typing fast enough"?
As many have pointed out, the cost of token via Cursor is prohibitive compared to having a CC or Codex subscription, so I think the new update brings little to current users, but reduces Cursor's usability.
I think Cursor should go in the direction of embracing other providers' extensions and go for a more integrated and customizable IDE, rather than a one-solution-fits-all kind of approach. Today I opened VSC again after a long time.
They're also churning enterprise customers, because for lots of customers the pricing on the next contract renewal is increasing something like 4-8x (depending on usage patterns, but that's what we calculated for most of our devs). They are slowly moving enterprise customers to usage-based pricing only, plus a surcharge per million tokens (which they already did with personal subscription customers), and all of the latest models are becoming Max-mode only. My company is currently going through this; we've committed to way less spend with Cursor for our renewal, are signing with Anthropic, and are telling devs to prefer Claude Code instead. I wouldn't be surprised if next year we cancel altogether and tell all devs to go back to VSCode or some other preferred editor.
I don't see a world where Cursor continues to be viable for 5-10 more years. Lots of people were originally saying "the moat is not in being a model provider" for agentic tools, and that's turning out to be very much false, in my opinion, at least if you care about being a business.
2. Cursor's UI allows you to edit files, and even have the good old auto-complete when editing code.
3. Cursor's VSCode-based IDE is still around! I still love using it daily.
4. Cursor also has a CLI.
5. Perhaps more importantly, Cursor has a Cloud platform product with automations, extremely long-lived agents and lots of other features to dispatch agents to work on different things at the same time.
Disclaimer: I'm a product engineer at Cursor!
Cursor is an IDE and an agentic interface and a CLI tool and a platform that all work locally and in the cloud and in the browser and support dozens of different models.
I don't know how to use the thing anymore, or what the thing actually is.
I applaud Cursor for experimenting with design, and seeing if there are better ways of collaborating with agents using a different type of workspace. But at the moment, it's hard to even justify the time spent kicking the tires on something new, closed source and paid.
Cursor was the tool you use to pair program with AI. Where the AI types the code, and you direct it as you go along. This is a workflow where you work in code and you end up with something fundamentally correct to your standards.
Claude Code is the tool you use if you want to move one abstraction layer up - use harness, specs, verifications etc. to nail down the thing such that the only task left is to type in the code - a thing AI does well. This is a workflow where the correctness depends on a lot of factors, but the idea is to abstract one level up from code. Fundamentally, it would be successful if you don't need to look at code at all.
I think there is not enough data to conclusively say which of these two concepts is better, even taking into account some trajectory of model development.
I do feel that any reason I have for installing Cursor is that I want to do workflow 1, rather than workflow 2. Cause I have a pretty comprehensive setup of claude code (or opencode, or whatevs) and I think it does everything you list here.
So, as a product engineer, you probably wanna mention why it matters that Cursor UI allows you to edit files with auto-complete.
And I would happily pay a seat based subscription fee or usage fees for cloud agents etc on top of this
Unfortunately very locked into these heavily subsidized subscription plans right now but I think from a product design and vision standpoint you guys are doing the best work in this space right now
For $20 a month, I can plan and implement thousands of features using Composer 2 or Auto with Cursor. The usage limits are insanely higher. Yes, the depth of understanding is not Opus 4.6, but most work doesn't need that. And the work that does need it I pass to Claude.
I can code 8 hours a day using LLMs as my primary driver spending just $40 a month.
I am not saying this in bad faith. Model companies cannot penetrate every niche with the same brand recognition as some other companies you would consider as "API resellers" do.
These are features I am sure Codex will soon have, of course.
Then there is the advantage of multiple models: run a top-level agent with an expensive model that then kicks off other models that are less expensive. You can do this in Claude Code already (I believe), but obviously there you are limited to something like Haiku.
One of my favorite startups, and one I genuinely like to keep subscribing to.
Still curious which ones will survive when the AI gold diggers finally settle.
If MS ever decided to discontinue VS Code or relicense it, there would be blood in the water. I guarantee you there would be multiple compelling competitors in under a year and probably a new open source winner with consolidation in 5.
So to answer your question: they would be forking Atom (which I think would’ve won otherwise).
Atom was far slower than VS Code, despite both of them being built on Electron. I wouldn't have used Atom, but I use VS Code.
It is entirely possible that some other closed-source editor with a superior package/extension system would have won, or the "war" would have been postponed until Rust was ready enough for Zed to come along.
Not saying there aren't people who care, there are, but they are a small minority.
I haven't used it in a decade; I'm sure it has evolved.
Nowadays I'm basically a Neovim purist, but I have positive memories of it. I'm kind of afraid to revisit it at this point, though, since everyone hates on it and I suspect I wouldn't like it as much.
Can’t say I miss Eclipse, but a lot of the VS Code extensions seem to utilize old legacy Eclipse stuff and have the bugs to match.
IMO that sounds like a natural foundation for Cursor.
Personally, I keep trying OpenCode + Opus 4.6 and I don't find it that good. I mean, it does an OK job, but the code is definitely lower quality, and at the moment I care too much about my codebase to let it grow into slop.
The next generation of interfaces are not going to look like an evolution into minimalist text editor v250. This is like people iterating on terminals before building native or web applications.
TUIs blow most modern web apps out of the water in terms of UX
I’ve been using AI coding since GitHub Copilot was in beta, have used all the IDEs on the market, and had very few occasions when I passed the $20 subscription limit. And when I did, that was when I decided to move from Cursor to CC and Codex; still, using them every day, I haven’t had to go above my limits.
That meant we could go from 17 minutes on 32 cores to 5 minutes on a few hundred. And because it's distributed compilation, we don't have to provision each developer with an overpowered build machine they won't be using most of the time.
It could also eliminate our CI backlog thanks to autoscaling. Across a few hundred engineers building this codebase, that's probably a few thousand hours of waiting per week.
This took me about 2 weeks as someone who graduated 9 months ago. Most of the tokens were spent in several hour-long debugging sessions related to distributed-systems networking and tracing through gRPC logs, because the system wasn't working until it did.
I think I'd need several years of experience and 6 months as a full time engineer to have accomplished the same thing pre-AI.
Since I work at a semiconductor company near Toronto there's nobody around with the distributed systems experience to mentor me. I did it mostly on my own as a side project because I read a blog post. I literally wouldn't have been able to complete this without AI.
I'm sure the actual solution is terrible compared to what a senior developer with experience would've created. But my company feels like it's getting ROI on the token spend so far even though it's double my salary.
But I do check the generated code to make sure it doesn’t go bananas. I wouldn’t do multiple features at the same time, as I have no idea how people are checking the output after that.
I like AI coding and it has accelerated my work, but I wouldn’t trust the output blindly.
The biggest downside for me with Cursor was losing access to gated Microsoft extensions like Python and C#. Even when vibing there are times you will still need a debugger or intellisense.
I note in the comments lots of people saying they are moving back and this latest move looks like the final nail in the coffin for Cursor.
These AI companies are running out of ideas and are desperate. I can't imagine investing in companies that are 3 months behind open-source alternatives, with a target audience that's the most experimental kind there is.
Looks pretty though.
I think it's a really solid release, and while cursor seems to have fallen out of the "cool kids club" in the past three months it remains the most practical tool for me doing AI-first work in a large production code base. The new UI works better in a world where agents are doing most of the work and I can hop back into the IDE interface to make changes.
We've set up a Linear integration where I can delegate simpler tasks to cloud agents, and the ability to pick that work up in Cursor if I need to go back and forth is a real productivity boost. The tighter integration with cloud agents is something I've been hoping for recently.
I appreciate not being tied at the hip to one model provider, and have never loved doing most of my work from the command line. I was on vs code + meta's internal fork of it for years prior to the current AI wave, so that was a pretty natural transition. I'm pretty optimistic on cursor's ability to win in the enterprise space, and think we're going to see open source models + dev tools win with indie devs over things like claude code as costs start getting passed down more and the gap between frontier models and open source gets tighter.
Quite honestly, I've turned off almost all of the LLM features in Cursor. No more tab completion. No more agents for little changes. This week, the only code I wrote with agents was low-stakes front end code for our admin panel. Everything else was organic, free range, human-written code. And it's the first time in months I've felt this good about my job. Agents suck the soul out of programming for me by giving a few cheap dopamine hits.
Truth be told, if Cursor removes the vs code bits, I'll probably see what Nova is like, or what Sublime has been up to. Or maybe kick the tires on Zed.
This change is possibly too big, and unless all my existing usage patterns are maintained or improved, I’ll likely give CC a try now. Not optimistic.
I should also add: Cline doesn't require any account at all. I just installed the extension and added my OpenRouter API key and that was it.
The thing I've noticed is that Cursor was better at producing good results from a really shitty prompt.
That said, well-written prompts on pi.dev seem to be outperforming anything I ever tried on Cursor. That may just be me, but it's what I've noticed in my work.
This week I had 4 different agents, each with subagents, all working on different tasks. Mostly greenfield work. My feedback was mostly nitpicky. I was pretty damn impressed.
I like the option for different models that I just don't get with Claude Code. I want an IDE to monitor files and understand the code, not just see snippets (I know that there is still the Editor view in Cursor but with the push towards the Agent view I feel it's headed into a Conductor direction and personally I'm not ready for that).
Worth noting, a few weeks ago we got hit with $2500 of unauthorized usage during the weekend. We stopped using it because of security concerns, no 2FA, and some risky defaults: “Only Admins Can Edit Usage Settings” is off by default.
Hard to trust in a team setting without stronger safeguards.
Makes no sense to me, the main driver of codex, Claude code, etc.. seems to be fixed cost plans that offer reduced token cost. Cursor doesn’t have a good model so they can’t offer that (at least not to the same extent).
However, is this really a moat?
What's the pitch for using Cursor nowadays?
The biggest concern is that if you want to use SOTA models, I don’t see how they can match what you get with the subscription plans from Anthropic and OpenAI, whether you’re spending $20 or $200 a month.
Even if they could match what you get in terms of token quantity, they are giving their tools away for free for the foreseeable future and Cursor is not.
Nerve-wracking race.
I think I'll switch over to Cursor on a trial basis.
So while the Cursor AI is great, especially for reviewing generated code, it just can't compete.
The surprisingly lacking thing for me is that the worktree support is really behind other tools. Conductor/Composer/Superset etc. realized that making the sidebar PR/worktree-focused rather than chat-focused can feel great. But Cursor's worktree support seems underbaked?
I totally preferred the other way, but at some point there's boilerplate or reorganization you just want done, and it doesn't make sense to sit waiting minutes at a time to confirm a few refactors. That literally killed the vibe of Cursor for me.
At work I'm still on the 500 fast requests plan, so I can use quite a few Opus 4.6 requests, but at home my quota is finished after about 14 Opus requests.
For my personal use, I will probably switch to Forge Code or Pi and MiniMax 2.6, GLM 5.1 or Qwen 3.6.
Cursor is getting too expensive.
I don't know what you're talking about. My experience with Cursor (before this new v3) is that new Cursor agent tabs / cloud agents already intelligently manage worktrees to prevent conflicts.
[0] https://forum.cursor.com/t/working-with-worktrees-in-cursor/...
I'm on Version: 2.6.19.
Per https://cursor.com/docs/configuration/worktrees#how-is-this-...
They apparently removed this in 3.0; I couldn't begin to guess why.
"Automatic management of worktrees was removed in Cursor 3.0 and replaced with the new commands /worktree and /best-of-n. We also have added worktree support for the Cursor CLI.
Management of worktrees is now fully agentic. This makes it simpler to support use cases such as starting an agent, and only doing work in a worktree later on in the chat's lifecycle.
/best-of-n makes comparing the results of multiple models much easier. The parent agent will provide commentary on the different results and you can pick the best one. Additionally, you can even ask the parent agent to merge different parts of the different implementations into a single commit.
If you had agents that were previously running in a worktree, those chats will still work. However, you will need to use the new commands to start new agents in worktrees."
Personally I never use the actual IDE, and much prefer Claude code with helix in the terminal.
But are they affordable already for developers who don't earn a Silicon Valley salary? Developers in 3rd world countries?
Your workflow is probably closer to what most SWEs are actually doing.
The only way you're going to let an agent go off on its own to one-shot a patch is if your quality bar is merely "the code works."
Agents today can generate solid code even for relatively complex requirements. However, they don't always make the right trade-offs.
Just because something works doesn't mean it scales. It doesn't mean it can handle unexpected user input. It doesn't mean it's easily extensible.
Today engineers really just need to define those high-level technical requirements.
I'm happy w my VS Code harness which has also improved A LOT just with the last update alone.
This on the other hand feels like a clear reaction to cc/codex, in a way that even kind of builds an offboarding ramp?
Is "Cursor 3" == Glass? I get they feel like their identity means they need to constantly be pushing the envelope in terms of agent UX. But they could stand to have like an "experimental" track and a "This is VS Code but with better AI integration" track.
If I want to mostly direct 1 or more agents I go straight to claude code (codex at home.)
But I still want to have an IDE at the end of the day; I do look at and review the code. I still need to direct it to fix some things it doesn't do properly, and I don't feel like giving up my understanding of the system I work with. Despite what the vibe people say, I don't think that leads to good outcomes or any benefit in the name of speed.
So for me this direction goes against what I find useful in Cursor, and seems entirely aimed at the 10+ agents crowd. Which makes sense; these are the people spending $200+ on subscriptions and so on. I'll go back to Zed + CC or Codex.
By the way their new interface looks just like the Codex App.
Cursor's inline autocomplete is very good though, much better than anything I could reproduce in Zed with various 3rd-party "edit" LLMs (although, checking Google, they've announced a new model since I tried it: https://zed.dev/blog/zeta2).
$ rpm -q cursor
cursor-3.0.4-1775123877.el8.x86_64
https://forum.cursor.com/t/sigsegv-in-zygote-type-zygote-on-...
Apparently if launched with --verbose it works, but that's the same crash I was seeing without the verbose flag.
Good luck explaining all the details to Claude, it tends to ignore them anyway. Like a middle-level SWE, it's too stubborn to appreciate them, and prefers to blast energy (tokens) on shuffling the lines instead of seeing a bigger picture.
Cursor, in contrast, is a highly educated coder that gets you immediately.
I've got an impression that Claude Code is more oriented on unattended development of CRUD applications, while Cursor is more refined and closer to a senior-level SWE/PhD for productive work in pair.
At least before, they were still tangentially an actual developer tool: standard VS Code windows, the code was the point, etc.
Now they offer really nothing interesting for professionals.
The input isn't yours, as it is stolen and re-sold to other people.
The model isn't yours, as it was built with piracy, theft of service, and EULA violations.
What are people doing exactly... outside data-entry for free. =3
For me, there's no way to get into a flow state if I'm thinking about terminal windows and Claude Code. Even before Conductor dropped on our team, I'd been building CLIs to spin up agent sandboxes on worktrees -- but that still required a lot of terminal window management.
My work now is usually:
- 1 hard task (hard to think about more than 1 of these at once), localized to a sandbox, but with multiple agents in different convo threads
- N simpler tasks (usually 4-8). These are usually one-shottable. They're a pleasure to come up with & ship.
I'm thinking about and managing the hard task. When it's cooking for more than 10 seconds, I'm switching to an ez task and pushing them along.
Just like OG coding -- hard to be in a flow state every day. But when it works, you can get an unbelievable amount of work done.
I'll be walking around now, and I'll add voice notes of little tasks or cleanups I want to throw an agent at when I get home. Good products are made of 1000s of small, good decisions -- and now those are free to implement; the slowest part is writing them down as tickets.