It is a VS Code fork. There were some UI glitches, but some of the usability was better. Cursor has some really annoying usability issues - like their previous/next code change indicator never going away, with no way to disable it. The design of this one looks more polished and less muddy.
I was working on a project and just continued with it. It was easy because they import settings from Cursor. Feels like the browser wars.
Anyway, I figured it was the only way to use Gemini 3, so I got started. A fast model that doesn't look for much context. Could be a preprompt issue. But you have to prod it to do stuff - no ambition, and a kind of off-putting attitude, like 2.5.
But hey - a smarter, less context-rich Cursor Composer model. And that's a compliment, because the latest Composer is a hidden gem. Gemini has potential.
So I start using it for my project and after about 20 mins - oh, no. Out of credits.
What can I do? Is there a buy a plan button? No? Just use a different model?
What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.
I switched back to Cursor. And you know what? It had Gemini 3 Pro. Likely a less hobbled version. Day one. Seems like a mistake in the eyes of the big evil companies, but I'll take it.
Real developers want to pay real money for real useful things.
Google needs to not set themselves up for failure with every product release.
If you release a product, let those who actually want to use it have a path to do so.
They force the development team into a huge number of meetings and email threads, which the team must steer itself, just to check off a ridiculously large list of "must haves" that are usually well outside their domain expertise.
The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).
It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.
Buck passed, product launched.
I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in "leadership" roles than in individual contributor roles. So many "leaders", but none with the awareness or the care to notice the farcical volumes such announcements speak.
There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.
The GPM team was hugely passionate about music and curating a good experience for users, but YT leadership just wanted us to "reuse existing video architecture" to the Nth degree when we merged into the YT org.
After literally years of negotiations you got... what YTM is. Many of the original GPM team members left before the transition was fully underway because they saw the writing on the wall and wanted no part of it. I really wish I had done the same.
I didn't even get to try a single Gemini 3 prompt. I was out of credits before my first had completed. I guess I've burned through the free tier in some other app but the error message gave me no clues. As far as I can tell there's no link to give Google my money in the app. Maybe they think they have enough.
After switching to gpt-oss:120b it did some things quite well, and the annotation feature in the plan doc is really nice. It has potential but I suspect it's suffering from Google's typical problem that it's only really been tested on Googlers.
EDIT: Now it's stuck in a loop repeating the last thing it output. I've seen that a lot on gpt-oss models but you'd think a Google app would detect that and stop. :D
EDIT: I should know better than to beta test a FAANG app by now. I'm going back to Codex. :D
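The runaway repetition described above is cheap to catch client-side. A minimal sketch of one approach, checking whether the stream's tail keeps repeating verbatim (function name, thresholds, and chunks are all hypothetical, not from any real app):

```python
# Hypothetical sketch: detect when streamed model output starts looping,
# i.e. its most recent chunk keeps repeating verbatim at the end.

def is_looping(text: str, tail_len: int = 40, min_repeats: int = 3) -> bool:
    """Return True if the last `tail_len` characters of `text`
    occur back-to-back at least `min_repeats` times at its end."""
    if len(text) < tail_len * min_repeats:
        return False
    tail = text[-tail_len:]
    return text.endswith(tail * min_repeats)

# A client could run the check on each streamed chunk and abort generation:
buffer = ""
for chunk in ["step 1 done. ", "retry. " * 10]:  # simulated stream
    buffer += chunk
    if is_looping(buffer, tail_len=7, min_repeats=3):
        break  # stop the runaway generation
```

Real detectors usually also normalize whitespace and check token-level n-gram repeats, but even a character-tail check like this would have stopped the loop the comment describes.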
I complained to it that I had only made one image. It decided to make me one more! Then told me I was out of credits again.
What?! So was it only hallucinating that you were out of credits the first time?
The documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."
On a separate note, I think the UX is excellent and the output I've been getting so far is really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much, but that's really the biggest missing feature right now. That, and I don't like the fact that "Review Changes" doesn't work if you're asking it to modify files that are not in the currently open workspace.
When I downloaded it, it already came with the proper "Failed due to model provider overload" message.
When it did work, the agent seemed great, achieving the intended changes in a React and Python project. In particular, the web app looks much better than what Claude produced.
I did not see functionality to have it test the app in the browser yet.
Google may have won the browser wars with Chrome, but Microsoft seems to be winning the IDE wars with VS Code.
Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.
Even more importantly, though, the more we move towards "I'm supervising a fleet of 50+ concurrent AI agents developing code on separate branches" the more the notion of the IDE starts to look like something you want to be able to launch in an unconfigured cloud-based environment, where I can send a link to my PM who can open exactly what I'm seeing in a web browser to unblock that PR on the unanswered spec question.
Sure, there's a world where everyone in every company uses Zed or similar, all the way up to the C-suite.
But it's far more likely that web technologies become the things that break down bottlenecks to AI-speed innovation, and if that's the case, IDEs built with an eye towards being portable to web environments (including their entire extension ecosystems) become unbeatable.
It’s part of the furniture at this point, for better or worse. Maybe don’t bet on it, but certainly wouldn’t be smart to bet against it, either.
I used Visual Studio Code across a number of machines including my extremely underpowered low-spec test laptop. Honestly it’s fine everywhere.
Day to day, I use an Apple Silicon laptop. These are all more than fast enough for a smooth experience in Visual Studio Code.
At this point the only people who think Electron is a problem for Visual Studio Code either don’t actually use it (and therefore don’t know what they’re talking about) or they’re obsessing over things like checking the memory usage of apps and being upset that it could be lower in their imaginary perfect world.
Alternatives have a lot of features to implement to reach parity
I think the ship sailed
In order to build a web app, you will first need a web app
Meanwhile, JetBrains IDEs are still the best, but remain unpopular outside of Android Studio.
> remain unpopular outside of Android Studio
What a strange claim. For enterprise Java, is there a serious alternative in 2025? And Rider is slowly eating the lunch of (classic) Visual Studio for C# development. I used it again recently to write an Excel XLL plug-in. I could not believe how far Rider has come in 10 years. PyCharm's lack of popularity surprises me. Maybe it's not good enough at venvs.
Hence even the infamous Ballmer quote.
They have a chance to compete fresh with Fleet, but they are not making progress on even the basic IDE there, let alone getting anywhere near Cursor when it comes to LLM integration.
neovim won the IDE wars before it even started. Zed has potential. I don't know what IntelliJ is.
It started as a modernized Eclipse competitor (the Java IDE) but they've built a bunch of other IDEs based on it. Idk if it still runs on Java or not, but it had potential last I used it about a decade ago. But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
“I never read The Economist” – Management Trainee, aged 42.
The state of Cursor's "review" features makes me convinced that the Cursor devs themselves are not dogfooding their own product.
It drives me crazy when hundreds of changes build up, I've already reviewed and committed everything, but I still have all these "pending changes to review".
Ideally committing a change should treat it as accepted. At the very least, there needs to be a way to globally "accept all".
Cursor Settings -> Agents -> Applying Changes -> Auto-Accept on Commit
I am fed up with VSCode clones, if I have to put up with Electron, at least I will use the original one.
I expect huge improvements are still to be made.
I wonder how much Google shareholders paid for that 20 minutes. And whether it's more or less than the corresponding extremely small stock price boost from this announcement.
I don't think it's connected in any way, though. Their pricing page doesn't mention it. https://antigravity.google/pricing
If it were true, it would be a big miss to not point that out when you run out of credits, on their pricing page, or anywhere in their app.
I should also mention that the first time I prompted it, I got a different "overloaded"-style out-of-credits message. The one I got at the end was different.
I've rotated on paying the $200/month plans with Anthropic, Cursor, and OpenAI. But never Google's. They have maybe the best raw power in their models - smartest, and extremely fast for what they are. But they always drop the ball on usability. Both in terms of software surrounding the model and raw model attitude. These things matter.
It does not.
This is great fundamental business advice. We are in the AI age, but these companies seem to have forgotten basic business things.
Interesting that a next-gen open-source-based agentic coding platform with superhuman coding models behind it can have UI glitches. Very interesting that even the website itself is kind of sluggish. Surely someone, somewhere must at some point have optimized something related to UI rendering, such that a model could learn from it.
And they say:
Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won’t have to worry about, and you feel unrestrained in your usage of Antigravity
You have to wonder what kind of models they ran for this.
Is there another world where $200/m is needed to run hundreds of agents or something?
Am I behind and I don't even know it?
It’s very easy to run into limits if you choose more expensive models and aren’t grandfathered.
Yes, the auto model is good enough for me especially with well documented frameworks (rails, frontend madness).
Thanks for the response; looks like I'm in for a reckoning come New Year's Day.
You can't provide an API key for a project that has billing enabled?
Sounds like the modus operandi of most large tech companies these days. If you exclude Valve.
I think that's the beauty of open source.
Oh ffs
With vendor lock-in to Google's AI ecosystem, likely scraping/training on all of your code (regardless of whatever their ToS/EULA says), and being blocked from using the main VS Code extensions library.