Google Releases Their Own Agentic IDE and So Far It Looks Amazing!
November 18, 2025 - 9 min read - Raymond

The landscape of software development just shifted beneath our feet. For the last year, we have watched the rise of AI coding assistants evolve from simple tab-autocompleters to chat-based sidebars. Tools like Cursor have dominated the conversation, forcing developers to ask: "What is the future of the IDE?"
Today, Google answered that question emphatically.
In a move that frankly took me by surprise, Google has dropped Antigravity, a brand-new, standalone Integrated Development Environment (IDE). This isn't just an extension for VS Code; it is a fully realized platform designed from the ground up for an "Agent-First" paradigm.
I have spent the morning tearing through the documentation and running my first few "missions" with it, and I have to say: the hype appears to be real. Antigravity isn't just trying to help you write code; it is trying to be your partner in building software, capable of planning, executing, and verifying tasks across your terminal, editor, and browser autonomously.
Here is a comprehensive deep dive into everything we know about Google Antigravity, why it feels different from everything else on the market, and why you should download the public preview immediately.
What is Google Antigravity?

At its core, Antigravity is a fork of Visual Studio Code (VS Code). This is a brilliant strategic move because it means the interface is instantly familiar to millions of developers, and it maintains compatibility with the massive ecosystem of existing plugins.
However, calling it a "VS Code fork" does it a disservice. Google has gutted the operational philosophy of the traditional IDE and replaced it with an engine powered by Gemini 3, their newest and most capable Large Language Model (LLM).
The defining characteristic of Antigravity is that it is Agentic. Traditional AI tools wait for you to type or ask a specific question. Antigravity is designed to be given a high-level goal—like "Refactor this authentication service" or "Build a landing page based on this sketch"—which it then breaks down into subtasks, plans out, implements, and (crucially) verifies on its own.
The "Agent-First" Philosophy
Google describes Antigravity as a shift from "passive suggestion" to "active partnership." This is built on four core tenets that seem to solve the biggest frustrations developers have with current AI tools:
Trust via Artifacts: Instead of a black box where code just appears, Antigravity generates "Artifacts"—documents like implementation plans and task lists that you can review before the agent destroys your codebase.
Autonomy: The agent isn't trapped in a chat box. It has permission (if you grant it) to roam across your terminal, your file system, and even a browser to get the job done.
Feedback: You don't just accept or reject code. You can comment on the agent's plans, mark up screenshots it takes, and guide it mid-flight.
Self-Improvement: The system uses "Knowledge Items" to learn from your preferences and past projects, meaning it should theoretically get smarter the longer you use it.
A Tale of Two Interfaces: Editor vs. Manager

One of the most innovative aspects of Antigravity is how it handles the user interface. It recognizes that working with an agent is different from working in code.
1. The Antigravity Editor
This is the synchronous view. It looks like the IDE you know and love. You have your code, your terminal, and your file tree. However, the AI integration is far deeper. You have smart tab autocompletion, context-aware suggestions, and a side panel where you can chat with the agent for immediate, "fast-mode" tasks.
2. The Agent Manager (Mission Control)
This is the game-changer. Antigravity introduces a "Manager" surface—essentially a mission control center.
In the Manager, you aren't necessarily looking at code. You are looking at Workspaces and Task Groups. You can spawn an agent to do background research in one workspace while you code in another. It flips the paradigm: instead of the agent living inside your editor, your editor is just one tool the agent uses.
The Inbox: This acts as a central hub for notifications. If an agent needs permission to run a sudo command or wants you to review a UI change, it pops up here.
Asynchronous Handoffs: You can define a task, set the agent loose, and close the window. The agent continues working in the background.
The Workflow: Plan, Act, Verify
How does it actually feel to code with Antigravity? The workflow is distinctively structured to prevent the AI from hallucinating complex logic.
Step 1: The Planning Phase & Task Groups
When you assign a complex task (in "Planning Mode"), the Agent doesn't just start typing. It creates a Task Group.
It analyzes the request.
It generates a Task List (an Artifact) breaking down the job.
It creates an Implementation Plan (another Artifact).
This is where the "Trust" comes in. You can read the Implementation Plan. It outlines the technical details, the files it will touch, and the logic it will use. You can comment on this plan like a Google Doc. Only once you click "Proceed" (or if you set your policy to "Agent Decides") does it start coding.
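The gate between planning and coding can be sketched as a tiny state machine. To be clear, everything below is hypothetical illustration: the class and method names are mine, not Antigravity's API; only the three policy names come from the product's settings.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewPolicy(Enum):
    """The three autonomy settings described in Antigravity's docs."""
    ALWAYS_PROCEED = "always_proceed"
    REQUEST_REVIEW = "request_review"
    AGENT_DECIDES = "agent_decides"


@dataclass
class TaskGroup:
    """A high-level goal broken into reviewable Artifacts (names invented)."""
    goal: str
    task_list: list[str] = field(default_factory=list)    # first Artifact
    implementation_plan: str = ""                         # second Artifact
    comments: list[str] = field(default_factory=list)     # human feedback on the plan
    approved: bool = False                                # the "Proceed" click

    def may_start_coding(self, policy: ReviewPolicy, risk_is_low: bool = False) -> bool:
        """Coding starts only once the policy (or the human) allows it."""
        if policy is ReviewPolicy.ALWAYS_PROCEED:
            return True
        if policy is ReviewPolicy.AGENT_DECIDES:
            return risk_is_low or self.approved
        return self.approved  # REQUEST_REVIEW: explicit human sign-off


group = TaskGroup(goal="Refactor the authentication service")
group.task_list = ["Map current call sites", "Extract token logic", "Add tests"]
assert not group.may_start_coding(ReviewPolicy.REQUEST_REVIEW)
group.approved = True  # the human clicks "Proceed"
assert group.may_start_coding(ReviewPolicy.REQUEST_REVIEW)
```

The point of the structure: under "Request Review," no amount of agent confidence bypasses the human click; only "Agent Decides" lets a low-risk judgment substitute for approval.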
Step 2: Execution & The Browser Subagent
This is perhaps the most impressive technical feat. Antigravity includes a Browser Subagent.
If you ask the agent to "Change the button color to blue and verify it works," the agent will:
Modify the CSS/Tailwind code.
Spin up the local server via the terminal.
Actually open a Chrome instance, navigate to localhost, and "look" at the page.
It uses a specific model (Gemini 2.5 Pro UI Checkpoint) to "see" the DOM and pixels. It can click, scroll, and type. While it works, it shows an overlay on the browser so you can see exactly what it's doing.
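The control flow of that act-and-verify cycle is easy to sketch in isolation. The real subagent drives a Chrome instance and "looks" at pixels with a vision model; the toy below only shows the retry loop, with stub functions standing in for the edit and the visual check (all names are illustrative, not Antigravity's API).

```python
def verify_loop(apply_change, run_check, max_attempts=3):
    """Apply a change, inspect the result, and retry until the check passes.

    apply_change(feedback) mutates the app; run_check() returns (ok, feedback),
    standing in for the browser subagent's screenshot inspection.
    """
    feedback = None
    for attempt in range(1, max_attempts + 1):
        apply_change(feedback)
        ok, feedback = run_check()
        if ok:
            return attempt
    raise RuntimeError(f"still failing after {max_attempts} attempts: {feedback}")


# Toy stand-in: the "page" starts red; the agent keeps editing until it is blue.
page = {"button_color": "red"}

def apply_change(feedback):
    # The first pass makes a wrong edit; feedback corrects it on the retry.
    page["button_color"] = "blue" if feedback else "green"

def run_check():
    ok = page["button_color"] == "blue"
    return ok, None if ok else "button is not blue"

assert verify_loop(apply_change, run_check) == 2
```

The interesting design choice is that feedback from the check flows back into the next edit, which is exactly what lets the agent fix its own mistakes without human intervention.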
Step 3: Verification & Walkthroughs
Once the job is done, the agent presents a Walkthrough Artifact. This is a summary of what changed, why it changed, and proof that it works.
Screenshots: The browser agent takes screenshots of the UI before and after.
Recordings: You can watch a video playback of the agent interacting with your app.
Diff Reviews: A clean UI to see file changes side-by-side.
Under the Hood: Models and Intelligence
You might expect Google to lock this down to only Gemini, but they have taken a surprisingly open approach.
The Brain: Gemini 3
The default reasoning engine is Gemini 3 Pro. Google claims this model handles "million-token context windows," allowing it to ingest your entire repository. This is vital for large monorepos, where context is usually lost.
Model Optionality
In the settings, you can actually swap the reasoning model!
Anthropic: You can select Claude Sonnet 4.5 (and the "Thinking" variant).
OpenAI: Support for GPT-OSS.
Google Vertex: Various flavors of Gemini.
This "sticky" setting persists per conversation, giving you the flexibility to use the model that fits your specific coding style.
Specialized Sub-Models
Antigravity isn't just one LLM; it's a stack of them:
Nano Banana: A model specifically for generative images (used for UI mockups).
Gemini 2.5 Pro UI: For the browser agent to understand web pages.
Gemini 2.5 Flash: For fast background context summarization.
Connecting to the World: MCP Integration
Antigravity supports the Model Context Protocol (MCP). If you aren't familiar with this, it's a standard that allows the IDE to securely connect to external tools and databases.
This means Antigravity doesn't just know your code; it can know your infrastructure.
Database Awareness: Connect it to Neon or Supabase, and the agent can read your schema to write perfect SQL queries.
Issue Tracking: Connect it to Linear or GitHub. You can say "Fix the bug reported in issue #102," and it will fetch the ticket details automatically.
Documentation: Connect it to Notion or internal wikis to understand business logic.
The MCP Store is built right into the editor, featuring integrations for Heroku, Stripe, MongoDB, and more.
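Under the hood, MCP is JSON-RPC 2.0, so a tool invocation (say, fetching that issue #102) looks roughly like the message below. The `tools/call` method and the `params` shape come from the MCP specification; the tool name `get_issue` and its arguments are invented for illustration, as is the assumption of a GitHub-style server.

```python
import json

# A JSON-RPC 2.0 request as defined by the Model Context Protocol.
# "tools/call" is the real MCP method; the tool name and arguments
# below are made up for the sake of the example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_issue",  # hypothetical tool on an issue-tracker MCP server
        "arguments": {"issue_number": 102},
    },
}
wire = json.dumps(request)
assert json.loads(wire)["params"]["name"] == "get_issue"
```

The value of the standard is that the IDE only ever speaks this one protocol; Linear, Supabase, and Stripe each just expose their own set of named tools.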
Knowledge Items: The Memory System
One of the biggest annoyances with AI coding is repeating yourself. "Use single quotes," "We use Tailwind here," "Don't touch that legacy file."
Antigravity introduces Knowledge Items (KIs). This is a persistent memory system.
Auto-Generation: As you work, the system analyzes your corrections and creates KIs automatically.
Explicit Creation: You can tell it, "Here is our style guide," and it saves it as a KI.
When the agent starts a new task, it scans your Knowledge Items. If a KI is relevant, it retrieves that context. This suggests that the agent will become a better "employee" the longer it works in your specific codebase.
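That retrieval step presumably amounts to relevance-matching stored notes against the incoming task. Here is a deliberately crude sketch using word overlap; a real system would almost certainly use embedding similarity, and none of these names are Antigravity's.

```python
def relevant_kis(task: str, knowledge_items: list[str], min_overlap: int = 2):
    """Return stored notes sharing at least `min_overlap` words with the task.

    A toy stand-in for Knowledge Item retrieval: real systems would rank
    by semantic similarity rather than raw word overlap.
    """
    task_words = set(task.lower().split())
    return [ki for ki in knowledge_items
            if len(task_words & set(ki.lower().split())) >= min_overlap]


kis = [
    "Use single quotes in all JavaScript files",
    "We use Tailwind for styling here",
    "Do not touch the legacy billing module",
]
hits = relevant_kis("Restyle the login page, we use Tailwind", kis)
assert hits == ["We use Tailwind for styling here"]
```

However it is implemented, the key property is the same: only the relevant slice of memory is injected into the prompt, so the context window isn't burned on every preference you've ever stated.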
Safety, Security, and Controls
Giving an AI autonomous control over your terminal and browser is terrifying. Google clearly anticipates this fear and has built in several layers of "Safety Rails."
1. Artifact Review Policy
You can configure how much autonomy the agent has via three settings:
Always Proceed: The agent goes full cowboy mode.
Request Review: The agent must ask for permission before implementing a plan.
Agent Decides: A hybrid approach where the agent judges the risk level.
2. Terminal & Browser Permissions
Allowlist/Denylist: You can set specific terminal commands that are allowed or denied.
BadUrlsChecker: The browser subagent checks URLs against a Google safety service. You can also locally allowlist specific domains (like your localhost ports) so it doesn't wander off to random websites.
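An allowlist/denylist check like the one above is conceptually simple; a minimal sketch follows. The rule sets and function names are invented examples, not Antigravity's defaults, and this only inspects the first token of the command, which a real implementation would have to harden (pipes, subshells, and so on).

```python
import shlex

# Deny rules win over allow rules; anything unmatched is denied by default.
# These rule sets are invented for illustration only.
ALLOW = {"npm", "git", "ls", "pytest"}
DENY = {"rm", "sudo", "curl"}

def command_permitted(command_line: str) -> bool:
    """Check the executable of a shell command against allow/deny lists."""
    argv = shlex.split(command_line)
    if not argv:
        return False
    executable = argv[0]
    if executable in DENY:
        return False
    return executable in ALLOW

assert command_permitted("git status")
assert not command_permitted("sudo rm -rf /")
assert not command_permitted("node server.js")  # unlisted, so denied by default
```

Default-deny is the sensible posture here: the cost of the agent pausing to ask permission is far lower than the cost of one bad rm.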
3. Local Secret Protection
By default, the agent only accesses files in the workspace. There is a setting to "Allow Agent Non-Workspace File Access," but it is off by default to prevent the AI from accidentally reading your global .ssh keys or other sensitive data.
Pricing and Availability
Here is the best part: It is currently free.
Google has launched Antigravity in Public Preview.
Cost: No charge for individual users.
OS Support: Available for macOS (Monterey+), Windows 10/11, and Linux (Ubuntu, Debian, Fedora).
Rate Limits: Google describes them as "generous." They refresh every five hours. While there is a cap, Google expects very few power users to hit it. It seems they are subsidizing the heavy compute of Gemini 3 to get adoption data.
There is currently no paid enterprise tier, but the documentation hints that this is for "individual accounts" under Google's standard terms, implying a paid "Pro" or "Team" version is inevitable.

Comparison vs. The Competitors
While I need more time to benchmark this properly, the specs suggest Antigravity is aiming higher than Cursor.
Context: Gemini 3’s million-token window is significantly larger than what most local copilot setups offer.
Autonomy: Cursor acts largely as a super-powered autocomplete and chat. Antigravity's "Manager" interface and "Task Groups" suggest a workflow where the human manages the AI, rather than just chatting with it.
Browser Integration: Native, autonomous browser control for verification is a feature most other IDEs rely on third-party plugins or hacky workarounds to achieve.
The "Liftoff" Moment?
Google Antigravity represents a massive swing. By forking VS Code, they solved the barrier to entry. By integrating Gemini 3 and the Agent Manager, they are attempting to solve the workflow bottleneck.
This isn't just about writing code faster; it's about offloading the mental overhead of planning, context switching, and verification. The ability for an agent to write code, run it, see the error in the browser, and fix it without my intervention is the "Holy Grail" of agentic coding.
What's Next?
I am currently running Antigravity through a gauntlet of tests:
Refactoring a messy legacy Python backend.
Building a React frontend from a napkin sketch.
Testing the MCP integration with a live Postgres database.
Stay tuned for a follow-up post where I will share the results of these tests, including where the agent failed (because it will fail) and where it succeeded.
I am super excited that Google did this. It was a huge surprise, and it finally feels like the Agentic Era of software development has officially arrived.