On March 18, 2026, Google Labs didn’t just update a product. It redefined what an AI design tool is supposed to be.
There’s a particular kind of frustration that every designer knows. You have an idea — vivid, specific, fully formed in your head — and then you open a blank canvas and watch it slowly drain away as you wrestle with frames, grids, and component libraries. The idea doesn’t die. It just gets buried under the mechanics of the tool.
Google’s bet, with the sweeping 2026 overhaul of Stitch, is that this friction is no longer necessary. On March 18, Google Labs announced what it calls an “AI-native software design canvas” — a complete rethinking of Stitch that introduces five major new capabilities: an infinite AI-assisted canvas, a smarter project-wide design agent, voice-driven “vibe design,” instant interactive prototyping, and a built-in design system powered by a new DESIGN.md file format.
The announcement landed with force. Figma’s stock dropped roughly 8.8% on the news. Designers flooded LinkedIn and X with reactions ranging from genuine excitement to cautious skepticism. And the broader tech press — from TechCrunch to Investing.com — framed it as a direct challenge to the incumbents of the design world.
Whether or not Stitch is a “Figma killer” — a label that experts are quick to complicate — the update represents something genuinely new: a design tool that doesn’t ask you to learn its language. Instead, it learns yours.

The Infinite Canvas: Where Ideas Come First
The most immediately visible change in the new Stitch is the canvas itself. Gone is the traditional design tool paradigm of starting with a fixed artboard or frame. In its place is an infinite, open workspace — one that Google describes as giving “ideas room to evolve.”
But calling it simply “infinite” undersells what makes it different. The canvas is AI-native, meaning it doesn’t just give you more space — it understands everything you put into it. Drop in a photograph for visual inspiration, paste a block of code, sketch a rough wireframe with a stylus, or type a paragraph describing the mood you’re after. Stitch reads all of it as context and uses it to generate UI.
As Google’s official announcement puts it, the new canvas “amplifies your creativity” by letting you “bring your ideas regardless of shape — images, text or even code — directly to the canvas.” That’s not marketing language for its own sake. It reflects a genuine architectural shift: the AI isn’t waiting for a structured prompt. It’s reading the room.
Early users have responded warmly. One designer noted that the open canvas “makes people feel inclined to use it” — a small but telling observation. Design tools have historically imposed cognitive overhead before you’ve even started. Stitch’s new canvas removes that barrier. You don’t have to decide what kind of thing you’re making before you start making it.
This approach has clear parallels to Figma’s FigJam, which also offers an open, freeform workspace for early ideation. But where FigJam is a whiteboard that eventually hands off to Figma proper, Stitch’s canvas is the whole pipeline. You ideate and generate in the same space, without switching tools or modes.
The Design Agent: A Collaborator That Remembers Everything
If the canvas is where ideas live, the new design agent is what makes them coherent. Stitch’s 2026 update introduces a project-wide AI agent that doesn’t just respond to individual prompts — it tracks the entire history of your design and reasons across it.
Google’s blog describes it as “a brand new design agent that can reason across the entire project’s evolution,” paired with an Agent Manager interface that lets you organize multiple design directions — or “threads” — in parallel. In practice, this means the agent remembers every screen you’ve built, every prompt you’ve issued, and every stylistic decision you’ve made. When you ask it to generate the next screen, it doesn’t start from scratch. It starts from everything that came before.
The implications are significant. If you’ve designed a login screen and a home dashboard, the agent can infer what comes next — a settings page, a profile view, an onboarding flow — without you having to specify it. If you want to explore an alternate visual direction, the Agent Manager lets you branch the project, running a new version alongside the original so you can compare them side by side.
Investing.com noted that this capability allows Stitch to “analyze entire project histories and track multiple design directions simultaneously” — a description that captures both the technical achievement and the practical value. For designers working on complex, multi-screen products, this kind of contextual memory is transformative. It’s the difference between a tool that executes commands and one that participates in a design process.
Powered by Google’s Gemini models, the agent represents a meaningful step beyond what earlier versions of Stitch could do. Previously, Stitch generated screens one at a time, with no persistent understanding of the broader project. Now it functions more like an AI teammate — one that’s been in every meeting, read every brief, and can speak intelligently about where the design has been and where it might go.
Vibe Design: When You Can Just Talk
Of all the new features, “vibe design” is perhaps the most immediately striking — and the most likely to change how designers actually work day to day.
The concept is straightforward: you talk to Stitch. Click the microphone icon, and the design agent becomes a voice-activated collaborator. Say “Add three different navbar layouts” and it generates them. Say “Show this screen in dark mode with a warmer color palette” and it redraws accordingly. The agent doesn’t just execute commands — it can ask clarifying questions, offer critiques, and engage in a genuine back-and-forth about the design.
Google’s announcement describes the agent as capable of “designing a new landing page by interviewing you” — a phrase that captures the conversational, collaborative nature of the interaction. Rather than translating your vision into a structured prompt, you simply describe it, and the AI handles the translation. Real-time updates appear as you speak, so the canvas evolves in response to your words without any lag or mode-switching.
The practical value here is hard to overstate. One of the persistent friction points in AI-assisted design has been the prompt itself — the need to articulate your vision in a way the AI can parse. Voice removes that friction almost entirely. You can think out loud, change your mind mid-sentence, and respond to what you see on screen in natural language. Early users have described the experience as “incredible for exploring many ideas at lightspeed” — a sentiment that reflects how much cognitive overhead the voice interface eliminates.
Google frames this as transforming the design agent into a “sounding board” — a tool for dynamic critique and rapid iteration, not just execution. That framing matters. It positions Stitch not as a replacement for design thinking, but as an accelerant for it.
No mainstream design tool currently offers anything comparable. Figma, Sketch, and Adobe XD all have AI features, but none of them let you have a conversation with your canvas. Stitch’s voice mode is, for now, genuinely novel.
Instant Prototyping: From Static to Interactive in One Click
The fourth major upgrade addresses one of the most time-consuming parts of the design process: turning static screens into interactive prototypes. Stitch now does this automatically, with a single “Play” button.
Hit Play, and Stitch links your screens together into a clickable, navigable prototype. No manual hotspot creation. No wiring up transitions. The AI infers the user journey from the screens you’ve built and connects them accordingly. If you click a button in the prototype and there’s no screen to navigate to, Stitch generates one — inferring from context what the logical next step in the user flow should be.
Google’s blog describes this as “mapping out user journeys effortlessly,” and the phrase is apt. The ability to “refine individual elements or overhaul entire flows with a single click” compresses what used to be hours of work into seconds. Investing.com confirmed that the feature “allows users to connect screens and preview app flows through a ‘Play’ button” — a simple description of a capability that fundamentally changes the prototyping workflow.
The significance of this feature extends beyond convenience. Prototyping has historically been a gate — a step that required dedicated time and effort before you could test an idea with real users. By making prototyping instant and automatic, Stitch removes that gate. You can test an idea the moment you have it, get feedback, and iterate — all within the same session.
This is particularly valuable for early-stage design exploration, where the goal is to test assumptions quickly rather than build polished artifacts. Stitch’s instant prototyping turns the tool into a rapid hypothesis-testing machine, not just a mockup generator.
Solving the Consistency Problem
The fifth major upgrade is perhaps the most technically interesting, and the one most likely to matter in professional design workflows: the introduction of DESIGN.md.
Consistency has been a persistent challenge in AI-generated design. When you generate screens one at a time, there’s no guarantee they’ll share the same visual language — the same color palette, typography, spacing system, or component style. DESIGN.md is Stitch’s answer to that problem.
Every Stitch project now initializes with a default design system — colors, type scales, spacing rules, component styles — captured in a plain-language Markdown file called DESIGN.md. This file becomes the “source of truth” that the AI reads when generating new screens. As long as DESIGN.md is in place, every screen Stitch generates will conform to the same visual system.
But the real power of DESIGN.md is its portability. You can extract a design system from any existing website by URL, import it into Stitch, and have the AI generate new screens that match that site’s visual language. You can export a DESIGN.md from one Stitch project and use it as the starting point for another. You can even feed Stitch a DESIGN.md derived from a developer’s toolkit — Tailwind configuration files, Figma tokens, Storybook themes — and have it apply those rules to generated UI.
Google’s announcement frames this as letting you “apply your designs to a different Stitch project so you don’t have to reinvent the wheel.” Investing.com describes DESIGN.md as “a markdown file format that enables users to extract design systems from URLs or transfer design rules between projects and external tools” — a description that captures both the simplicity of the format and the breadth of its applications.
For power users, Google’s GitHub hosts a [design-md](https://tessl.io/registry/skills/github/google-labs-code/stitch-skills/design-md) skill that analyzes your screens and writes a DESIGN.md describing specific design decisions — color values like “Deep Muted Teal-Navy #294056 for primary actions,” shape rules like “pill-shaped buttons” — so that new screens match the existing visual system with precision.
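To make that concrete, here is a rough sketch of what such a file might contain. The exact schema Stitch uses isn't public, so everything below is a placeholder except the teal-navy primary color and the pill-button rule quoted above:

```markdown
# Design System
<!-- Illustrative sketch only. Just the primary color and the pill-button
     rule come from the design-md skill's documented output; all other
     values are placeholders. -->

## Colors
- Primary action: Deep Muted Teal-Navy (#294056)
- Surface: soft off-white (#F7F5F1) <!-- placeholder -->
- Text: near-black charcoal (#1C1C1E) <!-- placeholder -->

## Typography
- Headings: geometric sans-serif, semibold <!-- placeholder -->
- Body: 16px base size, 1.5 line height <!-- placeholder -->

## Shape
- Buttons: pill-shaped (fully rounded corners)
- Cards: 12px corner radius, soft shadow <!-- placeholder -->
```

Because the file is plain-language Markdown, it can be read by a human, diffed in version control, and handed to any other tool or AI agent that accepts text as context.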
This is a meaningful step toward making AI-generated design production-ready. The gap between “AI-generated mockup” and “design system-compliant screen” has been one of the main reasons professional designers have been cautious about AI tools. DESIGN.md narrows that gap considerably.
The Developer Bridge: Code Export and MCP Integration
Alongside the five headline features, Stitch’s 2026 update also deepens its integration with developer workflows — a signal that Google is positioning Stitch not just as a design tool, but as a bridge between design and engineering.
Stitch can now export designs directly to Figma as editable layers, or output raw HTML, CSS, and React code that can be opened and refined in any IDE. TechCrunch noted that Stitch “supports directly exporting to Figma and can expose code so that it can be refined and worked on in an IDE” — a capability that makes Stitch a genuine part of the production pipeline, not just a prototyping sandbox.
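To give a feel for that hand-off, here's an illustrative sketch of the kind of self-contained React component such an export could produce. The component name, props, and most styling values are assumptions rather than actual Stitch output; only the primary color and pill shape echo the DESIGN.md rules described above:

```tsx
import * as React from "react";

// Illustrative only: the shape of a component a Stitch export might emit.
// Name, props, and most values are assumptions, not actual Stitch output.
type PrimaryButtonProps = {
  label: string;
  onClick: () => void;
};

export function PrimaryButton({ label, onClick }: PrimaryButtonProps) {
  return (
    <button
      onClick={onClick}
      // Pill shape and teal-navy fill mirror the DESIGN.md rules above
      style={{
        backgroundColor: "#294056",
        color: "#ffffff",
        borderRadius: "9999px",
        padding: "12px 24px",
        border: "none",
        cursor: "pointer",
      }}
    >
      {label}
    </button>
  );
}
```

The point is less the specific markup than the fact that the export is editable source, so engineers can refactor it like any other code rather than redrawing it from a screenshot.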
More technically ambitious are Stitch’s Model Context Protocol (MCP) server and SDK, which let developers invoke Stitch programmatically from other tools. A Medium post from Google Cloud demonstrated how Google Antigravity — Google’s AI coding agent — can call the Stitch MCP server to generate UI screens on the fly within a development workflow. The result is a system where design and code generation happen in concert, not in sequence.
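To show the shape of that kind of integration, here's a minimal sketch written against the open-source MCP TypeScript SDK. The client calls are real SDK methods, but the endpoint URL and the `generate_screen` tool name are assumptions for illustration; Google's announcement doesn't spell out the Stitch server's actual tool surface:

```typescript
// Minimal sketch of calling a Stitch-style MCP server from a build tool.
// The endpoint URL and the "generate_screen" tool name are assumptions;
// the client API itself comes from the official MCP TypeScript SDK.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  const client = new Client({ name: "stitch-demo-client", version: "1.0.0" });

  // Hypothetical endpoint; in practice this would come from Stitch's MCP docs.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://example.com/stitch/mcp")
  );
  await client.connect(transport);

  // Discover what the server actually exposes before calling anything.
  const { tools } = await client.listTools();
  console.log("Available tools:", tools.map((t) => t.name));

  // Hypothetical tool invocation: ask the server to generate a screen.
  const result = await client.callTool({
    name: "generate_screen",
    arguments: { prompt: "Settings page matching the existing DESIGN.md" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```

This is the pattern the Antigravity demo relies on: the coding agent treats Stitch as one more tool it can call mid-task, so UI generation becomes a step inside the development loop rather than a separate design phase.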
As one developer put it, Stitch lets you “annotate and modify the designs, and even view the code or download it” — emphasizing that what you get out of Stitch is actual, production-ready code, not just static images or mockups. This is the detail that matters most for engineering teams evaluating whether to integrate Stitch into their workflow.
Is This a Figma Killer? The Expert Verdict
The question that dominated the conversation on March 18 — and the days that followed — was whether Stitch represents an existential threat to Figma. The 8.8% stock drop suggested that investors thought so, at least initially.
The expert consensus is more nuanced. TechCrunch was clear that Stitch “isn’t meant to be a full-fledged design platform like Figma or Adobe XD” — it’s a “first iteration” tool, optimized for speed and exploration rather than the kind of deep, collaborative, system-level design work that Figma excels at.
The most memorable framing came from Tech with Eldad on the Muzli blog: “Stitch is a microwave; Figma is a full kitchen.” You wouldn’t bake a Thanksgiving turkey in a microwave, but for quick prototyping, it’s perfect. The same post advised a hybrid workflow — “Stitch it” first to generate many rapid concepts, then “Refine it in Figma” to build something production-ready.
That framing has resonated widely. Designers on social media describe a future workflow where they generate dozens of concepts in Stitch, pick the most promising direction, and bring it into Figma for finish work. As one Medium writer put it, designers will “use Stitch to explore ten directions in the time it used to take to explore one, then bring that into Figma to actually build something production-ready.”
The bluntest summary came from a commentator who wrote: “Google didn’t destroy Figma. They made the boring parts faster.” That’s probably the most accurate description of what Stitch’s 2026 update actually does. It doesn’t replace the craft of design. It removes the friction that gets in the way of it.
How to Get the Most Out of the New Stitch
For designers and developers looking to integrate Stitch into their workflow, a few practical principles have emerged from early users and expert commentary.
Start with Stitch, finish elsewhere. The most effective workflow is to use Stitch for rapid ideation — generating ten or fifteen concepts in the time it used to take to build one — and then bring the best direction into Figma or another tool for refinement. Stitch is optimized for speed and exploration; Figma is optimized for precision and collaboration.
Be specific with prompts. Stitch’s output quality scales directly with the specificity of your input. “Mobile dashboard with card layout, pastel color palette, and sans-serif typography” will produce better results than “a dashboard.” Treat the AI like a talented junior designer who needs clear direction.
Use DESIGN.md as your style anchor. Once you’ve established a visual direction you like, export the DESIGN.md file and use it as the starting point for every subsequent project. This is the most reliable way to maintain consistency across a product family or design system.
Talk to it. The voice interface is genuinely useful for minor adjustments and rapid iteration. “Convert this to dark mode” or “Give me a version with a sidebar instead of a top nav” are exactly the kinds of quick tweaks that voice handles well — changes that would take several minutes to execute manually.
Test early and often. The instant prototyping feature is most valuable when you use it continuously, not just at the end of a design session. Clicking Play after every significant change gives you a real sense of how the flow feels, and the AI’s ability to generate missing screens means you can test incomplete flows without blocking.
Connect to your code pipeline from the start. If you’re building something that will eventually be handed off to developers, set up the Figma export or HTML export early. Catching structural issues before you’ve invested hours in a design direction is far less painful than discovering them at handoff.
The Bigger Picture
Stitch’s 2026 update is part of a broader shift in how the design industry is reckoning with AI. Tools like Uizard, TeleportHQ, and Lovable have been chipping away at the edges of the design workflow for years. What Google has done with this update is bring those capabilities into a single, coherent, deeply integrated system — and back it with the full weight of Google’s AI infrastructure.
The result is a tool that genuinely changes the economics of early-stage design. Exploring ten directions used to take roughly ten times as long as exploring one. With Stitch, that multiplier collapses. The cost of a bad idea is lower, which means designers can afford to have more ideas — and find the good ones faster.
That’s not a small thing. It’s a fundamental change in how design work gets done. And if the March 18 announcement is any indication, it’s only the beginning.