
Vercel AI SDK 6: Build Streaming AI Apps in Next.js Without the Boilerplate

AI SDK 6 from Vercel makes it easy to add streaming AI to any Next.js app. Learn what is new — agents, human-in-the-loop, DevTools, MCP support — with real code examples.


Six months ago, adding AI to a Next.js app meant writing a mountain of boilerplate: manual fetch calls, custom streaming parsers, error handling for five different provider response shapes, and a prayer that the API stayed consistent. AI SDK 6 from Vercel eliminates all of that. It is the TypeScript toolkit that has quietly become the industry standard for LLM-powered web apps, and the latest major version ships some genuinely game-changing features.

What Is the Vercel AI SDK?

The AI SDK is a unified TypeScript library for building AI-powered applications. It gives you a single consistent API that works across 20+ model providers — OpenAI, Anthropic, Google Gemini, AWS Bedrock, Cohere, Mistral, and more. Swap providers by changing one line. No rewrites.
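For example, switching from OpenAI to Anthropic is that one-line change. A minimal sketch (the model IDs are illustrative; use whichever models your account has access to, and note this needs a provider API key at runtime):

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// import { anthropic } from '@ai-sdk/anthropic';

const { text } = await generateText({
  model: openai('gpt-4o'),
  // model: anthropic('claude-sonnet-4-5'), // same call, different provider
  prompt: 'Explain streaming responses in one sentence.',
});

console.log(text);
```

Everything else in the call stays identical, which is what makes provider swaps a one-line diff.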

It is split into two core pieces:

  • AI SDK Core — the server-side primitives: generateText, streamText, generateObject, tool calling, and the new Agent abstraction

  • AI SDK UI — framework-agnostic hooks for the client: useChat, useCompletion, useObject
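To make the Core side concrete, here is generateObject, which constrains the model's output to a schema. A minimal sketch (requires an OPENAI_API_KEY at runtime; the schema is illustrative):

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    title: z.string(),
    tags: z.array(z.string()),
  }),
  prompt: 'Suggest a title and tags for a blog post about streaming AI in Next.js.',
});

// `object` is fully typed from the schema: { title: string; tags: string[] }
console.log(object.title);
```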

It works with Next.js (App Router and Pages Router), Svelte, Vue/Nuxt, Node.js, Expo, and TanStack Start. If you are building something JS-based, it probably works.

What Is New in AI SDK 6?

The jump from v5 to v6 is not a full rewrite — it is a spec upgrade. Most changes are additive, and there is a codemod to handle the breaking bits. Here is what is actually new.

The Agent Abstraction

The biggest new concept. AI SDK 6 ships a formal Agent interface with a built-in ToolLoopAgent class that handles the full agentic loop automatically: LLM call, tool execution, follow-up LLM call, repeated until the task is done. No more hand-written while loops to chase tool calls.

import { ToolLoopAgent, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

const agent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  tools: { getWeather, searchWeb },
  stopWhen: stepCountIs(10), // cap the loop so it cannot run away
});

const result = await agent.generate({
  prompt: 'What is the weather in Tokyo right now?',
});
console.log(result.text);

Define your tools, cap the number of steps so the loop cannot spiral, and let the agent handle orchestration. You can also implement the Agent interface yourself for fully custom behavior.

Human-in-the-Loop: The needsApproval Flag

Agents in production need guardrails. The new needsApproval flag on tools pauses the agent loop and waits for explicit human confirmation before running sensitive operations. Perfect for anything that writes to a database, deletes files, or sends emails.

import { tool } from 'ai';
import { z } from 'zod';
import fs from 'node:fs/promises';

const deleteFileTool = tool({
  description: 'Deletes a file from the server',
  inputSchema: z.object({ path: z.string() }),
  needsApproval: true, // pauses the loop and waits for human confirmation
  execute: async ({ path }) => fs.unlink(path),
});

When the model tries to call a tool with needsApproval: true, execution halts. Your app can surface a confirmation UI, get user approval, then resume the loop. This is huge for production safety.
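The resume mechanics are framework-specific, but the underlying pattern is easy to sketch in plain TypeScript. The names below are illustrative, not AI SDK API: sensitive calls go into a pending queue instead of executing, and the human decision either runs or discards them.

```typescript
type PendingCall = { toolName: string; args: unknown };

// Illustrative approval gate: this is the pattern needsApproval automates,
// not the SDK's actual implementation.
class ApprovalGate {
  private pending: PendingCall[] = [];

  // Called when the model requests a tool marked as requiring approval.
  request(call: PendingCall): void {
    this.pending.push(call);
  }

  // Called after the human clicks approve/deny in your confirmation UI.
  resolve(approve: boolean, execute: (call: PendingCall) => void): void {
    const call = this.pending.shift();
    if (call && approve) execute(call); // run the tool, then resume the loop
    // on deny: drop the call and let the agent continue without the result
  }
}
```

The agent loop stays suspended while a call sits in the queue, which is exactly the window your confirmation UI fills.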

AI DevTools

Run npx @ai-sdk/devtools in your project and get a full debugging interface — every LLM call, every tool execution, token usage, latency breakdown, and execution traces. Think React DevTools but for your AI layer. It is the observability piece that was always missing.

Stable MCP Support

The @ai-sdk/mcp package is now stable and ships with OAuth authentication, HTTP transport, resources, prompts, and elicitation support. This lets your agents connect to any MCP server — databases, file systems, external APIs — without writing a single custom integration.
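As a rough sketch of what an MCP connection looks like, here is the experimental v5 client shape from the core ai package; the stable @ai-sdk/mcp package may expose different names, so treat the import and options below as assumptions (the server URL is a placeholder):

```typescript
import { experimental_createMCPClient as createMCPClient, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Connect to an MCP server over HTTP/SSE (placeholder URL).
const mcpClient = await createMCPClient({
  transport: { type: 'sse', url: 'https://example.com/mcp' },
});

// Expose the server's tools to the model like any local tool set.
const result = streamText({
  model: openai('gpt-4o'),
  tools: await mcpClient.tools(),
  prompt: 'List the files in the project root.',
});
```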

Other Highlights

  • Native reranking via the rerank function (Cohere, Bedrock, Together.ai)

  • Image editing: generateImage now accepts reference images for inpainting and style transfer

  • Standard JSON Schema V1 support — works with any schema library, not just Zod

  • Enhanced usage reporting with cache metrics and reasoning token counts
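The JSON Schema point deserves a quick illustration. The jsonSchema helper from the ai package wraps a plain schema object so you can skip Zod entirely; a minimal sketch:

```typescript
import { jsonSchema } from 'ai';

// A plain JSON Schema, no Zod required. The generic parameter provides typing.
const citySchema = jsonSchema<{ city: string }>({
  type: 'object',
  properties: { city: { type: 'string' } },
  required: ['city'],
});

// citySchema can now be passed anywhere the SDK accepts a schema,
// e.g. as a tool input schema or to generateObject.
```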

Getting Started in 3 Steps

Step 1: Install

npm install ai @ai-sdk/openai

Step 2: API Route (App Router)

This is all you need for a streaming chat endpoint:

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

The toUIMessageStreamResponse() method handles all the streaming protocol details for you. Done.

Step 3: Client UI with useChat

// app/page.tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}:</strong>{' '}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Ask something..."
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

That is a fully working streaming chat UI. The useChat hook manages message state, sends requests to your route handler, and streams responses in real time. Since v5, the hook leaves input state to you (a single useState), but everything else is handled: no message-state sprawl, no manual fetch logic.

Tool Calling in 15 Lines

Tools are how you give models access to real data. Define a tool with a Zod schema and an execute function, and the model automatically decides when to call it:

import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: {
    getWeather: tool({
      description: 'Get current weather for a city',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ temp: '22C', condition: 'Cloudy' }),
    }),
  },
  stopWhen: stepCountIs(2), // allow a follow-up step after the tool result
  prompt: 'What is the weather in Berlin?',
});

console.log(result.text);

The model sees the tool's description and input schema, decides whether it needs to call it, runs execute, then uses the result to form a final response. Combine multiple tools and you have an agent.

Migrating from AI SDK 5?

Good news — there is a codemod that handles most of the breaking changes automatically:

npx @ai-sdk/codemod v6

Run it, review the diff, and you are mostly done. The v6 spec changes are evolutionary, not revolutionary. If your app was not doing anything exotic with the internals, migration should be smooth.

TL;DR

  • AI SDK 6 is the TypeScript standard for building LLM-powered web apps in 2026

  • New ToolLoopAgent handles multi-step agentic loops automatically — no manual loop wrangling

  • needsApproval flag enables human-in-the-loop control for sensitive tool calls in production

  • AI DevTools gives full observability into every LLM call and tool execution

  • MCP support is now stable — connect agents to any external system

  • Works with 20+ providers through a single unified API

  • Migrate from v5 with npx @ai-sdk/codemod v6