ai-sdk
by vercel
Use the ai-sdk skill to install the core ai package, verify current docs, and apply modern usage patterns for streaming, tools, agents, useChat, and gateway-first setup in full-stack apps.
This skill scores 84/100, which means it is a solid directory listing candidate: agents get strong trigger cues, explicit anti-hallucination operating rules, and practical references for current AI SDK usage, though install-time and workflow execution still require some interpretation by users.
- Strong triggerability from frontmatter and description, with explicit use cases like generateText, streamText, tools, agents, embeddings, providers, and useChat.
- Good operational guidance: it tells agents to verify APIs in node_modules/ai/docs or ai-sdk.dev and explicitly warns that internal knowledge is outdated.
- Useful supporting references cover real adoption pain points such as deprecated API changes, AI Gateway usage, DevTools setup, and type-safe agent patterns.
- No install command is provided in SKILL.md, so package setup depends on the agent inferring the correct package-manager command from the project.
- Workflow guidance is mostly document-driven rather than step-by-step executable recipes, with no scripts or embedded code fences in the main skill file.
Overview of ai-sdk skill
What this ai-sdk skill helps you do
The ai-sdk skill is a practical guide for developers building with Vercel's AI SDK, especially when you need current, version-aware help instead of generic LLM advice. Its real job is to help you choose the right API shape, verify modern syntax, and avoid stale patterns while adding chat, streaming, tools, structured generation, embeddings, or agents to an app.
Who should install this ai-sdk skill
Best fit readers include:
- Full-stack developers evaluating ai-sdk for Full-Stack Development
- Teams migrating older AI SDK code to newer APIs
- Developers using `generateText`, `streamText`, tools, `ToolLoopAgent`, or `useChat`
- Anyone comparing provider setup across OpenAI, Anthropic, Google, and gateway-based access
- Builders who want fewer wrong starts than a plain "write me AI code" prompt
Why this skill is more useful than a generic prompt
The strongest differentiator is that the skill explicitly warns that internal model knowledge about the AI SDK is often outdated. Instead of trusting memory, it pushes you toward local package docs, source inspection, and targeted references like common API changes, gateway usage, devtools, and type-safe agent patterns. That makes this ai-sdk skill more reliable for install decisions and implementation work than ordinary prompting.
What matters most before adoption
What users usually care about first:
- whether they should install only `ai` first
- how to pick provider packages later instead of over-installing
- which APIs have changed recently
- whether `useChat` examples found online are still valid
- how to debug tool loops and streamed runs
- whether the SDK fits server routes, React UIs, or both
If those are your blockers, this page will save time.
How to Use ai-sdk skill
Start with the minimum ai-sdk install path
Use the smallest possible install step first:
```shell
pnpm add ai
```
The repository guidance is deliberate here: install only the core ai package first. Do not immediately add @ai-sdk/openai, @ai-sdk/react, or other provider/client packages until your actual use case requires them. This reduces false assumptions and keeps your implementation aligned with current docs.
If you are installing the GitHub skill itself into your agent workflow, use:
```shell
npx skills add vercel/ai --skill ai-sdk
```
Verify docs locally before asking for code
The key usage pattern is not "ask from memory." It is:
- Confirm `node_modules/ai/docs/` exists.
- Search `node_modules/ai/docs/` and `node_modules/ai/src/`.
- Only then fall back to ai-sdk.dev or the repo references.
This is the most important practical behavior in the ai-sdk guide, because AI SDK APIs evolve fast and many public examples lag behind.
Read these files first
If you want fast orientation, start in this order:
- `skills/use-ai-sdk/SKILL.md`
- `skills/use-ai-sdk/references/common-errors.md`
- `skills/use-ai-sdk/references/ai-gateway.md`
- `skills/use-ai-sdk/references/devtools.md`
- `skills/use-ai-sdk/references/type-safe-agents.md`
Why this order works:
- `SKILL.md` gives the trigger conditions and workflow
- `common-errors.md` catches API rename traps early
- `ai-gateway.md` helps you get a working model quickly
- `devtools.md` improves debugging once code runs
- `type-safe-agents.md` matters when UI and agent types must line up
Know the current API drift before you write code
A major adoption blocker is copying old examples. The references call out several changes that materially affect ai-sdk usage:
- `maxTokens` → `maxOutputTokens`
- `maxSteps` → `stopWhen: stepCountIs(n)`
- tool `parameters` → `inputSchema`
- some older object-generation patterns have changed
- `useChat` has changed significantly and should be verified before reuse
If your first prompt to the skill includes your current package version and any legacy code, you get much better migration help.
Use AI Gateway when you need a fast first success
For many teams, the fastest path is gateway-backed setup. The skill includes a useful reference for Vercel AI Gateway, where a model can be selected with a string like:
```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  prompt: 'What is love?',
});
```
This is helpful when your decision is less about provider SDK plumbing and more about validating product behavior quickly.
Before hardcoding any model ID, fetch the current model list. The reference explicitly warns not to trust memory for model names.
What input to give the ai-sdk skill
Give the skill enough context to choose the right package shape and API pattern. A strong request usually includes:
- runtime: Next.js, Node.js, Vercel, edge/serverless, etc.
- goal: chat UI, agent, RAG, structured extraction, tool calling
- current package versions
- whether you need streaming
- provider preference or gateway usage
- frontend requirements like React hooks or server-only usage
- any failing code and exact error text
Weak input:
- "Help me use AI SDK"
Strong input:
- "I have a Next.js app router project on AI SDK 6, need streaming chat with tool calling, want to start with gateway, and my old `useChat` code no longer works. Show the minimal server route and UI shape."
The second prompt lets the skill narrow to the right docs and modern API names.
Turn a rough goal into a better ai-sdk prompt
A good formula:
- app context
- desired user experience
- current implementation state
- constraints
- expected output format
Example:
I'm building a customer-support assistant in Next.js. I need ai-sdk usage for streamed responses, one weather tool, and a React chat UI. Keep packages minimal, prefer gateway first, and explain any AI SDK 6 changes from older examples. Return the file list, install commands, and the smallest working path.
This works better than asking for "an agent" because it gives the skill enough structure to avoid generic scaffolding.
Choose the right workflow for common jobs
Use the skill differently depending on the job:
- For first install: ask for the minimum package set and a single working request
- For migration: paste old code and ask for API renames and behavioral changes
- For tool calling: ask for tool schema shape and stop conditions
- For frontend chat: ask specifically for current `useChat` patterns
- For debugging: ask how to inspect runs with DevTools and where traces are stored
That job-based prompting is where the ai-sdk skill adds more value than a repo skim.
Use DevTools once code runs but behavior is wrong
When the code compiles but the model behaves unexpectedly, the DevTools reference is high-value. It captures SDK calls, steps, and tool interactions to `.devtools/generations.json`.
This is especially useful for:
- hidden tool-call loops
- malformed structured outputs
- prompt/tool mismatch
- confusing streamed behavior
- token and step inspection during agent runs
For adoption decisions, this matters because it lowers debugging cost after initial install.
Use type-safe agent patterns when UI rendering matters
If you're building an agent-backed UI, the type-safe agent reference is a strong signal that the skill is useful beyond toy examples. It shows a pattern where agent definitions export inferred UIMessage types, making useChat rendering more reliable.
This is especially relevant for ai-sdk for Full-Stack Development, where backend agent configuration and frontend message rendering need to stay aligned.
Practical misfit cases
Do not choose this skill if you mainly need:
- provider-specific SDK docs unrelated to the `ai` package
- general prompt engineering advice without implementation work
- Python-first AI application guidance
- framework-agnostic LLM theory
This skill is best when your question is specifically about implementing or debugging the AI SDK in a JavaScript/TypeScript stack.
ai-sdk skill FAQ
Is this ai-sdk skill good for beginners?
Yes, if you're already comfortable with basic JavaScript or TypeScript. The skill is beginner-friendly in the sense that it narrows the first steps, but it assumes you can edit project files, install packages, and follow framework conventions.
Does the ai-sdk skill replace reading the docs?
No. It is best used as a routing layer that tells you where to look and which modern patterns to trust. The core value is reducing wrong turns, not replacing source docs.
What is the biggest warning before ai-sdk install?
Do not trust old examples or model memory about AI SDK syntax. The repository repeatedly emphasizes checking installed docs and source first. This is not a minor caution; it is central to correct ai-sdk install and implementation.
Should I install all provider packages up front?
Usually no. Start with ai, then add provider or client packages only when your use case requires them. This keeps dependency choice intentional and avoids carrying outdated assumptions into your setup.
Is this mainly for chat apps?
No. Chat is a common use case, but the skill also fits structured generation, tool calling, agents, embeddings, gateway-based model access, and streamed server responses.
How is this different from asking an LLM to write AI SDK code?
A generic prompt may confidently generate obsolete APIs. This skill is better because it pushes a verification workflow: local docs, current references, known migration traps, and targeted file reading. That improves trust and lowers rework.
Does it help with React and useChat?
Yes, but with a caution: useChat has changed significantly. Treat older snippets with suspicion and use the skill to verify the current shape before copying UI examples.
When should I not use this ai-sdk guide?
Skip it if your problem is mostly vendor billing, model evaluation strategy, or non-JS platform integration. Use it when your blocker is current AI SDK implementation detail.
How to Improve ai-sdk skill
Give versioned context, not just goals
The fastest way to improve results is to include exact versions, especially for ai and any related packages. Many failures come from asking for "AI SDK code" without stating whether you're on a newer release or migrating older code.
Ask for minimal working slices first
Better than "build my full agent app":
- "show the smallest `generateText` example"
- "add one tool"
- "then stream it"
- "then wire `useChat`"
This incremental workflow makes the ai-sdk guide much more effective because each step can be checked against current docs before complexity compounds.
Surface errors verbatim
If something breaks, include the exact error and the relevant snippet. The common-errors.md reference exists because many issues come from near-miss API names. One precise error often tells the skill whether you are using old docs, wrong package imports, or outdated options.
Say whether you want gateway or direct provider setup
A lot of ambiguity disappears if you specify one of these up front:
- "Use Vercel AI Gateway first"
- "Use direct OpenAI provider package"
- "Keep provider choice abstract for now"
That changes install commands, model selection, and example structure.
Be explicit about runtime and framework boundaries
For stronger ai-sdk usage help, state:
- server-only or client + server
- Next.js App Router or another framework
- edge or Node runtime
- TypeScript strictness
- whether tools call internal APIs or external services
These details affect what "correct" code looks like.
Common failure modes to watch for
The main quality killers are:
- relying on stale `useChat` examples
- copying deprecated option names
- hardcoding old model IDs
- installing too many packages too early
- asking for agent code without defining tools and stop conditions
- debugging with console logs instead of run traces
If you avoid these, the ai-sdk skill becomes much more reliable.
Ask the skill to compare two implementation paths
A strong improvement tactic is to request a decision, not just code. For example:
Compare ai-sdk usage for (A) gateway-first quick setup and (B) direct provider setup in my Next.js app. Recommend one based on fast prototyping, future portability, and minimal package count.
This kind of prompt produces better adoption guidance than "show me the docs."
Iterate after the first answer with evidence
After the first output, improve quality by replying with one of:
- current file tree
- installed package list
- exact failing request
- captured `.devtools/generations.json` excerpt
- a local docs excerpt from `node_modules/ai/docs/`
That evidence-based iteration is the best way to turn the ai-sdk skill from general guidance into implementation-grade help.
