
develop-ai-functions-example

by vercel

develop-ai-functions-example helps you create or modify runnable AI SDK examples in vercel/ai under examples/ai-functions/src/. Use it to choose the right category, match repo conventions, and build minimal examples for provider validation, demos, or fixtures.

Stars: 23.1k
Added: Mar 31, 2026
Category: Skill Examples

Install command:

npx skills add vercel/ai --skill develop-ai-functions-example
Curation Score

This skill scores 67/100, which means it is acceptable to list for directory users who already work inside the Vercel AI repo, but it is not a strong standalone install decision page. The repository evidence shows real workflow content for creating or modifying examples under `examples/ai-functions/src`, with concrete example categories and likely execution guidance, so an agent can be steered toward the right area faster than with a generic prompt. However, the skill is marked `internal: true`, has an experimental/test signal, and lacks support files or an install command, which limits trust and portability for broader users.

Strengths
  • Clear trigger scope: the description says when to use it for creating, running, or modifying AI function examples under `examples/ai-functions/src`.
  • Grounded workflow structure: the skill maps example categories like `generateText`, `streamText`, `generateObject`, embeddings, images, speech, transcription, reranking, and agent workflows.
  • Substantial written guidance: SKILL.md is long, includes multiple workflow/constraint signals, code fences, and repo path references rather than placeholder text.
Cautions
  • Best fit is narrow: metadata marks it as `internal: true`, so it appears tailored to contributors inside the Vercel AI repository rather than general adopters.
  • Adoption clarity is limited: there is no install command and no support files, scripts, or references to help a new user run the workflow independently.
Overview


What develop-ai-functions-example is for

The develop-ai-functions-example skill is a focused build guide for creating or modifying runnable examples in examples/ai-functions/src/ inside the vercel/ai repository. Its job is not general prompting. It helps you produce examples that match the repository’s conventions for AI SDK function demos, provider validation, and test-like fixtures.

Who should use this skill

Use the develop-ai-functions-example skill if you are:

  • adding a new example for an AI SDK function
  • adapting an example to a different provider
  • validating whether a provider supports a feature
  • creating a minimal repro or fixture under the existing example tree
  • trying to stay aligned with repo structure instead of inventing your own pattern

If you just want to ask an LLM how generateText() works in the abstract, this skill is probably narrower than you need.

The real job-to-be-done

Most users are not looking for prose. They want a working example in the right folder, with the right shape, using the right SDK function, and with fewer repository-specific mistakes. develop-ai-functions-example is best viewed as a repo-aware implementation assistant for the AI Functions examples area.

What makes it different from a generic coding prompt

A generic prompt can draft example code, but it usually misses the local taxonomy and file-placement logic. This skill adds practical context such as:

  • the example categories already present under examples/ai-functions/src/
  • the expected mapping between feature type and target directory
  • the repository’s purpose for these examples: validation, demonstration, and fixtures
  • a bias toward minimal, runnable examples rather than broad tutorials

Best-fit scope

The strongest fit for develop-ai-functions-example is work in these categories:

  • generate-text/
  • stream-text/
  • generate-object/
  • stream-object/
  • agent/
  • embed/
  • embed-many/
  • generate-image/
  • generate-speech/
  • transcribe/
  • rerank/
  • middleware/

That scope matters. The skill is optimized for example development inside this repo layout, not for designing an app, SDK API surface, or production architecture.

How to Use develop-ai-functions-example skill

Install context and where it applies

Install develop-ai-functions-example in the context of the vercel/ai repository, or when you want to mirror its example structure. A common setup is:

npx skills add vercel/ai --skill develop-ai-functions-example

Then invoke the skill when your task is specifically about creating, editing, or validating example files under examples/ai-functions/src/.

Start by choosing the right example category

Before asking for code, identify the target function family. This is the highest-leverage decision because it determines both the folder and the example shape.

Examples:

  • plain one-shot text output → generate-text/
  • token or chunk streaming → stream-text/
  • schema-constrained structured output → generate-object/
  • streamed structured output → stream-object/
  • embeddings for one input → embed/
  • embeddings for batches → embed-many/
  • tool-using multi-step workflow → agent/

If you skip this step, the generated example may use the wrong API entirely.

What input the skill needs from you

The quality of develop-ai-functions-example's output depends heavily on how specific your request is. Provide:

  • target function, such as generateObject() or streamText()
  • provider and model, if known
  • whether the example is for validation, docs-style demonstration, or a test fixture
  • expected inputs and outputs
  • whether streaming, tools, schema validation, or middleware is required
  • any constraints on dependencies, file naming, or environment variables

A weak input:

  • “Make an AI example.”

A stronger input:

  • “Create a minimal generateObject() example under examples/ai-functions/src/generate-object/ that tests structured extraction from product review text using a Zod schema and a provider supported in this repo.”

Turn a rough goal into a skill-friendly prompt

A good develop-ai-functions-example prompt names the outcome, target directory, function, and validation intent.

Useful pattern:

  • what you want created
  • where it should live
  • which AI SDK function it should use
  • what provider assumptions are allowed
  • how minimal or production-like it should be
  • what success looks like

Example prompt:

“Use develop-ai-functions-example to create a minimal example in examples/ai-functions/src/stream-text/ showing streamText() with a provider already used in the repo. Keep it short, runnable, and clearly focused on streaming output rather than app integration. Include any required env vars and explain why this belongs in stream-text/ instead of generate-text/.”

Read SKILL.md first, then inspect nearby examples

This skill has only one visible support file: SKILL.md. Read it first because it captures the category map and the repo-specific intent. After that, inspect neighboring examples in the destination directory you plan to modify. That gives you the local naming and coding pattern faster than reading broad repo docs.

Practical reading order:

  1. skills/develop-ai-functions-example/SKILL.md
  2. target folder under examples/ai-functions/src/
  3. one or two sibling examples closest to your intended feature
  4. any provider-specific imports or env patterns used nearby

Match the example to its purpose

The repository uses examples for more than documentation. Your prompt should say which of these you are optimizing for:

  • provider support validation
  • feature demonstration
  • reproducible fixture
  • iterative experimentation

Why it matters:

  • validation examples should be minimal and explicit
  • demo examples should highlight the key API behavior
  • fixtures should avoid unnecessary variation
  • experiments can be rougher but still need correct folder placement

Keep the output minimal but runnable

The best outputs from develop-ai-functions-example usually do less, not more. Ask for:

  • one clear function call
  • only the imports needed
  • obvious inputs
  • small output handling
  • env variables called out explicitly
  • no framework scaffolding unless the folder already expects it

This reduces false positives where the model adds app code that hides the actual SDK behavior you wanted to demonstrate.

Repository paths to mention explicitly

Mention paths directly in your request when you want repo-aligned output. Useful paths include:

  • examples/ai-functions/src/generate-text/
  • examples/ai-functions/src/stream-text/
  • examples/ai-functions/src/generate-object/
  • examples/ai-functions/src/stream-object/
  • examples/ai-functions/src/agent/
  • examples/ai-functions/src/embed/
  • examples/ai-functions/src/embed-many/

Direct path references help the skill choose the right pattern instead of inventing a neutral example.

Practical workflow for creating a new example

A reliable workflow is:

  1. identify the target function family
  2. inspect sibling examples in that folder
  3. prompt the skill with folder, function, provider, and purpose
  4. generate the initial example
  5. trim anything not needed for validation or demonstration
  6. run it against the intended provider
  7. adjust naming, inputs, and env handling to match nearby files

This is usually faster than asking for a complete example blindly and then retrofitting it to the repo.

When to use this instead of an ordinary prompt

Use develop-ai-functions-example when the main risk is repository mismatch: wrong folder, wrong AI SDK function, wrong level of complexity, or a demo that does not fit the examples tree. Use an ordinary prompt when you need broader architectural help outside examples/ai-functions/src/.

develop-ai-functions-example skill FAQ

Is develop-ai-functions-example only for the vercel/ai repo?

It is most useful there because the skill is anchored to the example taxonomy in examples/ai-functions/src/. You can still use it as a pattern for your own examples repo, but the value drops if you do not care about matching this structure.

Is this skill beginner-friendly?

Yes, if you already know the feature you want to demonstrate. It narrows the decision space by mapping your goal to the right example category. It is less beginner-friendly if you are still deciding between text generation, structured generation, embeddings, agents, or middleware.

What does it do better than a plain coding request?

The main advantage is better repo fit. develop-ai-functions-example helps you avoid mistakes like putting a structured-output example in a plain text folder, using a non-streaming API for a streaming demo, or overbuilding a fixture with unrelated app code.

When should I not use develop-ai-functions-example?

Do not use it when you need:

  • production app architecture
  • UI integration guidance
  • a generic AI SDK tutorial
  • cross-repo refactoring advice
  • evaluation or benchmarking strategy beyond example creation

It is intentionally narrow.

Do I need to know the exact provider first?

Not always, but your output improves if you specify one. If you do not know, ask for a provider already used in nearby examples so the generated file is easier to validate against existing repo patterns.

Can it help with existing examples, not just new ones?

Yes. It is useful for modifying or tightening an existing example, especially when you want to change the SDK function, swap providers, simplify a repro, or align a file with the correct category.

How to Improve develop-ai-functions-example skill

Give the skill a concrete repository target

The single most effective improvement is to name the exact destination folder and function. Compare:

Weak:

  • “Add an example for object extraction.”

Strong:

  • “Use develop-ai-functions-example to add a minimal generateObject() example under examples/ai-functions/src/generate-object/ that extracts invoice fields from plain text and prints validated JSON.”

The stronger version removes ambiguity about API choice and file placement.

State the output purpose up front

Users care most about whether the example is meant to prove support, teach usage, or serve as a fixture. Put that in the first sentence of your prompt. The code shape often changes based on this decision.

Prevent overengineering in the first draft

A common failure mode is getting a response with extra wrappers, app scaffolding, or mixed concerns. Explicitly ask for:

  • minimal code
  • one responsibility
  • no UI
  • no framework setup
  • only essential comments

This keeps the example aligned with the examples tree rather than becoming a mini app.

Ask for local pattern matching

To improve results from develop-ai-functions-example, tell the model to imitate nearby examples in naming, imports, and env handling. That small instruction often matters more than asking for “best practices.”

Example:

  • “Match the style of sibling files in examples/ai-functions/src/stream-text/ and avoid introducing a new pattern unless required.”

Be explicit about inputs and expected output shape

Another common failure mode is vague demo logic. If you specify sample inputs and expected output, the example becomes easier to validate.

Better prompt detail:

  • input text or payload
  • expected response form
  • whether output should be streamed, logged, or schema-validated
  • what should count as success

Iterate by tightening, not broadening

After the first draft, improve by removing ambiguity:

  • replace “some provider” with a concrete provider
  • replace “an object” with a schema
  • replace “example script” with the target directory
  • replace “works” with a testable output expectation

The best iteration path for develop-ai-functions-example is usually narrower scope, not more features.

Check category fit before accepting the output

Before you keep the result, ask:

  • Is this the right folder for the function used?
  • Is the example minimal enough for validation?
  • Does it demonstrate one core behavior clearly?
  • Would a maintainer understand why it belongs here?

If not, revise the prompt before revising the code. That is often the faster fix.

Improve prompts with provider and environment assumptions

If the example depends on credentials or provider-specific behavior, say so directly. Otherwise the model may output code that is technically plausible but hard to run in the repository.

Useful additions:

  • required env var names
  • provider SDK assumptions
  • model name if stability matters
  • whether fallbacks are acceptable

Use sibling examples as acceptance criteria

A practical way to improve output quality is to judge the generated file against nearby examples, not against generic code standards. If it feels noticeably more complex, less focused, or structurally different from its siblings, ask for a rewrite aligned to the local pattern.
