agent-ui
by inferen-sh

agent-ui is a batteries-included Agent component for React/Next.js from ui.inference.sh. It ships a prebuilt chat/agent UI plus an SDK-friendly proxy route, so you can wire up AI assistants, SaaS copilots, and agentic UIs without hand-rolling the frontend or streaming logic.
Overview
What is agent-ui?
agent-ui is a batteries-included Agent component for React and Next.js provided by ui.inference.sh. Instead of hand-building an AI chat interface from scratch, you install a ready-made Agent UI along with a simple API proxy route powered by @inferencesh/sdk.
The component bundles the core pieces you typically need for modern AI assistants:
- A drop-in chat/agent interface
- Runtime wiring to an inference backend via a proxy route
- Support for streaming responses
- Room for tools, approvals, and more advanced agent flows (via the underlying ui.inference.sh setup)
If you want to ship AI assistants, SaaS copilots, or agentic UIs inside a Next.js app, agent-ui gives you a production-ready starting point with minimal boilerplate.
Who is agent-ui for?
agent-ui is designed for:
- Frontend developers who want a polished agent/chat UI in a React or Next.js app without designing components from scratch.
- Product teams building AI copilots or assistants into existing dashboards, SaaS products, or internal tools.
- API-focused engineers who prefer to configure a proxy route and environment variables rather than hand-coding client-side request logic.
You will be most comfortable with this skill if you already:
- Use Next.js (App Router) or a similar React stack
- Are familiar with environment variables and API keys
- Know how to configure routes and components in a TypeScript/React codebase
What problems does agent-ui solve?
agent-ui helps you avoid common friction points when implementing AI features:
- No custom chat UI required – skip building message lists, input boxes, and loading states from scratch.
- No ad-hoc fetch glue – the proxy route is handled via @inferencesh/sdk, so you avoid repeating streaming and error-handling logic.
- Easier configuration – pass an agentConfig object (model ref, description, system prompt) instead of manually threading config through the UI.
This makes agent-ui a strong choice when you want to move quickly from “we have an API key” to “we have a functioning agent UI.”
When is agent-ui a good fit?
agent-ui is a good fit when:
- You are building a Next.js app (especially with the app/ directory structure).
- You want a prebuilt agent UI that looks modern and is built with the shadcn-style ecosystem.
- You are comfortable configuring an inference proxy route and environment variables.
It may not be the best fit if:
- You are not using React/Next.js as your frontend stack.
- You need a fully custom-designed chat UI with bespoke interaction patterns that diverge heavily from a typical agent interface.
- You cannot expose or manage an INFERENCE_API_KEY in your environment.
How to Use
1. Skill installation in your agent workspace
To add the agent-ui skill into an Agent Skills–compatible workspace, install it via the skills CLI:
npx skills add https://github.com/inferen-sh/skills --skill agent-ui
This fetches the agent-ui skill metadata from the inferen-sh/skills repository. After installation, open SKILL.md in the ui/agent-ui folder for the upstream instructions.
2. Install the agent-ui component into a Next.js project
Inside your actual Next.js app (where you want to render the UI), install the Agent component from ui.inference.sh using the shadcn-style command referenced by the skill:
# Install the agent component
npx shadcn@latest add https://ui.inference.sh/r/agent.json
# Add the SDK for the proxy route
npm install @inferencesh/sdk
This does two things:
- Pulls in the Agent UI block (React component and related UI wiring) from ui.inference.sh.
- Installs @inferencesh/sdk, which you’ll use to create the inference proxy route in your Next.js app.
You can run these commands in any existing Next.js (App Router) project. Ensure you have Node.js, npm (or another package manager), and a working dev environment before running them.
3. Configure the API proxy route in Next.js
agent-ui expects your frontend to talk to a backend proxy route rather than calling the inference service directly from the browser. The skill documentation provides a minimal Next.js route based on @inferencesh/sdk:
// app/api/inference/proxy/route.ts
import { route } from '@inferencesh/sdk/proxy/nextjs';
export const { GET, POST, PUT } = route;
Implementation notes:
- Place this file at app/api/inference/proxy/route.ts in a Next.js App Router project.
- The route helper from @inferencesh/sdk/proxy/nextjs exposes the HTTP handlers (GET, POST, PUT) for you, so you don’t need custom routing logic.
- This endpoint becomes the proxyUrl you’ll pass to the Agent component.
If you use a different directory structure, keep the path consistent and update your proxyUrl accordingly.
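If you do relocate the route, the file contents stay the same; only the folder, and therefore the URL, changes. A sketch, assuming a hypothetical app/api/agent/ location:

```typescript
// app/api/agent/route.ts — same handlers, hypothetical alternative path
import { route } from '@inferencesh/sdk/proxy/nextjs';
export const { GET, POST, PUT } = route;

// The component must then point at the matching URL:
// <Agent proxyUrl="/api/agent" ... />
```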
4. Set the INFERENCE_API_KEY environment variable
Next, configure your inference API key in your local environment. The skill’s instructions reference an INFERENCE_API_KEY variable:
# .env.local
INFERENCE_API_KEY=inf_...
Steps:
- Create or open .env.local at the root of your Next.js project.
- Add your actual API key in place of inf_....
- Restart your dev server so changes to .env.local are picked up.
Make sure you keep this key secret and never commit .env.local to version control.
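Because a missing or misspelled key usually surfaces only as a failed request at runtime, it can help to fail fast on server startup. A minimal sketch; the requireEnv helper below is our own, not part of @inferencesh/sdk:

```typescript
// Fail fast if a required environment variable is absent (a sketch).
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. call once in server-side code before handling requests:
// const apiKey = requireEnv('INFERENCE_API_KEY');
```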
5. Render the Agent component in a page
Once the component and proxy route are in place, you can render the Agent UI within any Next.js page. The skill includes a concise example:
import { Agent } from "@/registry/blocks/agent/agent";

export default function Page() {
  return (
    <Agent
      proxyUrl="/api/inference/proxy"
      agentConfig={{
        core_app: { ref: 'openrouter/claude-haiku-45@0fkg6xwb' },
        description: 'a helpful ai assistant',
        system_prompt: 'you are helpful.',
      }}
    />
  );
}
Key parts to understand:
- proxyUrl: Points to the API proxy route you created (/api/inference/proxy). This is how the Agent UI sends and receives messages.
- agentConfig: An object that configures the underlying agent, including:
  - core_app.ref: A reference to the model or app used on the backend.
  - description: A human-readable description of the assistant.
  - system_prompt: A short system prompt that shapes behavior.
You can duplicate or adapt this page file in app/agent/page.tsx or any existing route in your app.
6. Customize and extend agent-ui
The skill’s SKILL.md mentions features such as tools, approvals, and widgets (via ui.inference.sh). To take advantage of these, you can iteratively:
- Adjust agentConfig with different model refs, descriptions, and system prompts.
- Explore the installed Agent block files under @/registry/blocks/agent/agent to see how the UI is built and what props are supported.
- Integrate the Agent UI into specific app flows (for example, a support dashboard, an onboarding copilot, or an internal operations assistant).
Because agent-ui is delivered as a React component, you can wrap it in layouts, modals, or tabs, or use your own navigation and authentication patterns around it.
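One lightweight way to manage these variations is to keep each persona’s configuration as a plain object and swap them on the agentConfig prop. The AgentConfig interface below is an assumption inferred from the example props in this document, not the SDK’s published type:

```typescript
// Shape inferred from the agentConfig example above — an assumption, not the SDK type.
interface AgentConfig {
  core_app: { ref: string };
  description: string;
  system_prompt: string;
}

// Two hypothetical personas sharing the model ref from the example:
const assistant: AgentConfig = {
  core_app: { ref: 'openrouter/claude-haiku-45@0fkg6xwb' },
  description: 'a helpful ai assistant',
  system_prompt: 'you are helpful.',
};

const supportCopilot: AgentConfig = {
  ...assistant,
  description: 'customer support copilot',
  system_prompt: 'You answer product questions concisely and escalate billing issues.',
};
```

Switching personas then becomes a one-line change to the agentConfig prop rather than an edit to the UI internals.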
7. Files to review after installation
After installing the agent-ui skill in your skills-aware workspace, the main file to open is:
- ui/agent-ui/SKILL.md – upstream quick-start and configuration details.
From there, you can map the instructions into your live Next.js project and tailor them to your own models and backend constraints.
FAQ
Does agent-ui require Next.js, or can I use it with plain React?
The skill’s setup walkthrough and proxy route example are written specifically for Next.js using the app/api route convention and @inferencesh/sdk/proxy/nextjs. While the UI itself is React-based, the documented and supported path in this skill is for Next.js with an App Router–style API route.
If you are using plain React or a different framework, you would need to re-create the proxy behavior yourself and adapt the example; that integration path is not documented in this skill.
How is agent-ui different from building a custom chat UI?
With agent-ui you:
- Install a prebuilt Agent component from ui.inference.sh.
- Wire it to an inference backend via a single proxy route and an environment variable.
You do not have to:
- Design and code a chat message list, input area, and streaming states.
- Hand-write fetch calls for sending and receiving messages.
You still retain configuration control through agentConfig, so you can change the model reference, description, and system prompt without touching the UI internals.
What is the role of @inferencesh/sdk in this setup?
@inferencesh/sdk powers the server-side proxy route:
import { route } from '@inferencesh/sdk/proxy/nextjs';
export const { GET, POST, PUT } = route;
By using this helper, you:
- Expose a single endpoint (/api/inference/proxy in the example) for your Agent UI to talk to.
- Delegate protocol details and HTTP method handling (GET, POST, PUT) to the SDK instead of custom code.
This makes it easier to maintain and modify your inference integration without rewriting the UI.
How do I change the model or behavior of the agent?
You update the agentConfig passed to the Agent component. For example:
<Agent
  proxyUrl="/api/inference/proxy"
  agentConfig={{
    core_app: { ref: 'openrouter/claude-haiku-45@0fkg6xwb' },
    description: 'a helpful ai assistant',
    system_prompt: 'you are helpful.',
  }}
/>
To change behavior, you can:
- Swap core_app.ref to a different supported model or app.
- Update the description to reflect the assistant’s role (e.g., “customer support copilot”).
- Refine the system_prompt to tune tone and task boundaries.
Consult your inference backend’s documentation for valid values and additional config options.
Is agent-ui suitable for production use?
The agent-ui skill exposes a real React/Next.js Agent component and a proxy route pattern oriented toward production-style apps. However, production readiness depends on how you:
- Manage API keys and environment variables.
- Add authentication, authorization, and rate limiting around /api/inference/proxy.
- Monitor, log, and secure traffic to your inference backend.
The skill gives you a solid starting point, but you should layer in your own security, observability, and error handling policies before going live.
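As one illustration, the proxy handlers can be gated before they are exported from the route file. The withAuth helper and the isAuthorized check below are our own placeholders for this sketch, not part of @inferencesh/sdk:

```typescript
// A minimal auth-gating wrapper for Next.js Route Handlers (a sketch).
type Handler = (req: Request) => Promise<Response>;

// Placeholder check — replace with your real session/auth logic.
async function isAuthorized(req: Request): Promise<boolean> {
  return req.headers.get('authorization') !== null;
}

// Wrap any handler so unauthenticated requests get a 401 instead of
// reaching the inference backend.
export function withAuth(handler: Handler): Handler {
  return async (req) => {
    if (!(await isAuthorized(req))) {
      return new Response('Unauthorized', { status: 401 });
    }
    return handler(req);
  };
}

// In app/api/inference/proxy/route.ts you could then export, for example:
// export const POST = withAuth(route.POST);
```

The same wrapper pattern extends naturally to rate limiting or request logging around the proxy.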
Where can I see the upstream documentation for agent-ui?
After installing the skill via:
npx skills add https://github.com/inferen-sh/skills --skill agent-ui
open:
ui/agent-ui/SKILL.md
That file is maintained in the inferen-sh/skills repository and contains the upstream quick start (installation commands, proxy route snippet, env configuration, and example component usage) for the agent-ui component.
