chat-ui
by inference.sh
chat-ui provides ready-made chat interface components for React/Next.js from ui.inference.sh, including chat containers, messages, inputs, typing indicators, and avatars. It helps developers quickly implement modern chat and AI assistant UIs with streaming-friendly message lists and input handling.
Overview
What is chat-ui?
chat-ui is a set of chat interface building blocks for React and Next.js, sourced from ui.inference.sh. It packages production-ready components such as a chat container, message items, input box, typing indicators, and avatars so you can assemble a complete chat or AI assistant frontend without starting from scratch.
These components are delivered as shadcn-style blocks you add to your project, giving you a familiar, tailorable developer experience while handling the repetitive parts of chat UI layout and behavior.
Key features at a glance
- Chat container layout – A ready-made `<ChatContainer>` component that structures the full chat area.
- Message rendering – `<ChatMessage>` for displaying user and assistant messages with support for roles and content.
- Chat input handling – `<ChatInput>` to capture user text, handle submit actions, and manage disabled/loading states.
- Typing indicator – `<TypingIndicator>` for visual feedback while the assistant is responding.
- Streaming-friendly – Message list and container patterns suitable for streaming and incremental assistant responses.
- React/Next.js focused – Designed for modern React and Next.js applications, aligning with common frontend patterns.
Who is chat-ui for?
chat-ui is aimed at:
- Frontend developers building chat or AI assistant interfaces on top of existing APIs.
- Product teams who want a polished chat layout without spending days on UI wiring.
- Next.js and React developers adopting shadcn-style UI blocks and wanting consistent chat components.
If you are integrating language models, support bots, or internal tools that rely on conversational interfaces, the chat-ui skill helps you get a professional interface running quickly while keeping full control over logic, styling, and data.
What problems does chat-ui solve?
Implementing a chat frontend usually means re-creating the same pieces repeatedly:
- Structuring a responsive chat layout
- Displaying user vs assistant messages differently
- Handling message submission, disabled states, and loading
- Showing typing indicators and maintaining scroll behavior
chat-ui solves the UI part by providing:
- Predefined components for container, messages, input, and typing
- A consistent API surface to connect to your own backend or AI service
- A starting point that matches modern design expectations, while remaining customizable
You still own the business logic and data flow, but you skip the boilerplate UI work.
When chat-ui is a good fit
Use chat-ui when:
- You are building a React or Next.js application.
- You want ready-to-use chat components that integrate with your existing state and API calls.
- You already have, or plan to have, a chat or AI assistant backend and need a frontend layer.
- You prefer component-level control rather than embedding a full chat SaaS widget.
It is especially helpful in:
- AI copilots and assistants embedded in dashboards
- Customer support or internal helpdesk tools
- Developer tools with conversational interfaces
- Prototyping new chat-based products quickly
When chat-ui may not be ideal
chat-ui may not be the best choice if:
- You are not using React or Next.js. The components are built for the React ecosystem.
- You want a hosted, plug-and-play chat widget that includes backend, auth, and storage. chat-ui focuses on frontend UI only.
- You prefer a design system–agnostic implementation that does not follow shadcn-style patterns.
In these cases, consider a generic UI library or a turnkey chat provider instead.
How to Use
Installation prerequisites
Before installing chat-ui, you should have:
- A React or Next.js project already created
- Node.js and npm (or compatible) installed
- Familiarity with importing and using React components
The chat-ui components are distributed via the shadcn-style registry at ui.inference.sh, and you add them to your project through a single CLI command.
Install chat-ui components
Run the following command in the root of your frontend project to add the chat components:
```bash
# Install chat components
npx shadcn@latest add https://ui.inference.sh/r/chat.json
```
This command pulls the chat-related blocks from the ui.inference.sh registry into your project. It will typically create entries under a path similar to `@/registry/blocks/chat/` (as reflected in the import paths shown in the examples).
After installation, confirm that your project now includes files like:
- `chat-container.tsx`
- `chat-message.tsx`
- `chat-input.tsx`
- `typing-indicator.tsx`
The exact filenames can vary depending on your shadcn configuration, but the imports will follow the patterns shown below.
Core components and basic usage
1. ChatContainer – overall layout
`ChatContainer` wraps your full conversation area. It is responsible for organizing how messages and inputs are laid out.
```tsx
import { ChatContainer } from "@/registry/blocks/chat/chat-container"

function Chat() {
  return (
    <ChatContainer>
      {/* messages and input go here */}
    </ChatContainer>
  )
}
```
You can nest your message list, input, and other UI elements inside `ChatContainer`. Treat it as the base layout for the chat experience.
2. ChatMessage – render user and assistant messages
`ChatMessage` displays individual messages and differentiates between roles, such as user and assistant.
```tsx
import { ChatMessage } from "@/registry/blocks/chat/chat-message"

function Messages({ messages }) {
  return (
    <div>
      {messages.map((message) => (
        <ChatMessage
          key={message.id}
          role={message.role}
          content={message.content}
        />
      ))}
    </div>
  )
}
```
You are responsible for managing the `messages` array and passing each message's `role` and `content` to `ChatMessage`. This keeps the UI in sync with your backend or AI model.
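As a sketch of that state shape (the `Message` type and helper below are illustrative assumptions, not part of the installed blocks), you might model messages like this:

```typescript
// Illustrative message shape; ChatMessage only needs a role and content.
type Role = "user" | "assistant"

interface Message {
  id: string
  role: Role
  content: string
}

// Pure helper that appends a message immutably, which keeps
// React state updates predictable.
function appendMessage(messages: Message[], message: Message): Message[] {
  return [...messages, message]
}
```

Keeping updates immutable like this lets React detect the change and re-render the message list.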
3. ChatInput – capture and submit user messages
`ChatInput` provides the text input area and submission handling.
```tsx
import { ChatInput } from "@/registry/blocks/chat/chat-input"

function ChatBox({ onSend, isLoading }) {
  return (
    <ChatInput
      onSubmit={(message) => onSend(message)}
      placeholder="Type a message..."
      disabled={isLoading}
    />
  )
}
```
Use the `onSubmit` callback to integrate with your message-sending logic. The `disabled` prop is useful while you wait for a response, especially if you are streaming tokens from an AI model.
4. TypingIndicator – show assistant activity
`TypingIndicator` lets users know that the assistant or other party is composing a reply.
```tsx
import { TypingIndicator } from "@/registry/blocks/chat/typing-indicator"

function Footer({ isTyping }) {
  return <>{isTyping && <TypingIndicator />}</>
}
```
Set `isTyping` based on your application state — for example, while awaiting server responses or streaming completions.
Putting it together: a simple chat flow
Here is a simplified outline of how you might combine these components in a React or Next.js page:
```tsx
import { useState } from "react"

import { ChatContainer } from "@/registry/blocks/chat/chat-container"
import { ChatMessage } from "@/registry/blocks/chat/chat-message"
import { ChatInput } from "@/registry/blocks/chat/chat-input"
import { TypingIndicator } from "@/registry/blocks/chat/typing-indicator"

export function ChatPage() {
  const [messages, setMessages] = useState([])
  const [isTyping, setIsTyping] = useState(false)

  async function handleSend(userMessage) {
    const nextMessages = [
      ...messages,
      { id: Date.now(), role: "user", content: userMessage },
    ]
    setMessages(nextMessages)
    setIsTyping(true)

    try {
      // Call your backend or AI service here
      const reply = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: nextMessages }),
      }).then((res) => res.json())

      setMessages((prev) => [
        ...prev,
        { id: Date.now() + 1, role: "assistant", content: reply.content },
      ])
    } finally {
      // Clear the typing indicator even if the request fails
      setIsTyping(false)
    }
  }

  return (
    <ChatContainer>
      <div>
        {messages.map((message) => (
          <ChatMessage
            key={message.id}
            role={message.role}
            content={message.content}
          />
        ))}
      </div>
      <ChatInput
        onSubmit={handleSend}
        placeholder="Ask me anything..."
        disabled={isTyping}
      />
      {isTyping && <TypingIndicator />}
    </ChatContainer>
  )
}
```
This example demonstrates how chat-ui focuses on the UI primitives, while you supply the state management and API integration.
Customization and styling
Because chat-ui components come from ui.inference.sh as shadcn-style blocks, you can usually:
- Inspect and edit the component source locally to tweak layout, colors, or typography.
- Integrate with your existing design tokens or Tailwind CSS setup.
- Extend components with additional props or wrappers to handle avatars, timestamps, or message actions.
Check the generated files in your project after running the install command to see exactly how the components are implemented and how best to align them with your design system.
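For instance, if you extend the message component with timestamps, a small formatting helper (hypothetical, not shipped with the blocks) could prepare the value before you render it alongside `ChatMessage`:

```typescript
// Hypothetical helper for a timestamp extension to ChatMessage.
// Formats a Unix-millisecond timestamp as a short "HH:MM" label.
function formatMessageTime(timestampMs: number): string {
  const date = new Date(timestampMs)
  const hours = String(date.getHours()).padStart(2, "0")
  const minutes = String(date.getMinutes()).padStart(2, "0")
  return `${hours}:${minutes}`
}
```

Because the component source lives in your project, you can wire a helper like this directly into the block's markup rather than wrapping it from outside.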
FAQ
How do I install chat-ui in an existing React or Next.js project?
Run the shadcn CLI command from your project root:
```bash
npx shadcn@latest add https://ui.inference.sh/r/chat.json
```
This adds the chat-ui components from ui.inference.sh into your codebase, typically under a `registry/blocks/chat` path. You can then import `ChatContainer`, `ChatMessage`, `ChatInput`, and `TypingIndicator` using the documented import paths.
Does chat-ui work without Next.js, in plain React?
Yes, chat-ui components are standard React components. As long as your environment supports React, you can integrate them into a plain React SPA. The important part is that you can run the `npx shadcn@latest add ...` command and that your bundler resolves the generated imports correctly.
Does chat-ui include any backend or AI logic?
No. chat-ui only provides the frontend UI components. You are responsible for:
- Managing the `messages` state
- Calling your backend, API, or AI model
- Handling streaming, errors, and authentication
This separation makes chat-ui flexible: you can pair it with any chat backend, from custom APIs to third-party AI platforms.
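As a rough sketch of the streaming part (the names and shapes below are assumptions, not part of chat-ui), a pure helper can fold incoming token chunks into the last assistant message so the UI grows incrementally:

```typescript
// Illustrative message shape for the sketch below.
interface StreamMsg {
  id: string
  role: "user" | "assistant"
  content: string
}

// Applies one streamed chunk: extends the in-progress assistant
// message if it exists, otherwise starts a new one.
function applyStreamChunk(
  messages: StreamMsg[],
  chunk: string,
  assistantId: string
): StreamMsg[] {
  const last = messages[messages.length - 1]
  if (last && last.role === "assistant" && last.id === assistantId) {
    return [
      ...messages.slice(0, -1),
      { ...last, content: last.content + chunk },
    ]
  }
  return [...messages, { id: assistantId, role: "assistant", content: chunk }]
}
```

Each time a chunk arrives from your backend, calling something like `setMessages((prev) => applyStreamChunk(prev, chunk, id))` re-renders the message list with the growing content.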
Can I customize the appearance of chat-ui components?
Yes. The components imported via the ui.inference.sh registry are regular React components stored in your project. You can open the corresponding files (for example, `chat-container.tsx`, `chat-message.tsx`, `chat-input.tsx`, and `typing-indicator.tsx`) and:
- Adjust layout and spacing
- Change colors, fonts, and borders
- Integrate additional UI elements like avatars, timestamps, or message status
Because you own the source, there is no lock-in on styling.
Is chat-ui suitable for building AI assistants and copilots?
Yes. chat-ui is a natural fit for AI assistants, copilots, and similar conversational tools. It provides:
- A flexible chat container and message presentation
- Input and typing indicator components well-suited for streaming responses
You connect these pieces to your AI backend, handle streaming or incremental updates in your state, and let chat-ui handle the user-facing interface.
When should I choose a different solution instead of chat-ui?
Consider alternatives if:
- You need a fully managed chat widget (UI plus backend, database, and auth) with minimal engineering.
- Your stack is not based on React (for example, Vue, Svelte, or vanilla server rendering without React).
- You want components that are tightly coupled to a specific backend product.
In those cases, a dedicated chat platform or a UI library specific to your framework may be a better match.
Where can I learn more about chat-ui components?
Within the skill, the primary reference is SKILL.md, which outlines the available components and usage code snippets. After installation, your local component files become the best documentation source, since they show exactly how the chat-ui blocks are structured in your project and how you can extend or modify them.
