# Backend-agnostic AI streaming for React
Build ChatGPT-style interfaces with OpenAI, Anthropic, Groq, or your own backend.
Your frontend should not care whether responses come from OpenAI, Anthropic, Groq, FastAPI, Node.js, or your own inference server. react-ai-stream keeps the React layer completely decoupled from whatever produces the stream.
```sh
npm install @react-ai-stream/react
```

## 10 lines to a streaming chat
Install, point at your backend, render. That's the full mental model.
```tsx
'use client'
import { useAIChat } from '@react-ai-stream/react'
import { Chat } from '@react-ai-stream/ui'
import '@react-ai-stream/ui/styles'

export default function Page() {
  const { messages, sendMessage, loading, stop } = useAIChat({
    endpoint: '/api/chat', // OpenAI, Anthropic, Groq, FastAPI — any streaming endpoint
  })

  return <Chat messages={messages} onSend={sendMessage} onStop={stop} loading={loading} />
}
```

The hook returns plain data. You can drop `<Chat />` in for zero-config, or wire `messages` to any UI you already have. Full quickstart →
## Who is this for?
- **Embedded product chat:** a sidebar assistant in a SaaS product, a floating chat widget, an inline doc helper
- **Comparing providers:** run three providers in parallel, judge response quality side by side, weigh cost/speed tradeoffs (see the sketch below)
- **Bring-your-own UI:** you already have a design system, don't want locked-in UI, and need isolated chat instances
- **Non-Node backends:** Python/FastAPI, Go, Rails, or any server that speaks HTTP+SSE
- **Internal tools:** employee Q&A bots, knowledge base search, report generation
- **Operational flexibility:** swap models without frontend deploys, route by region or topic, stream long answers
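Here's a minimal sketch of the parallel-provider case. It uses only the `useAIChat` API from the quickstart; the three endpoint paths and the message fields (`role`, `content`) are assumptions for illustration, not confirmed API.

```tsx
'use client'
import { useAIChat } from '@react-ai-stream/react'

// Hypothetical per-provider routes; each would speak the same SSE protocol.
const ENDPOINTS = ['/api/chat/openai', '/api/chat/anthropic', '/api/chat/groq']

function ProviderColumn({ endpoint }: { endpoint: string }) {
  // Each useAIChat call keeps its own message store, so columns never interfere.
  const { messages, sendMessage, loading } = useAIChat({ endpoint })
  return (
    <section>
      <h3>{endpoint}</h3>
      {messages.map((m, i) => (
        // role/content fields are assumed, not confirmed by this page
        <p key={i}>{m.role}: {m.content}</p>
      ))}
      <button disabled={loading} onClick={() => sendMessage('Hello!')}>
        Ask
      </button>
    </section>
  )
}

export default function Compare() {
  return (
    <div style={{ display: 'flex', gap: 16 }}>
      {ENDPOINTS.map((e) => (
        <ProviderColumn key={e} endpoint={e} />
      ))}
    </div>
  )
}
```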
## Why backend-agnostic matters
Most AI chat libraries are secretly backend libraries. They stream from OpenAI directly, or through their own cloud, or via a specific server adapter. The React hook is just a thin client on top of one particular provider.
react-ai-stream takes a different approach: the hook speaks a simple HTTP streaming protocol. Any server that produces that protocol works.
```
data: {"type":"text","text":"Hello"}
data: {"type":"text","text":", world"}
data: {"type":"done"}
```

Three event types. That's the entire contract between your server and your React component. This means:
- Switch providers without touching React. OpenAI → Anthropic → your own model? Change the API route, not the frontend.
- No API keys in the browser. The hook talks to your server, which talks to the LLM.
- Multiple providers simultaneously. Run `useAIChat` three times with three endpoints — each instance is fully isolated.
- Any backend language. FastAPI, Go, Rails, Cloudflare Workers — if it can stream SSE, it works (see the server sketch below).
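To make the contract concrete, here's a minimal server sketch that speaks the three-event protocol. It's written as a Next.js route handler; the file path and the hard-coded tokens are assumptions standing in for a real model call.

```ts
// app/api/chat/route.ts: a minimal sketch. Swap the hard-coded tokens
// for chunks streamed from OpenAI, Anthropic, or your own model.
export async function POST(req: Request) {
  await req.json() // the client's { messages } payload; unused in this sketch

  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      // Emit one SSE event per protocol message: "data: <json>\n\n".
      const send = (event: object) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`))

      send({ type: 'text', text: 'Hello' })
      send({ type: 'text', text: ', world' })
      send({ type: 'done' })
      controller.close()
    },
  })

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  })
}
```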
Compare this to libraries that import `@anthropic-ai/sdk` directly in client code: your LLM choice is now coupled to your UI layer, visible in your bundle, and locked to that SDK's format.
Deep dive: Why backend-agnostic? →
## What you get
| | react-ai-stream | Vercel AI SDK |
|---|---|---|
| Bundle size | ~20 kB total | ~90 kB+ |
| Framework requirement | None — plain React | Next.js optimized |
| Multiple isolated chat instances | Per hook, zero config | Shared context |
| Bring your own UI | First-class | No |
| Event hooks (`onToken`, `onComplete`) | ✓ | Limited |
| Backend language | Any (HTTP+SSE) | Node.js preferred |
| Direct provider (no server) | ✓ dev only | ✓ |
Both are MIT and well-maintained. Choose react-ai-stream when you want portable streaming primitives with no framework opinions. Choose Vercel AI SDK when you need tight Next.js RSC integration.
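On the event-hooks row: here's a sketch of how per-token and per-completion callbacks might be wired. Only the names `onToken` and `onComplete` come from the table above; attaching them as `useAIChat` options, and their signatures, are assumptions.

```tsx
'use client'
import { useAIChat } from '@react-ai-stream/react'

export function LoggingChat() {
  const { sendMessage } = useAIChat({
    endpoint: '/api/chat',
    // Assumed signatures: fired once per streamed chunk, once on completion.
    onToken: (token: string) => console.debug('chunk:', token),
    onComplete: (message: unknown) => console.debug('reply finished:', message),
  })
  return <button onClick={() => sendMessage('Hi')}>Send</button>
}
```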
## Three packages, one idea
| Package | Role | Size |
|---|---|---|
| `@react-ai-stream/core` | SSE parser, message store, abort utils | ~6 kB |
| `@react-ai-stream/react` | `useAIChat` hook + `AIChatProvider` context | ~8 kB |
| `@react-ai-stream/ui` | `<Chat>`, `<MessageList>`, `<MarkdownRenderer>` | ~12 kB |
`@react-ai-stream/react` has no dependency on `@react-ai-stream/ui`. Use the hook alone and wire it to Tailwind, shadcn/ui, or anything else. The UI package is optional.
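For example, here's a hook-only chat with hand-rolled markup and Tailwind classes; the message fields (`role`, `content`) are assumed from the "plain data" description in the quickstart.

```tsx
'use client'
import { useState } from 'react'
import { useAIChat } from '@react-ai-stream/react'

export function PlainChat() {
  const { messages, sendMessage, loading, stop } = useAIChat({ endpoint: '/api/chat' })
  const [draft, setDraft] = useState('')

  return (
    <div className="mx-auto max-w-xl space-y-2">
      {messages.map((m, i) => (
        // role/content fields are assumed, not confirmed by this page
        <p key={i}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault()
          sendMessage(draft)
          setDraft('')
        }}
      >
        <input value={draft} onChange={(e) => setDraft(e.target.value)} />
        {loading ? (
          <button type="button" onClick={stop}>Stop</button>
        ) : (
          <button type="submit">Send</button>
        )}
      </form>
    </div>
  )
}
```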