# Express (Node.js)
`@react-ai-stream/express` is a one-line Express middleware that handles RAIS-compliant SSE streaming. It reads `req.body.messages`, calls your chosen provider, and pipes the stream back to the client with the proper headers.
## Install

```bash
npm install @react-ai-stream/express
```

Express itself is a peer dependency — install it separately if you haven't already.
## Usage

```ts
import express from 'express'
import { raisMiddleware } from '@react-ai-stream/express'

const app = express()
app.use(express.json())

app.post('/api/chat', raisMiddleware({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
}))

// Anthropic:
// app.post('/api/chat', raisMiddleware({
//   provider: 'anthropic',
//   apiKey: process.env.ANTHROPIC_API_KEY,
//   model: 'claude-sonnet-4-6',
// }))

app.listen(3001)
```

## What it does
- Reads `req.body.messages` (an array of `{role, content}` objects)
- Sets `Content-Type: text/event-stream`, `Cache-Control: no-cache`, and `X-Accel-Buffering: no`
- Pipes the provider's `AsyncIterable<StreamChunk>` to the SSE response (sketched below)
- Aborts the LLM stream when the client disconnects
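Conceptually, the streaming part boils down to something like the following. This is a simplified sketch of the idea, not the middleware's actual source, and the `StreamChunk` shape with a `text` field is an assumption:

```ts
import type { Response } from 'express'

type StreamChunk = { text: string } // assumed shape

// Write an async iterable of chunks to the response as SSE frames.
async function pipeToSSE(res: Response, chunks: AsyncIterable<StreamChunk>) {
  res.setHeader('Content-Type', 'text/event-stream')
  res.setHeader('Cache-Control', 'no-cache')
  res.setHeader('X-Accel-Buffering', 'no') // keep nginx from buffering the stream
  res.flushHeaders() // send headers now so the client can start reading

  for await (const chunk of chunks) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`) // one SSE frame per chunk
  }
  res.end()
}
```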
## API

### `raisMiddleware(options)`
| Option | Type | Description |
|---|---|---|
| `provider` | `'openai' \| 'anthropic'` | Use a built-in provider |
| `apiKey` | `string` | Provider API key |
| `model` | `string` | Model name (optional, has a sensible default) |
| `handler` | `(messages, signal) => AsyncIterable<StreamChunk>` | Custom stream handler |
Pass either `provider` + `apiKey` or a `handler` — not both.
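As a sketch of the `handler` form, here is a toy handler that streams a canned reply word by word, extending the `app` from the Usage snippet above. The `(messages, signal)` signature is from the table; the chunk shape is an assumption:

```ts
app.post('/api/chat', raisMiddleware({
  handler: async function* (messages, signal) {
    const reply = `You sent ${messages.length} message(s).`
    for (const word of reply.split(' ')) {
      if (signal.aborted) return // client disconnected; stop producing chunks
      yield { text: word + ' ' } // chunk shape assumed
      await new Promise((r) => setTimeout(r, 50)) // simulate token latency
    }
  },
}))
```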
Client
Point useAIChat at your Express server:
const chat = useAIChat({ endpoint: 'http://localhost:3001/api/chat' })For same-origin Next.js apps, use a relative URL:
const chat = useAIChat({ endpoint: '/api/chat' })Add cors middleware if your React app runs on a different port than Express.
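A minimal setup with the `cors` package might look like this, added to the `app` from the Usage snippet. The origin below is an example (Vite's default dev port); match it to wherever your React app is served:

```ts
import cors from 'cors' // npm install cors

// Allow the React dev server's origin to call the Express endpoint.
app.use(cors({ origin: 'http://localhost:5173' }))
```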
## Abort handling

The middleware listens for the request's `close` event (`req.on('close')`) and aborts the provider stream when the client disconnects or clicks Stop. This cancels the upstream LLM call, so you don't pay for tokens that nobody reads.
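If you roll the route by hand or want to replicate this behavior in a custom `handler`, the wiring looks roughly like this. It is a sketch, not the middleware's source; `streamFromProvider` is a hypothetical stand-in for your LLM client, assumed to accept an `AbortSignal` (as `fetch()` and most provider SDKs do):

```ts
import express from 'express'

// Hypothetical streaming LLM client call that honors an AbortSignal.
declare function streamFromProvider(
  messages: { role: string; content: string }[],
  signal: AbortSignal,
): AsyncIterable<{ text: string }>

const app = express()
app.use(express.json())

app.post('/api/chat', async (req, res) => {
  const controller = new AbortController()
  req.on('close', () => controller.abort()) // client disconnected or hit Stop

  res.setHeader('Content-Type', 'text/event-stream')
  try {
    // Forwarding the signal is what actually cancels the upstream request.
    for await (const chunk of streamFromProvider(req.body.messages, controller.signal)) {
      res.write(`data: ${JSON.stringify(chunk)}\n\n`)
    }
  } catch (err) {
    if (!controller.signal.aborted) throw err // aborts are expected, not errors
  }
  res.end()
})
```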