# Chat
Real-time AI chat with streaming responses.
## Overview
The chat feature provides real-time AI conversations using Server-Sent Events (SSE) for streaming responses.
## Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Non-streaming chat |
| `/api/chat/stream` | POST | Streaming chat (SSE) |
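A client consuming the streaming endpoint has to split the SSE response into `data:` frames. As a rough sketch, assuming each frame carries a JSON payload of the form `{ token }` (the payload shape is an assumption, not taken from the actual API):

```ts
// Hypothetical client-side helper: split a raw SSE chunk from
// /api/chat/stream into the token strings it carries.
// Assumes each frame looks like: data: {"token":"..."}\n\n
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split('\n\n') // SSE frames are separated by a blank line
    .filter((frame) => frame.startsWith('data: '))
    .map((frame) => JSON.parse(frame.slice('data: '.length)).token);
}
```

The tokens can then be appended to the chat UI as they arrive.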
## How It Works

1. User sends a message from the chat UI.
2. Server validates the session and checks the credit balance.
3. Credits are deducted (10 credits per message).
4. The request is sent to Volcano Engine (Doubao model).
5. The response streams back to the client via SSE.
6. The message is saved to the `chatMessage` table.
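The server-side half of this flow can be sketched as two small helpers: one that enforces the 10-credits-per-message deduction, and one that wraps each model token in an SSE frame. The function names and the `{ token }` payload shape are illustrative, not taken from the codebase:

```ts
// Illustrative constant matching the pricing described above.
const CREDITS_PER_MESSAGE = 10;

// Deduct the per-message cost, rejecting users with too few credits
// (step 2-3 of the flow). Returns the new balance.
function deductCredits(balance: number): number {
  if (balance < CREDITS_PER_MESSAGE) {
    throw new Error('Insufficient credits');
  }
  return balance - CREDITS_PER_MESSAGE;
}

// Wrap a streamed model token as an SSE frame (step 5).
// JSON-encoding keeps newlines inside tokens from breaking the frame.
function sseFrame(token: string): string {
  return `data: ${JSON.stringify({ token })}\n\n`;
}
```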
## Data Model
### Chat Sessions (`chatSession`)
Each conversation is a session:
- `id` — Session identifier
- `userId` — Owner
- `model` — AI model used
- `totalCreditsUsed` — Running total of credits spent
### Chat Messages (`chatMessage`)
Individual messages within a session:
- `sessionId` — Parent session
- `role` — `user` or `assistant`
- `content` — Message text
- `creditsUsed` — Credits consumed for this message
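For reference, the two tables can be mirrored as TypeScript types built from the fields listed above; the actual schema (key types, nullability, extra columns) may differ:

```ts
// Illustrative shapes for the two tables described above.
type ChatSession = {
  id: string;
  userId: string;
  model: string;
  totalCreditsUsed: number;
};

type ChatMessage = {
  sessionId: string;
  role: 'user' | 'assistant'; // who authored the message
  content: string;
  creditsUsed: number; // credits consumed for this message
};

// Example record consistent with the 10-credits-per-message pricing.
const example: ChatMessage = {
  sessionId: 'sess_1',
  role: 'user',
  content: 'Hello',
  creditsUsed: 10,
};
```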
## Configuration
Model parameters can be adjusted in the API route:
```ts
const response = await chat({
  model: 'doubao-1-5-thinking-pro-250415',
  messages: [...],
  temperature: 0.7,
  top_p: 0.9,
  max_tokens: 2048,
});
```
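If these parameters are ever exposed to user input, it is worth clamping them before the `chat()` call. A minimal sketch, using typical OpenAI-style ranges (0–2 for `temperature`, 0–1 for `top_p`); verify the exact limits against the Volcano Engine documentation, and the 4096 cap here is purely an assumption:

```ts
// Hypothetical guard: clamp sampling parameters to typical safe ranges
// before passing them to chat(). Ranges are common defaults, not taken
// from Volcano Engine's documentation.
function clampParams(p: { temperature: number; top_p: number; max_tokens: number }) {
  const clamp = (v: number, lo: number, hi: number) => Math.min(Math.max(v, lo), hi);
  return {
    temperature: clamp(p.temperature, 0, 2),
    top_p: clamp(p.top_p, 0, 1),
    max_tokens: clamp(Math.floor(p.max_tokens), 1, 4096),
  };
}
```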