# Streaming
Lyntaris uses two streaming patterns in production.
## 1) Prediction SSE (`/api/v1/prediction/{chatflowId}`)

Set `"streaming": true` in the prediction request body:
```json
{
  "question": "Tell me a story",
  "streaming": true,
  "overrideConfig": {
    "sessionId": "session-1"
  }
}
```
The server responds as SSE (`Content-Type: text/event-stream`). Each `data:` line contains a JSON object of the form `{ "event": "...", "data": ... }`.
Typical events:
- `start`
- `token`
- `metadata`
- `sourceDocuments`
- `usedTools`
- `error`
- `end`
Additional runtime events may be emitted (for example `agentFlowEvent`, `usageMetadata`, `tts_start`, `tts_data`, `tts_end`).
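Consuming this stream can be sketched as follows. This is a minimal illustration, assuming each `data:` line carries one complete JSON object of the shape described above; real SSE frames may split a payload across several `data:` lines.

```python
import json

def parse_sse_lines(lines):
    """Collect (event, data) pairs from SSE 'data:' lines.

    Sketch only: assumes one complete JSON object per data: line.
    """
    events = []
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        payload = json.loads(line[len("data:"):].strip())
        events.append((payload["event"], payload.get("data")))
    return events

# Hypothetical stream excerpt for demonstration:
sample = [
    'data: {"event": "start", "data": ""}',
    'data: {"event": "token", "data": "Once upon"}',
    'data: {"event": "end", "data": ""}',
]
tokens = [d for e, d in parse_sse_lines(sample) if e == "token"]
print("".join(tokens))  # prints "Once upon"
```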
## 2) Realtime-v1 protocol (Flowise -> Unity)
The Unity parser expects the streamed tokens to assemble into this final structure:
```json
{
  "text": "Conversational text for TTS",
  "tool_calls": [
    {
      "name": "LoadFlowiseImageCommand",
      "args": {
        "arg1Name": "arg1Value"
      }
    }
  ],
  "error": "Optional error string"
}
```
Rules used by Unity:
- Tokens in `text` are sent to the TTS buffer immediately.
- `tool_calls` are buffered, then executed once the array is complete.
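These two rules can be sketched with a hypothetical incremental parser. The real parser lives in Unity (C#); this Python version is illustrative only and assumes the `"text"` value contains no escaped quotes.

```python
import json
import re

class RealtimeV1Parser:
    """Illustrative sketch of the Unity-side rules (not the actual code)."""

    def __init__(self, on_tts, on_tool_call):
        self.buf = ""     # all tokens received so far
        self.sent = 0     # chars of "text" already forwarded to TTS
        self.done = False # full JSON document parsed
        self.on_tts = on_tts
        self.on_tool_call = on_tool_call

    def feed(self, chunk):
        self.buf += chunk
        # Rule 1: stream newly arrived "text" characters to TTS immediately.
        m = re.search(r'"text"\s*:\s*"', self.buf)
        if m:
            start = m.end()
            close = self.buf.find('"', start + self.sent)  # no-escape assumption
            upto = close if close != -1 else len(self.buf)
            new = self.buf[start + self.sent:upto]
            if new:
                self.on_tts(new)
                self.sent += len(new)
        # Rule 2: buffer tool_calls; execute only once the document is complete.
        if not self.done:
            try:
                doc = json.loads(self.buf)
            except json.JSONDecodeError:
                return  # still streaming
            self.done = True
            for call in doc.get("tool_calls", []):
                self.on_tool_call(call["name"], call.get("args", {}))
```

Feeding the example payload in chunks would emit the `text` fragments to TTS as they arrive, while `LoadFlowiseImageCommand` fires only after the closing `}` of the document.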
## Realtime queue API (external triggering)
Base path: `/api/v1/openai-realtime/{chatflowId}`
- `POST /queue/enqueue`
- `GET /queue/status`
- `POST /queue/toggle`
- `POST /queue/complete`
- `POST /queue/cancel`
- `POST /queue/clear`
- `POST /queue/remove`
`enqueue` requires a `form` object with a non-empty `ID` field (the key is matched case-insensitively and normalized to `ID`):
```json
{
  "form": {
    "ID": "customer-123",
    "question": "Hey, how are you?"
  },
  "ccBaseUrl": "https://optional-unity-base.example.com"
}
```
Success response:
```json
{
  "accepted": true,
  "taskId": "flowise-4ed0073e-b9a8-49e9-a086-819e77990780",
  "pendingCount": 1
}
```
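A client-side sketch of calling `enqueue`, using only the Python standard library. The base URL and chatflow ID are placeholders for your deployment; the path and the non-empty-`ID` rule come from the section above.

```python
import json
from urllib import request

BASE_URL = "http://localhost:3000"  # assumption: your Flowise/Lyntaris host

def build_enqueue_request(chatflow_id, form, cc_base_url=None):
    """Build a POST request for /queue/enqueue; sending is left to the caller.

    Enforces the documented rule that form must carry a non-empty ID
    (key matched case-insensitively).
    """
    id_value = next((v for k, v in form.items() if k.lower() == "id"), None)
    if not str(id_value or "").strip():
        raise ValueError("form must contain a non-empty ID")
    payload = {"form": form}
    if cc_base_url:
        payload["ccBaseUrl"] = cc_base_url
    url = f"{BASE_URL}/api/v1/openai-realtime/{chatflow_id}/queue/enqueue"
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_enqueue_request(
    "my-chatflow",  # placeholder chatflow ID
    {"ID": "customer-123", "question": "Hey, how are you?"},
)
print(req.full_url)
# To send: json.loads(request.urlopen(req).read())
# which should yield the {"accepted": ..., "taskId": ...} shape shown above.
```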