LLM Node

Provides direct access to a configured Large Language Model (LLM) for executing AI tasks, letting the workflow run generation or structured data extraction without the overhead of a full agent.
Functionality
This node sends structured requests to an LLM based on provided instructions (Messages) and context. It is well suited to text generation, summarization, translation, analysis, question answering, and producing JSON output that follows a defined schema. It can use memory for the current conversation thread and can read from and write to the Flow State.
Configuration Parameters
- Model: Specifies the AI model from a chosen service — e.g., OpenAI's GPT-4o, Google Gemini, or local models.
- Messages: Define the conversational input for the LLM, structured as a sequence of roles (System, User, Assistant, Developer) that guide the AI's response. Dynamic runtime data can be inserted using {{ variable }} syntax (see the sketches after this list).
- Memory: Determines whether the LLM should consider the history of the current conversation thread when generating its response.
- Memory Type, Window Size, Max Token Limit: If memory is used, these settings refine how the conversation history is managed and presented to the LLM — for example, whether to include all messages, restrict to a recent window of turns, or use a summarized version of the history to save tokens.
- Input Message: Specifies the variable or text that is appended as the latest user message at the end of the existing conversation context.
- Return Response As: Configures whether the LLM's output is categorized as a User Message or an Assistant Message, which affects how it is handled by downstream memory systems and observability or logging tools.
- JSON Structured Output: Instructs the LLM to format its output according to a defined JSON schema — including keys, data types, and descriptions — so downstream nodes receive predictable, machine-readable data (see the sketches after this list).
- Update Flow State: Allows the node to modify the workflow's runtime state ($flow.state) during execution by updating pre-defined keys (see the sketches after this list).
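
The sketch below is a minimal illustration, in TypeScript, of one plausible shape for the Messages and memory settings described above. The field names (role, memoryType, windowBuffer, and so on) and the {{ variable }} placeholders are assumptions for illustration, not the platform's exact configuration keys.

```typescript
// Illustrative only: a hypothetical shape for the node's Messages and Memory
// settings; actual field names depend on the host platform.
type Role = "system" | "user" | "assistant" | "developer";

interface Message {
  role: Role;
  content: string; // may contain {{ variable }} placeholders resolved at runtime
}

const messages: Message[] = [
  { role: "system", content: "You are a support assistant for {{ company_name }}." },
  { role: "user", content: "Summarize this ticket: {{ ticket_body }}" },
];

const memorySettings = {
  enabled: true,
  memoryType: "windowBuffer", // assumed names: allMessages | windowBuffer | summaryBuffer
  windowSize: 10,             // keep only the last 10 turns
  maxTokenLimit: 2000,        // cap the history tokens passed to the model
};
```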
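A second hedged sketch covers the JSON Structured Output schema and the Update Flow State mapping. The schema entry fields (key, type, description) and the output.* placeholders are hypothetical, chosen only to show how a schema and a state update might be expressed.

```typescript
// Illustrative only: hypothetical shapes for the JSON Structured Output schema
// and the Update Flow State mapping; names are assumptions.
const structuredOutput = {
  enabled: true,
  schema: [
    { key: "sentiment", type: "string", description: "positive, neutral, or negative" },
    { key: "priority",  type: "number", description: "1 (low) to 5 (urgent)" },
    { key: "summary",   type: "string", description: "One-sentence summary of the ticket" },
  ],
};

// Each entry copies part of the node's output into a pre-defined $flow.state key
// so that later nodes can read it.
const updateFlowState = [
  { key: "ticketSentiment", value: "{{ output.sentiment }}" },
  { key: "ticketSummary",   value: "{{ output.summary }}" },
];
```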
Inputs & Outputs
- Inputs: This node can use data from the workflow's initial trigger or from the outputs of any preceding nodes.
- Outputs: Produces the LLM's response as either plain text or, when JSON Structured Output is enabled, a structured JSON object (see the sketch below).
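
To make the two output forms concrete, the hypothetical values below show what a downstream node might receive: a plain string when no schema is configured, or an object matching the configured schema when JSON Structured Output is enabled. The field values are invented for illustration.

```typescript
// Illustrative only: example shapes of the node's output.
const plainOutput: string =
  "The customer is asking for a refund on order #1042.";

// With JSON Structured Output enabled, the response parses into an object
// matching the configured schema.
const structuredOutputExample = {
  sentiment: "negative",
  priority: 4,
  summary: "Customer requests a refund for a damaged item.",
};
```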