Teams using automation platforms are starting to treat conversational AI as another operational interface. That shift is reflected in n8n's newly introduced Chat Hub, a built-in feature designed to let users interact with AI models and internal automation through a single chat interface.
Chat Hub sits inside the n8n platform and gives users the ability to send prompts to LLMs, invoke workflow-backed agents, and receive responses without direct access to workflow logic or credentials. The feature is intended for organizations that want conversational access to automation while keeping controls around how AI is used and who can use it.
n8n said the goal is to reduce fragmented AI usage across teams by offering a shared interface that is governed by existing platform permissions. The company framed the feature as a response to growing internal use of external AI tools that operate outside enterprise oversight.
Conversational access without workflow visibility
Chat Hub introduces a dedicated chat interface that supports general AI models and workflow-connected agents. Users can switch between available models in the same conversation window and interact with agents that are tied to specific automations.
From a permissions standpoint, Chat Hub relies on a separate Chat user role. That role allows users to send prompts and receive outputs while blocking access to workflow design, credentials, and other configuration areas. According to n8n, this structure is meant to support scenarios where non-technical staff need to query systems or trigger actions without modifying automation logic.
Workflow authors expose agents to Chat Hub by adding a chat trigger to a workflow and enabling it for chat access. Once enabled, the agent becomes available in the Chat Hub interface for authorized users. The interaction remains conversational, but execution happens through predefined workflows.
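For teams that manage workflows as code, the sketch below shows roughly what that registration step could look like through n8n's public REST API: a minimal workflow containing a chat trigger node is created and can then be enabled for chat access. The node type identifier and the chat-access option shown here are assumptions and may differ by n8n version; the instance URL and API key are placeholders.

```typescript
// Illustrative sketch: registering a chat-triggered workflow via n8n's public
// REST API so it can later be exposed to Chat Hub. The node type identifier
// and the "available in Chat Hub" option are assumptions and may differ by
// version; check the Chat Trigger node settings in your own instance.

const N8N_BASE_URL = process.env.N8N_BASE_URL ?? "http://localhost:5678"; // assumed instance URL
const N8N_API_KEY = process.env.N8N_API_KEY ?? ""; // personal API key created in n8n settings

const workflow = {
  name: "Support lookup agent",
  nodes: [
    {
      // Chat Trigger node: the entry point that makes the workflow conversational.
      name: "When chat message received",
      type: "@n8n/n8n-nodes-langchain.chatTrigger", // assumed identifier
      typeVersion: 1,
      position: [0, 0],
      parameters: {
        // Hypothetical option name; in the UI this corresponds to enabling
        // the workflow for chat access so it appears in Chat Hub.
        options: { availableInChatHub: true },
      },
    },
  ],
  connections: {},
  settings: {},
};

async function createWorkflow(): Promise<void> {
  const res = await fetch(`${N8N_BASE_URL}/api/v1/workflows`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-N8N-API-KEY": N8N_API_KEY,
    },
    body: JSON.stringify(workflow),
  });
  if (!res.ok) {
    throw new Error(`Failed to create workflow: ${res.status} ${await res.text()}`);
  }
  console.log("Workflow created:", await res.json());
}

createWorkflow().catch(console.error);
```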
Centralized control over models and credentials
Administrators retain control over which AI providers and models are available through Chat Hub. Model access, credentials, and provider settings are configured at the platform level, with options to restrict users from adding their own credentials.
n8n positioned this control structure as a way to keep AI usage aligned with organizational policies. AI prompts, model access, and workflow execution remain inside the same system that already governs automation and integrations.
The company also highlighted credential reuse as part of the design. By centralizing API keys and provider connections, the platform reduces reliance on personal accounts or unmanaged tools that operate outside internal visibility.
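As a rough sketch of what that centralization could look like in practice, the snippet below uses n8n's public REST API to provision a shared provider credential at the platform level, so chat users draw on a managed key rather than personal accounts. The credential type identifier and data fields are assumptions and should be checked against the credential schema in the target instance.

```typescript
// Illustrative sketch: an administrator provisioning a shared model-provider
// credential through n8n's public REST API, so Chat Hub users rely on a
// centrally managed key instead of personal accounts. The credential type
// name and data fields are assumptions; verify them against your instance.

const N8N_BASE_URL = process.env.N8N_BASE_URL ?? "http://localhost:5678";
const N8N_API_KEY = process.env.N8N_API_KEY ?? "";

async function createSharedCredential(): Promise<void> {
  const res = await fetch(`${N8N_BASE_URL}/api/v1/credentials`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-N8N-API-KEY": N8N_API_KEY,
    },
    body: JSON.stringify({
      name: "Shared OpenAI key (managed)",
      type: "openAiApi", // assumed credential type identifier
      data: { apiKey: process.env.OPENAI_API_KEY ?? "" }, // central key, not a personal one
    }),
  });
  if (!res.ok) {
    throw new Error(`Failed to create credential: ${res.status} ${await res.text()}`);
  }
  console.log("Credential created:", await res.json());
}

createSharedCredential().catch(console.error);
```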
Personal agents and workflow agents serve different roles
Chat Hub supports two types of agents. Personal agents are configured directly within the chat interface and are designed for repeated conversational tasks. Users define a system prompt, select a model, and choose which tools the agent can access.
Workflow agents are created by technical users and connect chat interactions directly to automation logic. These agents can query internal systems, run integrations, or trigger actions based on conversational input. Their behavior is governed by the workflow design rather than the chat session alone.
Personal agents carry restrictions on file-based knowledge and tool access, while workflow agents require specific chat trigger versions and streaming settings to work correctly inside Chat Hub.
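To make the wiring of a workflow agent more concrete, the sketch below approximates what an exported definition could look like: a chat trigger passes conversational input to an AI Agent node, with a chat model attached as a sub-node and a centrally managed credential referenced by name. The node type identifiers, connection names, and credential reference are assumptions drawn from n8n's LangChain node set and may vary across versions.

```typescript
// Illustrative sketch of a workflow-agent definition as it might appear when
// exported from n8n: a Chat Trigger feeds an AI Agent node, and a chat model
// attaches to the agent as a sub-node. Node types, connection names, and the
// credential reference are assumptions and may differ by version.

const workflowAgent = {
  name: "Order status agent",
  nodes: [
    {
      name: "When chat message received",
      type: "@n8n/n8n-nodes-langchain.chatTrigger", // assumed identifier
      typeVersion: 1,
      position: [0, 0],
      parameters: {},
    },
    {
      name: "AI Agent",
      type: "@n8n/n8n-nodes-langchain.agent", // assumed identifier
      typeVersion: 1,
      position: [250, 0],
      parameters: {
        // The system message constrains how the agent handles chat input.
        options: { systemMessage: "Answer order-status questions using the connected tools only." },
      },
    },
    {
      name: "OpenAI Chat Model",
      type: "@n8n/n8n-nodes-langchain.lmChatOpenAi", // assumed identifier
      typeVersion: 1,
      position: [250, 200],
      parameters: {},
      credentials: { openAiApi: { name: "Shared OpenAI key (managed)" } }, // centrally managed credential
    },
  ],
  connections: {
    // Conversational input flows from the trigger into the agent.
    "When chat message received": {
      main: [[{ node: "AI Agent", type: "main", index: 0 }]],
    },
    // The model (and any tools, via an "ai_tool" connection) attach to the agent as sub-nodes.
    "OpenAI Chat Model": {
      ai_languageModel: [[{ node: "AI Agent", type: "ai_languageModel", index: 0 }]],
    },
  },
  settings: {},
};
```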
Addressing internal AI sprawl
Many organizations are seeing employees adopt external AI tools without centralized oversight, creating gaps in visibility around data handling, credential use, and system access.
Chat Hub places conversational AI interaction inside the same environment used for workflow automation, keeping prompts, credentials, and execution under existing permission controls and reducing reliance on disconnected AI services.
