Designing a Chat Widget That Knows Who It's Talking To
How metadata-driven routing and few-shot role conditioning let a single embeddable widget serve employees, admins, and customers, with completely different behavior.
The Multi-Audience Problem
Most enterprise platforms serve multiple audience types simultaneously: employees asking about internal processes, customers asking about products, and admins asking about system configuration. Traditionally, you'd build three separate chatbots. Our Embeddable Role-Aware Chat Widget solves this with a single deployment that adapts its behavior, knowledge scope, and response style based on who is asking.
The core insight is that the 'right answer' is often the same information presented differently, from a different knowledge source, with a different tone. A question about password reset should give an IT admin a step-by-step technical guide; it should give an end user a simple 'click here and follow these steps' answer; it should give a customer a polite 'please contact our support team' response.
Role-Based Routing Architecture
When the widget is embedded on a page, it is initialized with a metadata payload passed from the host application: user_id, role, department, and platform_context. The payload is packaged as a signed JWT so it cannot be tampered with client-side. On every query, this metadata is sent to the backend alongside the user message.
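As a minimal sketch of this handshake (field names follow the payload above; the HS256 helper and secret handling are illustrative, not our production code), the host backend signs the metadata and the widget backend verifies it on every request:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Metadata payload passed from the host application at init time.
interface WidgetMetadata {
  user_id: string;
  role: "admin" | "employee" | "customer";
  department: string;
  platform_context: string;
}

const b64url = (s: string) => Buffer.from(s).toString("base64url");

// The host's backend signs the metadata into a compact JWT (HS256)
// before rendering the page, so the browser cannot alter the role.
function signMetadata(meta: WidgetMetadata, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify(meta));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${sig}`;
}

// The widget backend verifies the signature on every query before
// trusting any field in the payload; returns null on tampering.
function verifyMetadata(token: string, secret: string): WidgetMetadata | null {
  const [header, payload, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(payload, "base64url").toString());
}
```

Because the role claim rides inside the signature, a user who edits the widget's JavaScript state can change what the page *shows*, but not what the backend *believes*.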
The routing layer maps the role to a configuration profile that specifies: which namespaces in the vector database are accessible, which LLM persona to use, which tool functions are available (e.g., admins can call system diagnostic functions; customers cannot), and what information should never appear in responses for this role.
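A routing table covering the four dimensions above could look like the following sketch (namespace, persona, and tool names are invented for illustration):

```typescript
// One profile per role, covering the four routing dimensions:
// accessible namespaces, LLM persona, available tools, and redactions.
interface RoleProfile {
  namespaces: string[];      // vector DB namespaces this role may search
  persona: string;           // key of the LLM persona / system prompt
  tools: string[];           // tool functions exposed to the model
  redactedTopics: string[];  // topics never surfaced for this role
}

const ROLE_PROFILES: Record<string, RoleProfile> = {
  admin: {
    namespaces: ["internal-docs", "system-config", "public-kb"],
    persona: "technical-admin",
    tools: ["run_diagnostics", "query_audit_log"],
    redactedTopics: [],
  },
  employee: {
    namespaces: ["internal-docs", "public-kb"],
    persona: "helpful-colleague",
    tools: [],
    redactedTopics: ["system-internals"],
  },
  customer: {
    namespaces: ["public-kb"],
    persona: "polite-support",
    tools: [],
    redactedTopics: ["internal-processes"],
  },
};

// Unknown or missing roles fall back to the most restrictive profile.
function resolveProfile(role: string): RoleProfile {
  return ROLE_PROFILES[role] ?? ROLE_PROFILES["customer"];
}
```

Failing closed on unknown roles matters: a misconfigured host page should degrade to customer-level access, never to employee or admin access.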
Few-Shot Role Conditioning
Beyond routing, we use few-shot examples in the system prompt to condition the LLM's response style per role. Rather than lengthy role-description paragraphs (which the LLM often ignores), we include 3–4 example Q&A pairs that demonstrate the expected behavior for that role.
This technique is substantially more reliable than instructional conditioning alone. Showing the model 'here is how an admin question is answered' and 'here is how a customer question is answered' creates a behavioral anchor that persists across the conversation and survives topic changes better than abstract role instructions.
💡 Show, Don't Tell for Role Conditioning
Instead of 'You are a technical assistant for IT administrators; use precise technical language,' include 3–4 example exchanges showing exactly what that looks like. The model follows demonstrated patterns far more reliably than described ones.
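A minimal sketch of how such a prompt is assembled (the exemplar Q&A pairs here are illustrative, not our curated production set):

```typescript
// Illustrative few-shot exemplars per role; a real deployment would
// curate 3-4 pairs per role that demonstrate tone, depth, and scope.
const FEW_SHOT: Record<string, { q: string; a: string }[]> = {
  admin: [
    {
      q: "How do I reset a user's password?",
      a: "Open the Admin Console, select the account under Users, and choose 'Force reset'. This invalidates active sessions and emails the user a reset link.",
    },
  ],
  customer: [
    {
      q: "How do I reset my password?",
      a: "Click 'Forgot password?' on the sign-in page and follow the emailed link. If it doesn't arrive, our support team is happy to help.",
    },
  ],
};

// Builds a system prompt that demonstrates, rather than describes,
// the expected behavior for the resolved role.
function buildSystemPrompt(role: string, basePersona: string): string {
  const examples = (FEW_SHOT[role] ?? [])
    .map(({ q, a }) => `User: ${q}\nAssistant: ${a}`)
    .join("\n\n");
  return `${basePersona}\n\nAnswer in the style of these examples:\n\n${examples}`;
}
```

Note that the same underlying question (password reset) gets a different demonstrated answer per role, which is exactly the anchoring effect the few-shot pairs provide.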
Knowledge Isolation
Strict knowledge isolation between roles is critical for enterprise deployments. A customer should never receive information intended only for internal staff; an employee in one department should not access another department's sensitive documents. We implement this at the vector database layer using namespace-level access control, not prompt-level instructions alone.
Prompt-level instructions ('do not share HR documents with customers') are not a security boundary; they can be circumvented with jailbreak-style prompts. Hard namespace isolation at the retrieval layer means the restricted documents are never retrieved in the first place, regardless of what the user asks.
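The isolation pattern can be sketched as a retrieval wrapper that only ever queries the namespaces allowed by the role profile. The `VectorStore` interface below is a simplified, synchronous stand-in for a real client (Pinecone, Weaviate, etc.), but the scoping logic is the point:

```typescript
// Simplified stand-in for a namespace-aware vector store client.
interface VectorStore {
  query(namespace: string, text: string, topK: number): string[];
}

// Retrieval is scoped to the role's allowed namespaces. Documents in
// other namespaces are never fetched, so no prompt-injection attempt
// can surface them: they simply never enter the LLM's context.
function retrieveForRole(
  store: VectorStore,
  allowedNamespaces: string[],
  query: string,
): string[] {
  return allowedNamespaces.flatMap((ns) => store.query(ns, query, 3));
}
```

The security property comes from `allowedNamespaces` being derived server-side from the verified role, not from anything the model or the user says.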
Embed Anywhere Design
The widget is distributed as a single JavaScript snippet that can be embedded on any web platform: React, Vue, plain HTML, Webflow, SharePoint, or Confluence. It communicates with our backend API over WebSocket for streaming responses and exposes a simple configuration API through which host applications pass role metadata.
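As a hypothetical illustration of the host-side configuration surface (the endpoint, option names, and query parameters are invented, not the product's actual API), the widget might assemble its streaming connection like this:

```typescript
// Illustrative host-side configuration the embed snippet accepts.
interface WidgetOptions {
  token: string;     // the signed JWT carrying user_id, role, etc.
  endpoint: string;  // WebSocket endpoint for streaming responses
  theme?: "light" | "dark";
}

// Builds the WebSocket URL the widget opens for streaming. Carrying
// the signed token on every connection lets the backend resolve the
// role profile before the first message is processed.
function buildStreamUrl(opts: WidgetOptions, sessionId: string): string {
  const url = new URL(opts.endpoint);
  url.searchParams.set("token", opts.token);
  url.searchParams.set("session", sessionId);
  return url.toString();
}
```

Keeping the options object this small is deliberate: everything behavioral (persona, namespaces, tools) lives server-side in the role profile, so the embed code never needs to change when a role's configuration does.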
For mobile applications, we provide a React Native wrapper and a native iOS/Android SDK with equivalent functionality. All widget instances share the same backend infrastructure, with tenant isolation at the API authentication layer.
Conclusion
A role-aware chat widget done well eliminates the fragmented experience of multiple disconnected chatbots while maintaining the access control and behavioral differentiation that enterprise deployments require. The key is treating role as a first-class architectural concern from day one: in routing, in retrieval, in prompting, and in the organization of the knowledge base.