Ask your documents.
Get answers.
Upload documents into collections, build a RAG pipeline automatically, and let AI agents answer questions grounded in your data. With inline citations, multi-format parsing, and six LLM providers — enterprise document intelligence in one platform.
Collections
Upload documents, build a RAG pipeline, and ask questions with cited answers.
Chat
Multi-turn conversations with streaming, file uploads, and live tool visibility.
Agents
Create AI agents with system instructions, models, tools, and knowledge.
Skills
Custom Python functions agents can call. Reusable across agents via slash commands.
Analytics
Usage trends, model performance, conversation history, and agent activity.
Retrieval-Augmented Generation
From documents to answers,
in four steps.
Upload documents
Drop PDFs, CSVs, HTML, JSON, or plain text into a collection. Multi-format parsing extracts clean text automatically.
Chunk & embed
Text is split into overlapping chunks and converted to vector embeddings using sentence transformers — stored for fast similarity search.
Retrieve context
When a question arrives, the most relevant chunks are found via semantic search and assembled into a context window for the LLM.
Generate answer
The LLM produces a grounded answer with inline citations pointing back to exact source documents and character ranges.
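The four steps above can be sketched in a few lines. This is a minimal, illustrative stand-in — the toy hash-based `embed` function substitutes for the real sentence-transformer model, and the function names are assumptions, not the platform's API:

```python
import hashlib
import math

def embed(text: str, dim: int = 32) -> list[float]:
    # Toy bag-of-words embedding: each word hashes into one bucket.
    # A stand-in for the sentence-transformer model the pipeline uses.
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip(".,?$")
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Rank stored chunks by semantic similarity to the embedded question.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = [
    "The invoice total for March was $4,200.",
    "Our refund policy allows returns within 30 days.",
    "March shipping costs rose by 12 percent.",
]
context = retrieve("What was the March invoice total?", chunks)
```

The retrieved `context` is then handed to the LLM as grounding for the generated answer.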
Document ingestion
Upload anything.
We handle the rest.
Drop files into a collection and the pipeline parses, chunks, embeds, and indexes them automatically. No preprocessing scripts, no format conversion — just upload and ask.
- PDF documents with page-level navigation in the viewer
- Spreadsheets — CSV and XLSX with row-level chunking
- Web content — HTML pages and scraped URLs
- Structured data — JSON and JSON Lines
- Text — Markdown, plain text, and rich documents
Retrieval-Augmented Generation
Answers grounded in
your actual data.
RAG combines document retrieval with LLM generation. Instead of relying on the model's training data, the system searches your uploaded documents, finds the most relevant passages, and generates an answer that cites exact sources — so you can verify every claim.
- Retrieve — your question is embedded and matched against document chunks via semantic similarity
- Augment — the top matching passages are injected into the LLM prompt as context
- Generate — the LLM produces an answer constrained to the provided context, with inline citations
- Reduces hallucination — answers come from your documents, not the model's imagination
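The augment step is essentially prompt assembly. A minimal sketch — the template wording and citation format here are illustrative assumptions, not the platform's actual prompt:

```python
def build_prompt(question: str, passages: list[dict]) -> str:
    # Inject retrieved passages into the prompt with numbered citation
    # markers the model is told to reference inline.
    context_lines = [
        f"[{i}] ({p['source']}) {p['text']}"
        for i, p in enumerate(passages, start=1)
    ]
    context = "\n".join(context_lines)
    return (
        "Answer using ONLY the context below. Cite sources inline as [n].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

passages = [
    {"source": "policy.pdf", "text": "Returns are accepted within 30 days."},
    {"source": "faq.html", "text": "Refunds go to the original payment method."},
]
prompt = build_prompt("What is the return window?", passages)
```

Constraining the model to this assembled context is what makes each citation traceable back to a specific source document.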
Document collections
Your documents become
a searchable knowledge base.
Create collections, upload files in any format, and instantly get a fully indexed knowledge base. Ask questions directly or attach a collection to an agent for grounded conversations.
- Multi-format ingestion — PDF, CSV, XLSX, HTML, JSON, Markdown, plain text
- Automatic chunking with configurable overlap and size
- Semantic, keyword, and hybrid search modes
- Streaming Q&A with inline source citations and document viewer
- Per-collection embedding model and LLM selection
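Configurable chunking with overlap might look like the following sketch. The character-based splitting and the default sizes are assumptions for illustration; the platform's actual chunker and defaults may differ:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    # Split text into fixed-size chunks; each chunk repeats the last
    # `overlap` characters of the previous one so context isn't cut
    # mid-thought at chunk boundaries.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

doc = "x" * 500
chunks = chunk_text(doc, size=200, overlap=50)
```

Larger overlap improves recall at chunk boundaries at the cost of some index redundancy — which is why both knobs are exposed per collection.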
MCP protocol servers
Connect any remote MCP endpoint. The agent gets structured, typed tools it can call directly — full tool schema, parallel execution, iteration tracking.
REST API wrapper
Point to any REST base URL. The agent gets a call_api tool to reach any endpoint with full method and payload control — no MCP server required.
Tools
Two ways to connect
external capabilities.
Attach tool servers to any agent. Tool calls show live in the chat UI — the agent's reasoning, which tools it used, and what they returned is always visible.
Manage tools
Multi-provider
Use the model that fits the task.
Each agent independently selects its provider and model. One interface, any model underneath.
Analytics
Understand how your agents
are being used.
Track conversation volume, model usage, tool calls, and response patterns across your entire agent fleet. Scheduled runs, channel traffic, and memory usage — all in one dashboard.
- Conversation and token usage over time
- Per-agent and per-channel activity breakdown
- Tool call frequency and success rates
- Full session history with message-level replay
Collections
Build a knowledge base
Upload documents, configure search modes, and get AI-generated answers with inline citations — all from one interface.
Create collection
Agents
AI-powered assistants
Build agents with system instructions, tools, skills, and knowledge. Attach a collection for grounded document Q&A.
Create agent
Chat
Start a conversation
Multi-turn streaming conversations with any agent. File uploads, tool visibility, and collection-backed Q&A built in.
Open chat