Documentation Index
Fetch the complete documentation index at: https://watermelon.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Why guardrails matter
AI Agents are powered by large language models (LLMs). These models generate responses probabilistically — they predict what words are most likely to come next. This means instructions guide the model’s behavior, but they are not hard rules. The model does not truly understand company policies and may attempt to answer questions even when it shouldn’t. Without clear guardrails, your agent might:- Answer irrelevant questions (for example: “What is a cow?”)
- Mention competitors
- Provide information unrelated to your company
- Suggest actions it cannot actually perform
“Can you send me the invoice for my order?”Without guardrails, the agent may respond as if it can perform this action — even if it has no access to your systems. Guardrails help prevent this by clearly defining what the agent is responsible for and what it should refuse.
What you need to do
Add clear behavioral rules to your AI Agent under: AI Agent settings → guardrails These rules should explain:- What the agent is responsible for
- What the agent should not answer
- What it should do when a question is out of scope
Basic guardrail example
This keeps the agent focused on your company and its knowledge base.Guardrails you should consider
When defining guardrails, think about the role of your AI Agent and what it should not do. Below are common categories that help keep agents reliable and on-brand.1. Scope guardrails
Define what the agent is allowed to talk about. Example: This prevents the agent from answering general knowledge questions or unrelated topics.2. Competitor guardrails
Prevent the agent from discussing competitors. Example:3. Capability guardrails
Define what the agent cannot actually do. This is very important. Customers may assume the AI can perform actions like:- Sending invoices
- Checking order status
- Updating account details
- Canceling subscriptions
4. Advice guardrails
Prevent the AI from giving regulated or risky advice. Example:5. Opinion guardrails
Keep the agent neutral and factual. Example:6. Content source guardrails
Ensure the AI only uses trusted information. Example:Good guardrails make better AI Agents
A well-defined agent should clearly know:- What it represents
- What information it can use
- What it cannot do
- How to respond when a question is outside its scope

