Testing & Health
Test your webhooks with sample data, monitor health dashboards, review execution logs, analyze errors, and track AI tool call statistics for reliable automation.
What you'll learn
- Testing your webhook
- Understanding the health dashboard
- Recent executions log
- Error breakdown
- AI tool call stats
Testing your webhook
Before enabling a webhook for real traffic, always test it. Click the 'Send Test Webhook' button in the webhook editor to fire a test request. Chattlebot generates sample data that mirrors a real lead capture: a test email address, sample name, phone number, default urgency and sentiment values, a mock conversation transcript, and current timestamps.

The test result shows three key pieces of information: the HTTP status code (200 means success), the response body (what your endpoint returned), and the execution time (how long the request took). In the execution history, test fires are logged with 'triggered_by: test' so you can easily distinguish them from real triggers.

Use manual tests whenever you change the webhook URL, modify the payload template, update authentication credentials, or want to verify the receiving endpoint is working correctly. If the test fails, you'll see the error status code and response body, which helps you diagnose the issue before it affects real leads.
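If you pull test results into your own monitoring scripts, the diagnosis step above can be sketched as a small classifier. The `interpret_test_result` helper and its field names are illustrative assumptions for this sketch, not part of Chattlebot's API:

```python
# Hypothetical sketch: turning a test-webhook result (status code, response
# body, execution time) into a human-readable verdict. The dict shape here
# is assumed for illustration, not Chattlebot's actual result format.

def interpret_test_result(result: dict) -> str:
    """Classify a test execution into a short diagnostic verdict."""
    status = result["status_code"]
    if 200 <= status < 300:
        return "success"
    if status in (401, 403):
        return "auth failure: check credentials"
    if status == 408:
        return "timeout: endpoint too slow"
    if status >= 500:
        return "server error: check the receiving endpoint"
    return f"failed with HTTP {status}"

print(interpret_test_result({"status_code": 200, "body": "ok", "ms": 420}))
```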


Understanding the health dashboard
The health dashboard gives you a real-time overview of how your webhook is performing. Access it by clicking the activity icon in the webhook list. The dashboard shows four key metrics: success rate over the last 24 hours, success rate over the last 7 days, average response time across all executions, and total execution count.

The 24-hour and 7-day success rates help you spot both immediate issues and longer-term trends. A sudden drop in the 24-hour rate might indicate your endpoint went down, while a gradual decline over 7 days might suggest growing timeout issues as your API gets more traffic. Average response time tells you how fast your endpoint is responding: ideally under 2 seconds for most use cases, and under 5 seconds for AI-callable webhooks (since the AI waits for the response during a chat). Total executions give you a sense of volume, which is useful for capacity planning and for understanding how heavily a webhook is used.
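As a rough illustration of what these four metrics mean, here is how they could be derived from a raw execution log. The record shape (`timestamp`, `ok`, `duration_ms`) is an assumption for this sketch, not Chattlebot's internal format:

```python
from datetime import datetime, timedelta, timezone

def health_metrics(executions, now=None):
    """Compute the four dashboard metrics from a list of execution records.

    Each record is assumed (for illustration) to look like:
    {"timestamp": datetime, "ok": bool, "duration_ms": float}
    """
    now = now or datetime.now(timezone.utc)

    def success_rate(window):
        # Percentage of successful fires within the given time window.
        recent = [e for e in executions if e["timestamp"] >= now - window]
        return sum(e["ok"] for e in recent) / len(recent) * 100 if recent else None

    avg_ms = (sum(e["duration_ms"] for e in executions) / len(executions)
              if executions else None)
    return {
        "success_24h": success_rate(timedelta(hours=24)),
        "success_7d": success_rate(timedelta(days=7)),
        "avg_response_ms": avg_ms,
        "total": len(executions),
    }
```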


Recent executions log
The executions log shows a detailed record of every webhook fire. You can filter the list to show all executions, successful only, or failed only. Each entry shows the timestamp (when it fired), trigger type (lead_capture, workflow, ai_tool, or test), HTTP status code, execution time, and attempt number.

The attempt number is particularly useful for debugging retries: if a webhook failed on the first try but succeeded on retry, you'll see attempt 1 (failed) and attempt 2 (success) as separate entries. This helps you understand whether failures are transient (network blips that resolve on retry) or persistent (configuration issues that fail every time). Click on any execution to see the full request and response details, including headers, the payload sent, and the response body received.
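The transient-versus-persistent distinction can be sketched as follows. The `execution_group` id linking retry attempts to one logical fire is a hypothetical field for illustration; the real log simply shows the entries side by side:

```python
from collections import defaultdict

def classify_failures(entries):
    """Label each failed fire as transient (a later retry succeeded)
    or persistent (every attempt failed).

    Entry shape is assumed for this sketch:
    {"execution_group": str, "attempt": int, "ok": bool}
    """
    groups = defaultdict(list)
    for e in entries:
        groups[e["execution_group"]].append(e)

    labels = {}
    for gid, attempts in groups.items():
        attempts.sort(key=lambda e: e["attempt"])
        if all(a["ok"] for a in attempts):
            continue  # no failures in this group; nothing to classify
        labels[gid] = "transient" if attempts[-1]["ok"] else "persistent"
    return labels
```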

Error breakdown
The error breakdown section categorizes all failures into actionable groups. Timeout errors (408) mean your endpoint took too long to respond; consider increasing the timeout setting in your webhook configuration. Server errors (500-504) indicate a problem on the receiving end; check your endpoint's health and logs. Client errors like authentication failures (401/403) mean your credentials are wrong or expired; verify your authentication settings. Rate-limited errors (429) mean the receiving service is throttling your requests; enable retry with exponential backoff to handle these gracefully. Network errors mean Chattlebot couldn't reach your URL at all; verify the URL is correct and the server is publicly accessible. Each error category shows its count over the selected time period and includes an actionable suggestion for how to fix it. Focus on the categories with the highest counts first for the biggest impact.
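A minimal sketch of this categorization, mirroring the groups and suggestions above (the category names and mapping come from this guide's text, not from any Chattlebot API):

```python
# Suggested fixes per category, as described in the error breakdown above.
SUGGESTIONS = {
    "timeout": "increase the webhook timeout setting",
    "server_error": "check the receiving endpoint's health and logs",
    "auth_error": "verify your authentication settings",
    "rate_limited": "enable retry with exponential backoff",
    "network": "verify the URL is correct and publicly accessible",
}

def categorize(status_code):
    """Map an HTTP status code (or None for no response) to a category."""
    if status_code is None:          # request never reached the server
        return "network"
    if status_code == 408:
        return "timeout"
    if status_code in (401, 403):
        return "auth_error"
    if status_code == 429:
        return "rate_limited"
    if 500 <= status_code <= 504:
        return "server_error"
    return "other"

def error_breakdown(executions):
    """Count failures per category over a list of execution records."""
    counts = {}
    for e in executions:
        if not e["ok"]:
            cat = categorize(e.get("status_code"))
            counts[cat] = counts.get(cat, 0) + 1
    return counts
```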
AI tool call stats
When a webhook is configured as AI-callable, the health dashboard includes an additional section specifically for AI tool call statistics. It shows total AI calls (how many times the AI invoked this tool), the AI-specific success rate (the percentage of AI calls that returned valid data), and average AI response time (how long AI calls take on average). These metrics are separate from the overall webhook metrics because AI tool calls have different performance requirements: they happen during live conversations, so response time directly affects the user experience. A good target is keeping AI call response times under 3 seconds and success rates above 95%. If the AI call success rate drops significantly below 95%, investigate your endpoint health and response times. Slow or failing AI tools degrade the chat experience because the AI either waits too long or has to tell the user it cannot look up the requested information.
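The targets above can be turned into a simple alerting check in your own monitoring. This is an illustrative sketch against the stated thresholds (95% success, 3-second average), not product functionality:

```python
def ai_health_alerts(total_calls, successes, avg_ms):
    """Return a list of alert strings for AI-call stats that miss the
    targets described above (success rate >= 95%, avg response <= 3000 ms)."""
    alerts = []
    if total_calls:
        rate = successes / total_calls * 100
        if rate < 95:
            alerts.append(f"success rate {rate:.1f}% below 95% target")
    if avg_ms > 3000:
        alerts.append(f"avg response {avg_ms:.0f} ms above 3 s target")
    return alerts
```

With the example panel numbers (342 calls, 96.8% success, 298 ms), this check would return no alerts.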
Example: AI Tool Call Stats (AI-callable webhooks only)
- Total AI Calls: 342
- Success Rate: 96.8%
- Avg Response Time: 298ms

Monitor reliability: if the AI call success rate drops below 95%, check your endpoint health and response times.
Pro Tip
- Set up a weekly routine to check the health dashboard for each active webhook. Catching small issues early, like gradually increasing response times, prevents outages that could affect your leads and customer experience.
Related Guides
Troubleshooting
Solve common webhook issues: error codes, retry configuration, timeout tuning, success conditions, and frequently asked questions for reliable automation.
Read guide
Real-World Recipes
Copy-paste automation recipes for common use cases: Slack alerts, AI inventory checks, Google Sheets sync, lead enrichment, and real-time appointment availability.
Read guide
Ready to get started?
Create your free account and start building your chatbot today.
Start Free Trial