Analytics

Foil provides comprehensive analytics to understand your AI application’s performance, costs, and quality.

Dashboard Overview

The main dashboard shows key metrics at a glance:
Metric | Description
Total Requests | Number of traces in the time period
Success Rate | Percentage of successful completions
Avg Latency | Mean response time
Active Agents | Agents with recent activity
Alert Count | Active alerts requiring attention

Time-Series Charts

Requests Over Time

Track request volume with agent breakdown:
GET /api/analytics/traces/requests-over-time?startDate=2024-01-01&endDate=2024-01-31&granularity=daily
Response:
{
  "data": [
    {
      "date": "2024-01-01",
      "total": 1250,
      "byAgent": {
        "customer-support": 800,
        "code-review": 450
      }
    }
  ]
}
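
For example, the per-day breakdown can be rolled up into totals per agent. A minimal sketch, assuming the response shape shown above and a fetch-capable environment:
// Sum request counts per agent across the returned range.
const res = await fetch('/api/analytics/traces/requests-over-time?startDate=2024-01-01&endDate=2024-01-31&granularity=daily');
const { data } = await res.json();

const totalsByAgent = {};
for (const day of data) {
  for (const [agent, count] of Object.entries(day.byAgent)) {
    totalsByAgent[agent] = (totalsByAgent[agent] ?? 0) + count;
  }
}
console.log(totalsByAgent); // e.g. { "customer-support": 800, "code-review": 450 }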

Success/Failure Rates

Monitor error trends:
GET /api/analytics/traces/success-failure?startDate=2024-01-01&endDate=2024-01-31

Latency Distribution

Understand response time patterns:
GET /api/analytics/traces/latency-distribution?startDate=2024-01-01&endDate=2024-01-31
Returns buckets:
  • < 200ms
  • 200-500ms
  • 500ms-1s
  • 1-2s
  • 2-5s
  • > 5s
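
The exact response shape is not shown here; assuming the endpoint returns a buckets array of { range, count } pairs, a quick text histogram could look like this sketch:
// Assumed response shape: { "buckets": [{ "range": "< 200ms", "count": 1200 }, ...] }
const res = await fetch('/api/analytics/traces/latency-distribution?startDate=2024-01-01&endDate=2024-01-31');
const { buckets } = await res.json();

for (const { range, count } of buckets) {
  console.log(`${range.padEnd(10)} ${'#'.repeat(Math.ceil(count / 100))} (${count})`);
}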

Latency Percentiles

Track p50, p95, p99 over time:
GET /api/analytics/traces/latency-percentiles?granularity=hourly

Token & Cost Analytics

Token Usage

Monitor input/output token consumption:
GET /api/analytics/traces/token-usage?startDate=2024-01-01&endDate=2024-01-31&granularity=daily
Response:
{
  "data": [
    {
      "date": "2024-01-01",
      "inputTokens": 1250000,
      "outputTokens": 450000,
      "totalTokens": 1700000
    }
  ]
}
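
To roll the daily figures up into a single total for the period, a sketch based on the response above:
const res = await fetch('/api/analytics/traces/token-usage?startDate=2024-01-01&endDate=2024-01-31&granularity=daily');
const { data } = await res.json();

// Sum input and output tokens across every day in the range.
const totalInput = data.reduce((sum, day) => sum + day.inputTokens, 0);
const totalOutput = data.reduce((sum, day) => sum + day.outputTokens, 0);
console.log(`Input: ${totalInput}, Output: ${totalOutput}, Total: ${totalInput + totalOutput}`);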

Cost Breakdown

View costs by model:
GET /api/analytics/costs?startDate=2024-01-01&endDate=2024-01-31
Response:
{
  "totalCost": 125.50,
  "byModel": {
    "gpt-4o": 95.00,
    "gpt-4o-mini": 25.50,
    "text-embedding-3-small": 5.00
  },
  "trend": [
    { "date": "2024-01-01", "cost": 4.25 }
  ]
}
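
From the byModel breakdown you can compute each model's share of total spend. A minimal sketch based on the fields shown above:
const res = await fetch('/api/analytics/costs?startDate=2024-01-01&endDate=2024-01-31');
const { totalCost, byModel } = await res.json();

for (const [model, cost] of Object.entries(byModel)) {
  const share = ((cost / totalCost) * 100).toFixed(1);
  console.log(`${model}: $${cost.toFixed(2)} (${share}%)`);
}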

Error Analytics

Errors by Type

Understand what’s failing:
GET /api/analytics/traces/errors-by-type?startDate=2024-01-01&endDate=2024-01-31
Response:
{
  "errors": [
    { "type": "rate_limit", "count": 45, "percentage": 35 },
    { "type": "timeout", "count": 30, "percentage": 23 },
    { "type": "api_error", "count": 25, "percentage": 19 },
    { "type": "validation", "count": 15, "percentage": 12 },
    { "type": "other", "count": 14, "percentage": 11 }
  ]
}
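
A small sketch that surfaces the most common error types from this response:
const res = await fetch('/api/analytics/traces/errors-by-type?startDate=2024-01-01&endDate=2024-01-31');
const { errors } = await res.json();

// Errors are already aggregated; sort defensively and print the top three.
const top = [...errors].sort((a, b) => b.count - a.count).slice(0, 3);
for (const { type, count, percentage } of top) {
  console.log(`${type}: ${count} (${percentage}%)`);
}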

Error Rate Over Time

Track error trends:
GET /api/analytics/traces/error-rate?granularity=hourly

Tool Usage

Tool Usage Heatmap

See which tools are used by which agents:
GET /api/analytics/traces/tool-usage?startDate=2024-01-01&endDate=2024-01-31
Response:
{
  "heatmap": {
    "customer-support": {
      "web-search": 450,
      "calculator": 120,
      "knowledge-base": 890
    },
    "code-review": {
      "lint": 340,
      "test-runner": 280
    }
  }
}
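
To turn the nested heatmap into flat (agent, tool, count) rows for a table or chart, a sketch based on the shape above:
const res = await fetch('/api/analytics/traces/tool-usage?startDate=2024-01-01&endDate=2024-01-31');
const { heatmap } = await res.json();

// Flatten { agent: { tool: count } } into an array of rows.
const rows = Object.entries(heatmap).flatMap(([agent, tools]) =>
  Object.entries(tools).map(([tool, count]) => ({ agent, tool, count }))
);
console.table(rows);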

Filtering

All analytics endpoints support filtering:
Parameter | Description
startDate | Start of time range (ISO 8601)
endDate | End of time range (ISO 8601)
agentId | Filter by specific agent
granularity | Time grouping: 'hourly', 'daily', 'weekly', 'monthly'
Example:
GET /api/analytics/metrics?startDate=2024-01-01&endDate=2024-01-31&agentId=agent-123
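Rather than hand-building query strings, you can assemble the filters with URLSearchParams. A sketch using the parameters above:
const params = new URLSearchParams({
  startDate: '2024-01-01',
  endDate: '2024-01-31',
  agentId: 'agent-123',
  granularity: 'daily',
});
const res = await fetch(`/api/analytics/metrics?${params}`);
const metrics = await res.json();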

Key Metrics API

Get all primary dashboard metrics in one call:
GET /api/analytics/metrics?startDate=2024-01-01&endDate=2024-01-31
Response:
{
  "totalTraces": 15420,
  "successRate": 0.97,
  "avgLatency": 1250,
  "p95Latency": 2800,
  "activeAgents": 5,
  "activeAlerts": 3,
  "totalTokens": 45000000,
  "estimatedCost": 125.50
}
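
A minimal sketch that fetches these metrics and formats a one-line summary, using the field names from the response above:
const res = await fetch('/api/analytics/metrics?startDate=2024-01-01&endDate=2024-01-31');
const m = await res.json();

console.log(
  `${m.totalTraces} traces, ${(m.successRate * 100).toFixed(1)}% success, ` +
  `p95 ${m.p95Latency}ms, ~$${m.estimatedCost.toFixed(2)} spent`
);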

Comparing Periods

Compare metrics across time periods:
// Current week
const current = await fetch('/api/analytics/metrics?startDate=2024-01-08&endDate=2024-01-14')
  .then((res) => res.json());

// Previous week
const previous = await fetch('/api/analytics/metrics?startDate=2024-01-01&endDate=2024-01-07')
  .then((res) => res.json());

// Calculate changes
const successRateChange = current.successRate - previous.successRate;
const latencyChange = ((current.avgLatency - previous.avgLatency) / previous.avgLatency) * 100;

Exporting Data

Export analytics data for external analysis:
# Get detailed trace list
GET /api/analytics/traces/list?startDate=2024-01-01&endDate=2024-01-31&limit=1000

# Response includes all trace details for export
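
For example, you could save the returned traces to a local JSON file for analysis elsewhere. A sketch assuming a Node.js (18+) environment; the FOIL_URL base and the traces field name are assumptions, so adjust them to your deployment:
import { writeFile } from 'node:fs/promises';

// Hypothetical base URL for your Foil deployment.
const BASE = process.env.FOIL_URL ?? 'http://localhost:3000';

const res = await fetch(`${BASE}/api/analytics/traces/list?startDate=2024-01-01&endDate=2024-01-31&limit=1000`);
const body = await res.json();

// Field name `traces` is an assumption; fall back to the raw body if it differs.
await writeFile('traces-2024-01.json', JSON.stringify(body.traces ?? body, null, 2));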

Best Practices

  • Average latency can hide outliers; p95 shows what your slowest users experience.
  • Monitor costs closely, especially when testing new prompts or models.
  • Regular period-over-period comparisons help identify regressions quickly.

Next Steps