
Quickstart

This guide will get you from zero to traced AI calls in under 5 minutes.

Prerequisites

  • A Foil account (sign up here)
  • An API key from the Foil dashboard
  • Node.js 18+ or Python 3.8+

Step 1: Install the SDK

npm install @foil-ai/sdk
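
The examples below read your key from the `FOIL_API_KEY` environment variable. One way to set it (the variable name matches the code; the key value itself is a placeholder):

```shell
# Make your Foil API key available to the SDK via process.env.FOIL_API_KEY
export FOIL_API_KEY="your-api-key-here"
```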

Step 2: Initialize the Client

import { createFoilTracer } from '@foil-ai/sdk';

const tracer = createFoilTracer({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent'
});

Step 3: Trace Your First Call

import OpenAI from 'openai';
import { createFoilTracer, SpanKind } from '@foil-ai/sdk';

const openai = new OpenAI();
const tracer = createFoilTracer({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent'
});

// Wrap your AI logic in a trace
const result = await tracer.trace(async (ctx) => {
  // Start an LLM span
  const span = await ctx.startSpan(SpanKind.LLM, 'gpt-4o', {
    input: 'What is the capital of France?'
  });

  // Make the API call
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'What is the capital of France?' }]
  });

  // End the span with results
  await span.end({
    output: response.choices[0].message.content,
    tokens: {
      prompt: response.usage.prompt_tokens,
      completion: response.usage.completion_tokens,
      total: response.usage.total_tokens
    }
  });

  return response.choices[0].message.content;
}, { name: 'capital-query' });

console.log(result); // "Paris"
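
If you want to see the span lifecycle without a network call, the shape of the API above can be sketched with a stand-in tracer. Everything in this snippet is illustrative only (the class and function names are hypothetical, not part of `@foil-ai/sdk`); it just mirrors the start-span → call → end-span flow from Step 3:

```typescript
// Illustrative stand-in for the trace/span lifecycle -- NOT the real SDK.
interface SpanRecord {
  kind: string;
  model: string;
  input?: string;
  output?: string;
  tokens?: { prompt: number; completion: number; total: number };
}

class FakeSpan {
  constructor(private record: SpanRecord) {}
  // Mirrors span.end({ output, tokens }) from the example above
  async end(data: { output: string; tokens: SpanRecord['tokens'] }) {
    this.record.output = data.output;
    this.record.tokens = data.tokens;
  }
}

class FakeContext {
  spans: SpanRecord[] = [];
  // Mirrors ctx.startSpan(kind, model, { input })
  async startSpan(kind: string, model: string, opts: { input: string }) {
    const record: SpanRecord = { kind, model, input: opts.input };
    this.spans.push(record);
    return new FakeSpan(record);
  }
}

// Mirrors tracer.trace(fn): run the callback with a context, collect spans
async function fakeTrace<T>(fn: (ctx: FakeContext) => Promise<T>) {
  const ctx = new FakeContext();
  const result = await fn(ctx);
  return { result, spans: ctx.spans };
}

const { result, spans } = await fakeTrace(async (ctx) => {
  const span = await ctx.startSpan('llm', 'gpt-4o', {
    input: 'What is the capital of France?'
  });
  const answer = 'Paris'; // stand-in for the real model response
  await span.end({
    output: answer,
    tokens: { prompt: 14, completion: 2, total: 16 }
  });
  return answer;
});

console.log(result, spans[0].tokens?.total); // Paris 16
```

The key idea the real tracer gives you on top of this sketch: spans are sent to the Foil backend so they show up in the dashboard in Step 4.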

Step 4: View Your Trace

  1. Go to the Foil Dashboard
  2. Navigate to Traces
  3. Click on your trace to see the full span details

You’ll see:
  • The input and output of your LLM call
  • Token usage breakdown
  • Latency metrics
  • Any errors or warnings

What’s Next?

Using the OpenAI Wrapper (Easier!)

For the simplest setup, use our OpenAI wrapper, which traces every call automatically:

import OpenAI from 'openai';
import { createFoilTracer } from '@foil-ai/sdk';

const openai = new OpenAI();
const tracer = createFoilTracer({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-agent'
});

// Wrap OpenAI client
await tracer.trace(async (ctx) => {
  const wrappedOpenAI = tracer.wrapOpenAI(openai, { context: ctx });

  // All calls are automatically traced
  const response = await wrappedOpenAI.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
});