
AI Module Overview

The Antarctica.io AI Module provides a set of endpoints for ingesting, monitoring, and computing metrics on large language model (LLM) usage and AI telemetry.

What the AI Module Does

By securely tracking AI usage telemetry, the module answers critical observability questions:

  • Token Usage: Measures inputTokens and outputTokens per model and provider.
  • Latency Tracking: Evaluates the time to first token and total response latencies.
  • Cost & Resource Impact: Analyzes utilization across providers such as chatgpt, claude, or gemini.
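To make the measurements above concrete, here is a minimal sketch of one telemetry event and a token tally. Only `inputTokens`, `outputTokens`, and the provider names come from this page; every other field name (`model`, `timeToFirstTokenMs`, `totalLatencyMs`) is an illustrative assumption, not the module's actual schema:

```python
# Hypothetical telemetry event. Only inputTokens, outputTokens, and the
# provider values (chatgpt, claude, gemini) appear in the docs above;
# the remaining field names are assumptions for illustration.
event = {
    "provider": "claude",       # one of: chatgpt, claude, gemini
    "model": "claude-sonnet",   # assumed model identifier
    "inputTokens": 1250,
    "outputTokens": 840,
    "timeToFirstTokenMs": 320,  # latency to first token (assumed name)
    "totalLatencyMs": 4100,     # total response latency (assumed name)
}

def total_tokens(e: dict) -> int:
    """Sum prompt and completion tokens for one event."""
    return e["inputTokens"] + e["outputTokens"]

print(total_tokens(event))  # 1250 + 840 = 2090
```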

Core Capabilities

  1. Flexible Ingestion: Designed to handle millions of telemetry events via asynchronous background workers.
  2. Standardized Schema: Strict data types ensure that downstream analytical pipelines receive clean, predictable structures.
  3. Idempotency Built-In: Safe retries during distributed processing prevent duplicate events from inflating consumption metrics.