AI Module Overview
The Antarctica.io AI Module provides a set of endpoints for ingesting, monitoring, and computing metrics on large language model (LLM) usage and AI telemetry.
What the AI Module Does
By securely tracking AI usage telemetry, the module answers critical observability questions:
- Token Usage: Measures `inputTokens` and `outputTokens` per model and provider.
- Latency Tracking: Evaluates time to first token and total response latency.
- Cost & Resource Impact: Analyzes platform utilization, such as usage across `chatgpt`, `claude`, or `gemini`.
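The metrics above can be sketched with a small aggregation over telemetry events. This is an illustrative example only: the field names (`model`, `inputTokens`, `outputTokens`, `latencyMs`) and the event values are assumptions, not the module's actual schema.

```python
from collections import defaultdict

# Hypothetical telemetry events; field names and values are illustrative,
# not the AI Module's actual schema.
events = [
    {"model": "chatgpt", "inputTokens": 120, "outputTokens": 350, "latencyMs": 900},
    {"model": "claude",  "inputTokens": 80,  "outputTokens": 410, "latencyMs": 1200},
    {"model": "chatgpt", "inputTokens": 60,  "outputTokens": 150, "latencyMs": 700},
]

# Aggregate token usage per model, roughly as a usage report might.
totals = defaultdict(lambda: {"inputTokens": 0, "outputTokens": 0})
for e in events:
    t = totals[e["model"]]
    t["inputTokens"] += e["inputTokens"]
    t["outputTokens"] += e["outputTokens"]

print(dict(totals))
```

A real pipeline would group latency into percentiles and join provider pricing to estimate cost; the shape of the aggregation is the same.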
Core Capabilities
- Flexible Ingestion: Designed to handle millions of telemetry events via asynchronous background workers.
- Standardized Schema: Strict data types ensure that downstream analytical pipelines receive clean, predictable structures.
- Idempotency Built-In: Duplicate deliveries from distributed retries are detected and skipped, so consumption metrics are not double-counted.