AI Module Setup

Technical Setup Guide

Integrating the Antarctica.io AI Module requires establishing a secure reporting pipeline between your LLM wrappers and our telemetry ingest servers.

The setup is a strict, two-step process: Provisioning Credentials and Application Integration.


Step 1: Provisioning Credentials (API Keys)

Authentication across the Antarctica ecosystem uses high-entropy, cryptographically secure Bearer Tokens. You must provision an API key before sending any telemetry.

  1. Navigate to your Antarctica.io Launch Dashboard and log in using an Administrator or Developer account.
  2. Under your selected workspace, locate the AI Module heading in the primary navigation sidebar.
  3. Access the Configure sub-menu and click on API Keys.
  4. Click Create API Key. You will be prompted to provide the following configuration:
    • Name (Required): Use a strict nomenclature identifying the service (e.g., prod-inference-collector-1).
    • Server location (Required): Select the physical location or region where your server is hosted.
    • Environment (Required): Tie the key strictly to a specific environment (e.g., prod, development) to segment your analytics correctly.
  • IP allow list (Optional): Specify your trusted server IP addresses, separated by commas. Our Edge network will automatically drop telemetry requests authenticated with this key that do not originate from these IP addresses.
  5. Store the generated key securely in a secret manager (e.g., AWS Secrets Manager, HashiCorp Vault) or your application’s .env configuration.
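Once the key is stored, your application should read it at startup rather than hard-coding it. A minimal sketch, assuming the key is exported under a variable named `ANTARCTICA_API_KEY` (the variable name is illustrative; match it to whatever your secret manager or `.env` file exports):

```python
import os


def load_antarctica_key() -> str:
    """Read the API key provisioned in Step 1 from the environment.

    ANTARCTICA_API_KEY is an assumed variable name, not a documented
    requirement; align it with your own secret-management conventions.
    """
    key = os.environ.get("ANTARCTICA_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTARCTICA_API_KEY is not set; provision a key in the "
            "AI Module dashboard and export it before starting the app."
        )
    return key
```

Failing fast at startup when the key is missing is preferable to discovering a misconfiguration only when the first telemetry request is rejected.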

[!WARNING] One-Time Key Display
Your API token is revealed only once, at creation, and cannot be recovered afterwards. If you lose it, revoke the lost key and issue a new one so the orphaned credential cannot be abused.


Step 2: Application Integration

With your environment securely holding the API token, proceed to integrate telemetry dispatch inside your prompt execution layers.

Every time you execute an LLM call—such as generating a completion via OpenAI, Anthropic, or Gemini—you must extract the core response metadata and forward it to our network.
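A minimal sketch of that extraction step, assuming an OpenAI-style response shape where token counts sit under a top-level `usage` object (other providers nest these fields differently, so adjust the accessors accordingly):

```python
from dataclasses import dataclass


@dataclass
class TokenMetrics:
    prompt_tokens: int
    completion_tokens: int


def extract_token_metrics(response: dict) -> TokenMetrics:
    """Pull token counts out of a completion response.

    Assumes an OpenAI-style "usage" object; Anthropic and Gemini
    responses expose equivalent counts under different field names.
    """
    usage = response.get("usage", {})
    return TokenMetrics(
        prompt_tokens=int(usage.get("prompt_tokens", 0)),
        completion_tokens=int(usage.get("completion_tokens", 0)),
    )
```

Defaulting missing fields to zero keeps the pipeline resilient when a provider omits usage data, at the cost of silently under-reporting; raise instead if you prefer strictness.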

Integration Requirements

To ensure the payload is accepted, your application must support:

  • Standard HTTP/1.1 or HTTP/2 POST requests.
  • Extraction of Token Metrics (prompt_tokens, completion_tokens).
  • Timing capture (request start timestamp, Time to First Token (TTFT), and total latency).
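The requirements above can be sketched together: a wrapper that times a streaming LLM call, and a dispatcher that POSTs the record with the Bearer token from Step 1. The endpoint URL here is a placeholder, not the documented ingest address, and the streaming-generator interface is an assumption about your wrapper's shape:

```python
import json
import time
import urllib.request

# Placeholder URL; substitute the real ingest endpoint from the APIs Documentation.
TELEMETRY_URL = "https://ingest.antarctica.example/v1/telemetry"


def timed_llm_call(call_fn):
    """Run a streaming LLM call, capturing TTFT and total latency.

    call_fn is assumed to return an iterable of chunks; the first chunk
    marks time-to-first-token.
    """
    start = time.monotonic()
    ttft = None
    chunks = []
    for chunk in call_fn():
        if ttft is None:
            ttft = time.monotonic() - start
        chunks.append(chunk)
    return {
        "ttft_s": ttft,
        "latency_s": time.monotonic() - start,
        "chunks": chunks,
    }


def dispatch_telemetry(api_key: str, payload: dict) -> None:
    """POST one telemetry record over HTTP with Bearer authentication."""
    req = urllib.request.Request(
        TELEMETRY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()
```

Using a monotonic clock for interval measurement avoids skew from wall-clock adjustments; record a separate wall-clock timestamp if your payload needs an absolute start time.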

For a precise integration demonstrating production-ready architecture (including retries, secure headers, and strictly typed payloads), proceed to our deep-dive in the APIs Documentation.