Idempotency & Reliability

How idempotency keys protect against double-counting on retries and network failures.

Why Idempotency Keys Are Mandatory

The idempotency_key field is required on every event submission in ABAXUS. This is not optional. Every event — whether sent via the single endpoint, the batch endpoint, or the backfill endpoint — must include a unique idempotency key.

The reason is simple: networks fail. Timeouts happen. Your service might retry a request without knowing whether the original reached ABAXUS. Without idempotency keys, a retry would create a duplicate event, and that duplicate would be aggregated into usage totals, resulting in a customer being charged twice for the same usage.

With an idempotency key, ABAXUS maintains a deduplication log. If an event with the same key is submitted again, ABAXUS returns the original response (including the original event_id) without creating a new event. The operation is a no-op — safe to retry as many times as needed.


How to Construct Good Idempotency Keys

An idempotency key must uniquely identify this specific usage occurrence. The key should be deterministic — if you reconstruct it from the same inputs, you always get the same key. Good sources for idempotency keys:

For request-level tracking (one event per API request):

{request_id}_{metric_key}

Use your system’s existing request ID. If your API gateway assigns an X-Request-ID header, that’s an excellent source.

req_7f8a9b2c3d4e5f6a_api_calls

For periodic snapshots (seat count, storage size):

{customer_id}_{metric_key}_{date}

A daily seat count snapshot for a customer has a natural unique identifier combining customer, metric, and date.

cust_acme_active_seats_2026-04-03

For batch records from a data source:

{source_system}_{record_id}_{metric_key}

If you’re reading from a database table or event log that has its own record IDs, use those as the source of truth.

kafka_partition_3_offset_8492_api_calls

What not to use:

  • Random UUIDs generated at retry time (defeats the purpose)
  • Timestamps alone (two events in the same second would collide)
  • Sequential integers that reset between deploys
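
All three patterns above are deterministic by construction: the same inputs always produce the same key. A minimal sketch of helpers that produce them (the function names are illustrative, not part of any ABAXUS SDK):

```python
def request_key(request_id: str, metric_key: str) -> str:
    """Request-level key: one event per API request."""
    return f"{request_id}_{metric_key}"

def snapshot_key(customer_id: str, metric_key: str, date: str) -> str:
    """Periodic-snapshot key: customer + metric + date."""
    return f"{customer_id}_{metric_key}_{date}"

def batch_key(source_system: str, record_id: str, metric_key: str) -> str:
    """Batch key: reuse the source system's own record IDs."""
    return f"{source_system}_{record_id}_{metric_key}"
```

Because the helpers are pure functions of their inputs, reconstructing a key on a retry (or during a later backfill) yields the identical string, which is exactly what the deduplication log needs.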

What Happens on Duplicate Submission

When ABAXUS receives an event with an idempotency key that already exists in its deduplication log:

  1. The duplicate is recognized before any database write
  2. ABAXUS returns 200 OK (not 202) with the original event’s details
  3. No new event is created
  4. Usage totals are not affected

An example duplicate response body:

{
  "event_id": "evt_01hx9km2p3q4r5s6t7u8v9w0",
  "status": "duplicate",
  "idempotency_key": "req_7f8a9b2c-unique-id-here",
  "original_created_at": "2026-04-03T10:15:33Z"
}

The status: "duplicate" field signals that this was a known duplicate, not a fresh acceptance. You can log it for monitoring, but no action is needed.
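
One way to act on that field in client code (a sketch; handle_submit_response and the logger name are hypothetical, not part of an ABAXUS SDK):

```python
import logging

logger = logging.getLogger("abaxus.ingest")

def handle_submit_response(body: dict) -> str:
    """Inspect an event-submission response body.

    A "duplicate" status is logged for monitoring but needs no
    corrective action; the original event_id is returned either way.
    """
    if body.get("status") == "duplicate":
        logger.info(
            "duplicate suppressed: key=%s original_created_at=%s",
            body.get("idempotency_key"),
            body.get("original_created_at"),
        )
    return body["event_id"]
```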


Idempotency Key Retention

ABAXUS retains idempotency keys for 90 days. After 90 days, a key is removed from the deduplication log. Submitting an event with a key older than 90 days will be treated as a new event. In practice, you should never be retrying events older than a few minutes, so the 90-day window is a generous safety net rather than a practical constraint.


Backfill vs Regular Ingestion

When you use the backfill endpoint (POST /v1/events/backfill), the same idempotency rules apply. If a backfill event has the same idempotency key as a previously ingested real-time event, ABAXUS deduplicates it. This means you can safely backfill data that overlaps with real-time ingestion without worrying about duplicates.

The key difference between backfill and regular ingestion is not idempotency — it’s queue handling and aggregate invalidation:

  • Regular events enter the standard async queue and are aggregated incrementally
  • Backfill events bypass the queue and directly write to event storage, then trigger an aggregate invalidation for the affected time range

See the Backfilling Historical Data guide for details.
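
Because deduplication keys on the idempotency key alone, a backfill that reproduces the exact keys real-time ingestion used is safe to overlap with it. A sketch assuming the Kafka-style batch keys from earlier; event field names other than idempotency_key and metric_key are illustrative, not a documented backfill payload shape:

```python
def build_backfill_events(records: list[dict], metric_key: str) -> list[dict]:
    """Derive backfill events from source records, reusing the
    {source_system}_{record_id}_{metric_key} pattern so any overlap
    with real-time ingestion deduplicates to a no-op."""
    return [
        {
            "metric_key": metric_key,
            # Same deterministic key real-time ingestion would have produced:
            "idempotency_key": (
                f"kafka_partition_{r['partition']}_offset_{r['offset']}_{metric_key}"
            ),
            "value": r["value"],          # illustrative field
            "timestamp": r["timestamp"],  # illustrative field
        }
        for r in records
    ]
```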


Retry Strategy Recommendations

When implementing retry logic in your services:

  1. Retry on network errors and 5xx responses. These are transient failures where the event may or may not have reached ABAXUS.
  2. Do not retry on 4xx responses (except 429 Too Many Requests). A 422 Unprocessable Entity means the event was invalid — retrying won’t help.
  3. Use exponential backoff with jitter to avoid thundering herd problems: 100ms → 200ms → 400ms → 800ms, with ±20% randomization.
  4. Set a maximum retry limit (5–10 attempts is usually enough). After exhausting retries, move the event to a dead-letter queue for manual review.
  5. Reuse the same idempotency key on every retry attempt for the same event. Do not generate a new key on each retry.
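
The five recommendations above can be sketched as a single retry loop (the send callable and its status-code return value are assumptions for illustration, not an ABAXUS client API):

```python
import random
import time

def submit_with_retries(send, event: dict, max_attempts: int = 6) -> int:
    """Retry transient failures while reusing the same idempotency key.

    `send` is assumed to perform the HTTP call for `event` and return a
    status code, raising ConnectionError on a network failure.
    """
    delay = 0.1  # initial backoff: 100 ms
    for attempt in range(1, max_attempts + 1):
        try:
            # Same event object every time: the idempotency_key is reused,
            # so a retry after an ambiguous failure cannot double-count.
            status = send(event)
        except ConnectionError:
            status = None  # network error: treat as retryable
        if status is not None and status < 500 and status != 429:
            return status  # accepted, duplicate, or non-retryable 4xx
        if attempt < max_attempts:
            # Exponential backoff (100ms, 200ms, 400ms, ...) with +/-20% jitter.
            time.sleep(delay * random.uniform(0.8, 1.2))
            delay *= 2
    # Retries exhausted: hand the event off for manual review.
    raise RuntimeError("retries exhausted; route event to dead-letter queue")
```

Note that a non-retryable response such as 422 returns immediately, while 429 and 5xx fall through to the backoff path along with network errors.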