Ingesting Events

Send usage events to ABAXUS using the single or batch ingestion endpoints.

Event Structure

Every usage event sent to ABAXUS has the same core structure:

  • customer_id (string, required): The ABAXUS customer ID this usage belongs to
  • metric_key (string, required): The metric key to record against (e.g., api_calls)
  • value (number, required): The magnitude of the usage event
  • timestamp (ISO 8601, required): When the usage occurred (can be in the past)
  • idempotency_key (string, required): A unique identifier for this submission, used for deduplication
  • properties (object, optional): Additional properties (e.g., user_id for unique_count metrics)

The timestamp field reflects when the usage actually occurred, not when you’re submitting the event. ABAXUS aggregates events based on their timestamp relative to the subscription’s billing period.
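
As an illustration, a client might assemble the payload like this. This is a sketch, not part of ABAXUS: the `build_event` helper and the key-derivation scheme are our own, but deriving the idempotency key deterministically from a source request ID means a retried request produces the same key and gets deduplicated.

```python
import hashlib
from datetime import datetime, timezone

def build_event(customer_id, metric_key, value, request_id, properties=None):
    """Assemble an ABAXUS usage event payload.

    The idempotency key is derived from the source request ID, so
    retrying the same request yields the same key and ABAXUS
    deduplicates the submission instead of double-counting it.
    """
    idempotency_key = "req_" + hashlib.sha256(
        f"{customer_id}:{metric_key}:{request_id}".encode()
    ).hexdigest()[:16]
    event = {
        "customer_id": customer_id,
        "metric_key": metric_key,
        "value": value,
        # timestamp is when the usage occurred, not when we submit
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "idempotency_key": idempotency_key,
    }
    if properties:
        event["properties"] = properties
    return event
```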


Single Event Ingestion

Use POST /v1/events to send one event at a time. This is appropriate for low-volume scenarios or when you need to record events synchronously as they occur.

curl -X POST "$ABAXUS_URL/v1/events" \
  -H "Authorization: Bearer $ABAXUS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "customer_id": "cust_acme",
    "metric_key": "api_calls",
    "value": 1,
    "timestamp": "2026-04-03T10:15:33Z",
    "idempotency_key": "req_7f8a9b2c-unique-id-here",
    "properties": {
      "endpoint": "/v1/widgets",
      "method": "POST"
    }
  }'

Response: 202 Accepted

{
  "event_id": "evt_01hx9km2p3q4r5s6t7u8v9w0",
  "status": "queued",
  "idempotency_key": "req_7f8a9b2c-unique-id-here"
}

A 202 Accepted means the event has been validated and placed in the processing queue. It has not yet been aggregated into the usage totals — that happens asynchronously within milliseconds. If you need a synchronous, exact usage total, use POST /v1/usage/compute after submission.
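
Because every event carries an idempotency key, submission is safe to retry on transient failures. A minimal retry sketch (our own code, with `post` standing in for your HTTP client; it should return the response's status code and parsed body):

```python
import time

def submit_with_retry(post, event, max_attempts=3, base_delay=0.5):
    """Submit one event, retrying transient (5xx) failures.

    Retries are safe because the event's idempotency_key causes a
    duplicate submission to be deduplicated server-side rather than
    counted twice.
    """
    for attempt in range(max_attempts):
        status, body = post(event)
        if status == 202:
            return body  # {"event_id": ..., "status": "queued", ...}
        if status < 500:
            # Client errors (validation, auth) will not succeed on retry.
            raise ValueError(f"event rejected: {status} {body}")
        time.sleep(base_delay * (2 ** attempt))  # simple exponential backoff
    raise RuntimeError("gave up after repeated transient failures")
```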


Batch Event Ingestion

For high-volume scenarios, use POST /v1/events/batch to submit up to 1,000 events in a single HTTP request. This reduces connection overhead and is significantly more efficient for services that generate many events per second.

curl -X POST "$ABAXUS_URL/v1/events/batch" \
  -H "Authorization: Bearer $ABAXUS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "events": [
      {
        "customer_id": "cust_acme",
        "metric_key": "api_calls",
        "value": 1,
        "timestamp": "2026-04-03T10:15:33Z",
        "idempotency_key": "req_001_acme_20260403"
      },
      {
        "customer_id": "cust_globex",
        "metric_key": "api_calls",
        "value": 1,
        "timestamp": "2026-04-03T10:15:34Z",
        "idempotency_key": "req_002_globex_20260403"
      },
      {
        "customer_id": "cust_acme",
        "metric_key": "data_egress_gb",
        "value": 0.45,
        "timestamp": "2026-04-03T10:15:35Z",
        "idempotency_key": "egress_001_acme_20260403"
      }
    ]
  }'

Response: 207 Multi-Status

{
  "accepted": 3,
  "rejected": 0,
  "results": [
    { "index": 0, "event_id": "evt_01hx9km...", "status": "queued" },
    { "index": 1, "event_id": "evt_01hx9kn...", "status": "queued" },
    { "index": 2, "event_id": "evt_01hx9ko...", "status": "queued" }
  ]
}

The 207 Multi-Status response lets you see the result for each individual event in the batch. Events can fail individually (for example, if a customer_id doesn’t exist or a metric_key is inactive) without failing the whole batch. Check the status field for each result — queued means accepted, anything else means you need to investigate that specific event.
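
A caller might split the batch by outcome like this. This is a sketch of our own; it assumes non-queued results carry the same index and status fields as queued ones (the exact error shape of a rejected result is not shown in the response above):

```python
def partition_batch_results(events, response):
    """Split a submitted batch by per-event outcome from a 207 body.

    Returns (queued_ids, failures): each failure pairs the original
    event (matched back up by index) with its result entry, so the
    caller can inspect, fix, and resubmit just those events.
    """
    queued, failures = [], []
    for result in response["results"]:
        if result["status"] == "queued":
            queued.append(result["event_id"])
        else:
            failures.append((events[result["index"]], result))
    return queued, failures
```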


The Async Queue

Both single and batch ingestion endpoints are asynchronous. Events are validated immediately (schema validation, customer/metric existence checks) and then placed into a processing queue. The actual aggregation into usage totals happens within milliseconds in a background worker.

This design means:

  • Ingestion endpoints respond fast (< 50ms typical) even under heavy load
  • Your product services are decoupled from the aggregation compute
  • The queue absorbs traffic spikes without backpressure

The queue is durable — events are not lost if the background worker restarts. ABAXUS uses PostgreSQL’s SKIP LOCKED pattern for reliable queue processing.


When to Use Single vs Batch

Use single ingestion when:

  • You’re ingesting events directly from a hot-path API handler
  • Volume is low (< 100 events/second)
  • You want per-event response details
  • Simplicity matters more than throughput

Use batch ingestion when:

  • A downstream service aggregates events before sending to ABAXUS
  • You’re processing a queue or stream (Kafka, SQS, Kinesis)
  • You need to maximize throughput (batch ingestion is ~10x more efficient per event)
  • You’re loading events from a log file or data pipeline

A common pattern is to buffer events in your service for 1–5 seconds and flush as a batch, giving you the simplicity of event-level tracking with the efficiency of batched delivery.
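
A minimal sketch of that buffer-and-flush pattern (our own code, not an ABAXUS SDK; `send_batch` stands in for a POST to /v1/events/batch). It checks the flush conditions on each add; a production version would also flush on a timer and persist the buffer so events are not lost on crash:

```python
import threading
import time

class EventBuffer:
    """Buffer events in memory and flush them as a batch."""

    def __init__(self, send_batch, max_size=1000, interval=2.0):
        self.send_batch = send_batch  # callable taking a list of events
        self.max_size = max_size      # the batch endpoint accepts up to 1,000
        self.interval = interval      # seconds between time-based flushes
        self._events = []
        self._lock = threading.Lock()
        self._last_flush = time.monotonic()

    def add(self, event):
        with self._lock:
            self._events.append(event)
            due = (len(self._events) >= self.max_size
                   or time.monotonic() - self._last_flush >= self.interval)
        if due:
            self.flush()

    def flush(self):
        with self._lock:
            batch, self._events = self._events, []
            self._last_flush = time.monotonic()
        if batch:
            self.send_batch(batch)
```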


Event Properties

The properties field accepts arbitrary key-value pairs that can be used by unique_count metrics (via the unique_on configuration). Properties are stored alongside the event and available in the event list API for debugging, but they do not affect billing for sum, count, max, min, or last aggregations.

{
  "customer_id": "cust_acme",
  "metric_key": "monthly_active_users",
  "value": 1,
  "timestamp": "2026-04-03T10:15:33Z",
  "idempotency_key": "mau_user_42_20260403",
  "properties": {
    "user_id": "user_42",
    "region": "us-east-1",
    "plan": "growth"
  }
}

If monthly_active_users is configured with unique_on: "user_id", ABAXUS counts distinct properties.user_id values across all events in the period.
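
The semantics can be sketched in a few lines of Python (an illustration of the aggregation described above, not ABAXUS's implementation): events missing the unique_on property are ignored, and repeated values count once.

```python
def unique_count(events, unique_on):
    """Count distinct values of one property across a period's events,
    mirroring the unique_count aggregation described above."""
    seen = {
        event["properties"][unique_on]
        for event in events
        if unique_on in event.get("properties", {})
    }
    return len(seen)
```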