
BigQuery Slot Usage by Second

Second-level slot data is the finest granularity available in BigQuery. Use this for detailed debugging of specific time windows when queries were queued or slow.

Why This Matters

When you know approximately when a performance issue occurred, second-level data lets you reconstruct exactly what happened — how many slots were in use, how quickly demand ramped up, and how long the contention lasted. This is the query you run after identifying a problem window at coarser granularity.

How It Works

The query sums period_slot_ms from INFORMATION_SCHEMA.JOBS_TIMELINE at SECOND granularity and divides by 1,000 to convert slot-milliseconds into average slots per second. Due to the volume of data, keep lookback_days small (1 day is recommended).

SQL Query


SQL
-- Per-second slot consumption for detailed burst analysis

DECLARE lookback_days INT64 DEFAULT 1; -- keep small for second-level data

WITH second_slots AS (
  SELECT
    period_start AS second,
    ROUND(SUM(period_slot_ms) / 1000, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY second
),
calendar AS (
  SELECT ts AS second FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), SECOND),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), SECOND),
    INTERVAL 1 SECOND)) AS ts
)
SELECT
  c.second,
  IFNULL(s.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN second_slots s ON c.second = s.second
ORDER BY c.second
Replace your-project and region-us with your GCP project and dataset region.

Query Explanation

At second granularity, each one-second period contributes at most 1,000 slot-milliseconds per slot, so dividing the summed period_slot_ms by 1,000 yields the average number of slots in use during that second. The generated calendar fills every second, so idle periods show up as explicit zeros rather than gaps. Warning: 1 day = 86,400 rows, 7 days = 604,800 rows — keep the lookback window small.
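When you already know roughly when an incident occurred, you can skip the full-day scan and restrict the same aggregation to a fixed window. A sketch, using the same placeholders as the main query and a hypothetical incident window (replace the timestamps with your own):

```sql
-- Per-second slot usage for a known incident window only
-- (timestamps below are illustrative placeholders)
SELECT
  period_start AS second,
  ROUND(SUM(period_slot_ms) / 1000, 2) AS avg_slots
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE period_start BETWEEN TIMESTAMP('2024-05-01 09:00:00 UTC')
                       AND TIMESTAMP('2024-05-01 09:15:00 UTC')
  AND statement_type != 'SCRIPT'
GROUP BY second
ORDER BY second;
```

Narrowing the filter on period_start keeps the scanned data proportional to the window you are debugging rather than the full lookback.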

Key Insights

  • Second-level data reveals whether slot demand ramps up gradually or spikes instantly.

  • If slot usage oscillates rapidly (high one second, zero the next), you have many short-lived queries.

  • Sustained high slot usage for >60 seconds indicates a single heavy query or concurrent batch.

  • Use this data alongside the concurrent queries query to correlate slot demand with query count.
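The slot-versus-query-count correlation can also be computed in a single pass, since JOBS_TIMELINE carries a job_id per row. A sketch, assuming the same project and region placeholders:

```sql
-- Per-second slot demand alongside the number of jobs active in that second
SELECT
  period_start AS second,
  ROUND(SUM(period_slot_ms) / 1000, 2) AS avg_slots,
  COUNT(DISTINCT job_id) AS concurrent_queries
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND statement_type != 'SCRIPT'
GROUP BY second
ORDER BY second;
```

High avg_slots with a low concurrent_queries count points to a few heavy queries; high values in both columns point to many lighter queries competing at once.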

Best Practices

  1. Only use second-level granularity for targeted debugging of specific incidents — not for ongoing monitoring.

  2. Combine with concurrent queries data to understand if high slot usage comes from few heavy queries or many light ones.

  3. Export to a visualization tool to create slot usage heatmaps.

  4. Set lookback_days to 1 to avoid processing excessive data.
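For heatmaps, 86,400 points per day is usually too dense to plot directly. One option is to downsample the per-second series to one row per minute while preserving the peak, so bursts are not averaged away. A sketch, assuming the same placeholders:

```sql
-- Downsample second-level usage to per-minute peaks for visualization
SELECT
  TIMESTAMP_TRUNC(per_second.second, MINUTE) AS minute,
  ROUND(MAX(per_second.slots), 2) AS peak_slots
FROM (
  SELECT
    period_start AS second,
    SUM(period_slot_ms) / 1000 AS slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY second
) AS per_second
GROUP BY minute
ORDER BY minute;
```

Keeping MAX rather than AVG per minute means a one-second spike to thousands of slots remains visible in the downsampled chart.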

Want CloudClerk to find these savings automatically?

Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
