
BigQuery Slot Usage by Day

Daily slot usage gives you the big picture of your BigQuery compute consumption. This query generates a time series of average slot usage per day, with zero-fill for days with no activity, making it ideal for trend analysis and capacity planning.

Why This Matters

Daily trends reveal patterns that inform critical decisions: Are you growing and need more capacity? Do weekends show minimal usage that could be scaled down? Is there a weekly pattern you could exploit with scheduled reservations? This data is essential for transitioning from on-demand to Editions pricing.

How It Works

The query reads from INFORMATION_SCHEMA.JOBS_TIMELINE, which provides per-second slot usage data. It sums period_slot_ms per day and divides by the number of milliseconds in a day to get average slot usage. A calendar CTE creates the full time series, and a LEFT JOIN zero-fills gaps.

SQL Query


SQL
-- Average slot consumption per day (zero-filled time series)

DECLARE lookback_days INT64 DEFAULT 14;
DECLARE ms_per_day INT64 DEFAULT 86400000;

WITH daily_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, DAY) AS day,
    ROUND(SUM(period_slot_ms) / ms_per_day, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  -- Filter on job_creation_time as well so BigQuery can prune partitions;
  -- pad by one day because a job starts before its later periods.
  WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days + 1 DAY)
    AND period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'  -- exclude parent scripts to avoid double-counting child jobs
  GROUP BY day
),
calendar AS (
  SELECT ts AS day FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), DAY),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), DAY),
    INTERVAL 1 DAY)) AS ts
)
SELECT
  c.day,
  IFNULL(d.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN daily_slots d ON c.day = d.day
ORDER BY c.day
Replace your-project and region-us with your GCP project and dataset region.

Query Explanation

JOBS_TIMELINE records period_slot_ms every second. Summing these values and dividing by the number of milliseconds in a day (86,400,000) yields the average number of slots used during that day. GENERATE_TIMESTAMP_ARRAY creates a complete day-by-day timeline, and LEFT JOIN ensures zero-fill for days with no usage.
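The arithmetic can be sketched in Python with hypothetical per-second samples (the values below are made up for illustration, not taken from a real project):

```python
# Each JOBS_TIMELINE row reports slot-milliseconds consumed in a
# one-second window (period_slot_ms). Summing one day's rows and
# dividing by the milliseconds in a day gives that day's average slots.
MS_PER_DAY = 86_400_000  # 24 * 60 * 60 * 1000

# Hypothetical day: 2,000 slot-ms per second (i.e. 2 slots) for
# 12 hours (43,200 seconds), idle for the other 12 hours.
period_slot_ms = [2_000] * 43_200

avg_slots = round(sum(period_slot_ms) / MS_PER_DAY, 2)
print(avg_slots)  # 1.0
```

Two slots for half a day averages out to one slot over the full day, which is exactly what the SUM-then-divide in the query computes.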

Key Insights

  • Consistent daily usage suggests scheduled pipelines — good candidates for committed Editions reservations.

  • Spiky usage with quiet days suggests ad-hoc workloads — on-demand or autoscaled Editions might be more cost-effective.

  • Weekend vs weekday patterns can inform reservation scheduling (scale down on weekends).

  • Trending upward over weeks indicates growing workloads — plan capacity before hitting slot limits.
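The upward-trend check in the last insight can be sketched as a simple week-over-week comparison; the daily averages and the 20% threshold below are hypothetical:

```python
# Compare the mean of the most recent 7 daily slot averages against
# the prior 7 days to flag sustained growth. Values are hypothetical.
daily_avg_slots = [100, 105, 98, 110, 102, 95, 108,    # prior week
                   130, 128, 135, 140, 132, 138, 145]  # recent week

prev = sum(daily_avg_slots[:7]) / 7
recent = sum(daily_avg_slots[7:]) / 7
growth = (recent - prev) / prev

# Flag growth above 20% as a capacity-planning signal (arbitrary cutoff)
print(f"{growth:.0%} week-over-week", "-> plan capacity" if growth > 0.2 else "")
```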

Best Practices

  1. Use this data to right-size Editions baseline slots to your P50 daily usage.

  2. Set autoscaler max slots to cover your P95 daily peak.

  3. Schedule heavy ETL jobs during off-peak hours to smooth out slot demand.

  4. Alert on days where slot usage exceeds your expected maximum.
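The P50/P95 sizing rules above can be sketched in Python using nearest-rank percentiles; the 14 daily averages below are hypothetical:

```python
# Right-size Editions: baseline near the P50 of daily average slots,
# autoscaler max near the P95. All numbers here are hypothetical.
def percentile(values, p):
    """Nearest-rank percentile of a list (p in 0..100)."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, -(-p * len(ordered) // 100) - 1))
    return ordered[rank]

daily_avg_slots = [120, 135, 90, 410, 150, 80, 95, 140, 160, 600,
                   130, 125, 110, 145]  # last 14 days (hypothetical)

baseline = percentile(daily_avg_slots, 50)       # Editions baseline slots
autoscale_max = percentile(daily_avg_slots, 95)  # autoscaler ceiling
print(baseline, autoscale_max)  # 130 600
```

Note how the two spiky days (410 and 600) pull the P95 far above the P50: the baseline covers typical load cheaply while the autoscaler absorbs the peaks.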

Want CloudClerk to find these savings automatically?

Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
