BigQuery Slot Usage by Second
Second-level slot data is the finest granularity available in BigQuery. Use this for detailed debugging of specific time windows when queries were queued or slow.
Why This Matters
When you know approximately when a performance issue occurred, second-level data lets you reconstruct exactly what happened — how many slots were in use, how quickly demand ramped up, and how long the contention lasted. This is the query you run after identifying a problem window at coarser granularity.
How It Works
Aggregates period_slot_ms from JOBS_TIMELINE at SECOND granularity, dividing by 1,000 to convert slot-milliseconds into average slots per second. Because of the data volume at this granularity, keep lookback_days small (1 day is recommended).
SQL Query
Replace `your-project` with your project ID (and adjust `region-us` if needed) to get a ready-to-run query:
-- Per-second slot consumption for detailed burst analysis
DECLARE lookback_days INT64 DEFAULT 1; -- keep small for second-level data
WITH second_slots AS (
  SELECT
    period_start AS second,
    ROUND(SUM(period_slot_ms) / 1000, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY second
),
calendar AS (
  SELECT ts AS second
  FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), SECOND),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), SECOND),
    INTERVAL 1 SECOND)) AS ts
)
SELECT
  c.second,
  IFNULL(s.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN second_slots s ON c.second = s.second
ORDER BY c.second;
Query Explanation
At second granularity, the query divides period_slot_ms by 1,000: a one-second window contains 1,000 ms, so the summed slot-milliseconds divided by 1,000 gives the average number of slots in use during that second. The generated time series fills every second, including those with zero usage. Warning: 1 day = 86,400 rows and 7 days = 604,800 rows — keep the lookback window small.
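The slot-ms arithmetic and the row-count warning can be sanity-checked outside BigQuery. A minimal sketch (the sample slot-ms value is invented for illustration):

```python
# Convert a per-second slot-millisecond total into average slots in use.
def slot_ms_to_avg_slots(period_slot_ms_sum: int) -> float:
    """A 1-second window holds 1,000 ms, so slot-ms / 1,000 = average slots."""
    return round(period_slot_ms_sum / 1000, 2)

# e.g. two queries each holding ~50 slots for a full second: 100,000 slot-ms.
print(slot_ms_to_avg_slots(100_000))  # 100.0 average slots during that second

# Row-count warning: the calendar CTE emits one row per second of lookback.
rows_per_day = 24 * 60 * 60
print(rows_per_day)      # 86400 rows for 1 day
print(7 * rows_per_day)  # 604800 rows for 7 days
```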
Key Insights
Second-level data reveals whether slot demand ramps up gradually or spikes instantly.
If slot usage oscillates rapidly (high one second, zero the next), you have many short-lived queries.
Sustained high slot usage for >60 seconds indicates a single heavy query or concurrent batch.
Use this data alongside the concurrent queries query to correlate slot demand with query count.
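The ramp-versus-spike distinction above can be checked programmatically once the query results are exported. A sketch assuming the output has been read into a list of per-second avg_slots values; the threshold and window size are illustrative, not tuned:

```python
from statistics import mean

def classify_burst(avg_slots: list[float], spike_ratio: float = 3.0) -> str:
    """Heuristic: compare each second's usage to the mean of the three
    seconds before it. A large jump suggests an instant spike; otherwise
    demand ramped gradually (or stayed flat)."""
    for i in range(3, len(avg_slots)):
        baseline = mean(avg_slots[i - 3:i])
        if baseline > 0 and avg_slots[i] / baseline >= spike_ratio:
            return "instant spike"
    return "gradual ramp (or flat)"

print(classify_burst([10, 12, 11, 95, 100, 98]))  # instant spike
print(classify_burst([10, 20, 30, 40, 50, 60]))   # gradual ramp (or flat)
```

The same exported series can be scanned for rapid oscillation (alternating high and near-zero seconds) to confirm the many-short-queries pattern.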
Best Practices
1. Only use second-level granularity for targeted debugging of specific incidents — not for ongoing monitoring.
2. Combine with concurrent queries data to understand if high slot usage comes from few heavy queries or many light ones.
3. Export to a visualization tool to create slot usage heatmaps.
4. Set lookback_days to 1 to avoid processing excessive data.
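The heatmap practice above usually starts with a pivot of the per-second rows into a minute-by-second grid. A minimal sketch, assuming the query results were exported as (ISO timestamp, avg_slots) pairs; the sample rows are invented:

```python
from collections import defaultdict
from datetime import datetime

rows = [  # invented sample export of (second, avg_slots)
    ("2024-01-01T00:00:00", 10.0),
    ("2024-01-01T00:00:01", 95.5),
    ("2024-01-01T00:01:00", 40.0),
]

# Pivot into {minute_label: {second_of_minute: avg_slots}} — one heatmap
# row per minute, one column per second.
grid = defaultdict(dict)
for ts, slots in rows:
    t = datetime.fromisoformat(ts)
    grid[t.strftime("%H:%M")][t.second] = slots

print(grid["00:00"][1])  # 95.5
```

From here, any plotting library that accepts a 2-D grid can render the heatmap.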
Want CloudClerk to find these savings automatically?
Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
Related Guides
BigQuery Slot Usage by Minute
Get minute-level BigQuery slot consumption data. Essential for debugging performance issues and understanding burst slot demand.
BigQuery Slot Usage by Hour
Monitor hourly BigQuery slot consumption to identify peak usage windows and optimize your reservation scheduling.
BigQuery Slot Usage by Day
Track daily BigQuery slot consumption over time. Identify daily usage patterns and trends to right-size your slot reservations.
BigQuery Concurrent Queries by Minute
Count how many BigQuery queries run simultaneously each minute. Detect concurrency bottlenecks that cause queuing and slow performance.