BigQuery Concurrent Queries by Minute
This query counts the number of distinct queries running during each minute, revealing concurrency patterns that affect performance. High concurrency can lead to query queuing under both on-demand and Editions pricing.
Why This Matters
BigQuery has concurrency limits that vary by billing model — 100 concurrent queries for on-demand, or limited by reservation slot capacity for Editions. When you exceed these limits, queries queue and wait. Understanding your concurrency patterns helps you avoid these bottlenecks and schedule workloads efficiently.
How It Works
The query reads from INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT, which records one row per job for each second the job is running. It truncates each period to the minute, deduplicates so that each job is counted at most once per minute, and counts the distinct jobs active in each minute.
SQL Query
Replace `your-project` (and the region qualifier, if your data lives outside the US multi-region) with your own values before running:
-- Count concurrent queries per minute to detect bottlenecks
DECLARE lookback_days INT64 DEFAULT 7;

WITH timeline AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    job_id,
    -- Keep one row per job per minute; a job spanning several
    -- minutes is counted in each of those minutes
    ROW_NUMBER() OVER (
      PARTITION BY job_id, TIMESTAMP_TRUNC(period_start, MINUTE)
      ORDER BY period_start
    ) AS rn
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
  WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND job_type = 'QUERY'
    AND parent_job_id IS NULL
)
SELECT
  minute,
  COUNT(job_id) AS concurrent_queries
FROM timeline
WHERE rn = 1
GROUP BY minute
ORDER BY minute;

Query Explanation
JOBS_TIMELINE_BY_PROJECT logs one row per job for each second of execution. The ROW_NUMBER() window keeps a single row per job per minute, so a query that runs for five minutes is counted in five minute buckets but never twice within the same minute. parent_job_id IS NULL filters out child jobs spawned by scripts, which would otherwise inflate the count. Counting the surviving rows per truncated minute gives the number of queries active in each minute.
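If you prefer to avoid the window function, the same per-minute deduplication can be expressed with COUNT(DISTINCT ...). A sketch, using the same project and region placeholders as the main query:

```sql
-- Equivalent formulation without ROW_NUMBER(): each distinct
-- job is counted once per minute it appears in the timeline
SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  COUNT(DISTINCT job_id) AS concurrent_queries
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
  AND parent_job_id IS NULL
GROUP BY minute
ORDER BY minute;
```

COUNT(DISTINCT) is simpler to read; the ROW_NUMBER() form is handy when you want to carry extra per-job columns through to the final SELECT.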
Key Insights
Minutes with >50 concurrent queries risk hitting on-demand concurrency limits (100 per project).
Consistent high concurrency at specific times indicates scheduled job overlap — stagger start times.
Spikes in concurrency followed by drops may indicate that some queries are being queued and then cleared in bursts.
Low concurrency with high slot usage means a few very heavy queries — different optimization strategy than many light queries.
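To tell the last two patterns apart, you can extend the same timeline scan with period_slot_ms, which JOBS_TIMELINE_BY_PROJECT also exposes. A sketch, assuming the same placeholders as the main query:

```sql
-- Concurrency vs. average slot usage per minute: many light
-- queries and a few heavy ones call for different fixes
SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  COUNT(DISTINCT job_id) AS concurrent_queries,
  -- Total slot-milliseconds divided by ms per minute = average slots
  ROUND(SUM(period_slot_ms) / (1000 * 60), 1) AS avg_slots
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
  AND parent_job_id IS NULL
GROUP BY minute
ORDER BY minute;
```

Minutes with high avg_slots but few concurrent_queries point at individual heavy queries to optimize; the reverse points at scheduling and concurrency management.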
Best Practices
1. Stagger pipeline start times by even 1-2 minutes to reduce peak concurrency.
2. Use BigQuery's job priority settings (INTERACTIVE vs BATCH) to prevent ad-hoc queries from being queued behind batch jobs.
3. Monitor for queries sitting in the PENDING state (queued jobs) via INFORMATION_SCHEMA.JOBS.
4. Consider Editions reservations with enough max slots to absorb your peak concurrency.
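The queued-jobs check from the list above can be run directly against INFORMATION_SCHEMA.JOBS_BY_PROJECT. A sketch, assuming the same project and region as the main query:

```sql
-- Jobs currently queued (PENDING) in this project
SELECT
  job_id,
  user_email,
  creation_time,
  TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), creation_time, SECOND) AS seconds_queued
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE state = 'PENDING'
  AND job_type = 'QUERY'
ORDER BY creation_time;
```

Persistent rows here, lining up with the high-concurrency minutes from the main query, confirm that queuing rather than slow execution is the bottleneck.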
Want CloudClerk to find these savings automatically?
Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
Related Guides
BigQuery Slot Usage by Minute
Get minute-level BigQuery slot consumption data. Essential for debugging performance issues and understanding burst slot demand.
Read guide

BigQuery Slot Usage by Hour
Monitor hourly BigQuery slot consumption to identify peak usage windows and optimize your reservation scheduling.
Read guide

BigQuery Top Queries by Complexity
Find the most compute-intensive BigQuery queries ranked by slot usage. Identify queries consuming disproportionate computational resources.
Read guide

BigQuery Top Queries by Duration
Find the longest-running BigQuery queries. Identify slow queries that block resources and impact user experience with runtime analysis.
Read guide