
BigQuery Concurrent Queries by Minute

This query counts the number of distinct queries running during each minute, revealing concurrency patterns that affect performance. High concurrency can lead to query queuing under both on-demand and Editions pricing.

Why This Matters

BigQuery has concurrency limits that vary by billing model — 100 concurrent queries for on-demand, or limited by reservation slot capacity for Editions. When you exceed these limits, queries queue and wait. Understanding your concurrency patterns helps you avoid these bottlenecks and schedule workloads efficiently.

How It Works

The query reads from INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT, which logs per-second job activity. It truncates timestamps to the minute, deduplicates by job_id within each minute, and counts the distinct jobs running during each minute.

SQL Query


SQL
-- Count concurrent queries per minute to detect bottlenecks

DECLARE lookback_days INT64 DEFAULT 7;

WITH timeline AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    job_id,
    -- Deduplicate per (minute, job_id): JOBS_TIMELINE logs one row per second,
    -- so a job running within a minute would otherwise be counted many times
    ROW_NUMBER() OVER (
      PARTITION BY TIMESTAMP_TRUNC(period_start, MINUTE), job_id
      ORDER BY period_start
    ) AS rn
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
  WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND job_type = 'QUERY'
    AND parent_job_id IS NULL
)
SELECT
  minute,
  COUNT(job_id) AS concurrent_queries
FROM timeline
WHERE rn = 1
GROUP BY minute
ORDER BY minute;
Replace your-project and region-us with your GCP project and dataset region.

Query Explanation

JOBS_TIMELINE_BY_PROJECT logs one row per job for each second the job is executing, so a long-running job appears in every minute it spans. The ROW_NUMBER() deduplication, partitioned by both the truncated minute and job_id, ensures each job is counted at most once per minute. parent_job_id IS NULL filters out child jobs spawned by scripts, and the count per truncated minute gives the concurrent job count.
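The same result can be expressed more compactly with COUNT(DISTINCT ...), which makes the window function unnecessary. A sketch against the same view, with the same placeholder project and region to substitute:

```sql
-- Equivalent per-minute concurrency using COUNT(DISTINCT) instead of ROW_NUMBER()
DECLARE lookback_days INT64 DEFAULT 7;

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  COUNT(DISTINCT job_id) AS concurrent_queries
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
  AND job_type = 'QUERY'
  AND parent_job_id IS NULL
GROUP BY minute
ORDER BY minute;
```

The CTE form is kept above because it generalizes more easily when you want to carry extra per-row columns through the deduplication.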

Key Insights

  • Minutes with >50 concurrent queries risk hitting on-demand concurrency limits (100 per project).

  • Consistent high concurrency at specific times indicates scheduled job overlap — stagger start times.

  • Spikes in concurrency followed by drops may indicate that some queries are being queued and then cleared in bursts.

  • Low concurrency with high slot usage means a few very heavy queries — a different optimization strategy than many light queries.
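The last insight can be checked directly: JOBS_TIMELINE_BY_PROJECT also exposes period_slot_ms, so concurrency and approximate slot usage can be compared in a single pass. A sketch, using the same placeholders as the main query:

```sql
-- Per-minute concurrency alongside approximate average slot usage
-- (slot-milliseconds in a minute / 60,000 ms ≈ average slots in use)
DECLARE lookback_days INT64 DEFAULT 7;

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  COUNT(DISTINCT job_id) AS concurrent_queries,
  ROUND(SUM(period_slot_ms) / (60 * 1000), 1) AS avg_slots_used
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
  AND job_type = 'QUERY'
  AND parent_job_id IS NULL
GROUP BY minute
ORDER BY minute;
```

Minutes with low concurrent_queries but high avg_slots_used are the "few heavy queries" case; the reverse pattern points at many light queries.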

Best Practices

  1. Stagger pipeline start times by even 1-2 minutes to reduce peak concurrency.

  2. Use BigQuery's job priority settings (INTERACTIVE vs BATCH) to prevent ad-hoc queries from being queued behind batch jobs.

  3. Monitor for queries in the PENDING state using the state column in JOBS_TIMELINE_BY_PROJECT.

  4. Consider Editions reservations with enough maximum slots to handle your peak concurrency.
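Queuing from practice 3 shows up directly in the timeline's state column. A sketch, assuming the standard JOBS_TIMELINE_BY_PROJECT schema and the same placeholders as above:

```sql
-- Minutes in which queries sat in the PENDING state, i.e. were queued
DECLARE lookback_days INT64 DEFAULT 7;

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  COUNT(DISTINCT job_id) AS queued_queries
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
  AND job_type = 'QUERY'
  AND state = 'PENDING'
GROUP BY minute
ORDER BY minute;
```

Any non-empty result here means jobs waited before running; correlating these minutes with the concurrency query above shows which peaks actually caused queuing.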

Want CloudClerk to find these savings automatically?

Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
