BigQuery Cost Analysis by Project
When your organisation runs BigQuery across multiple GCP projects, understanding which project drives the most cost is essential for budget allocation and chargeback. This query provides project-level cost attribution using INFORMATION_SCHEMA.
Why This Matters
Multi-project environments are the norm in mature GCP setups — separate projects for production, staging, analytics, and data science. Without project-level cost visibility, teams cannot perform accurate chargeback, set meaningful budgets, or identify which project needs optimization attention first.
How It Works
This query reads from INFORMATION_SCHEMA.JOBS_BY_PROJECT and groups costs by project_id. For organisation-wide visibility, you would run similar queries against JOBS_BY_ORGANIZATION (requires organisation-level access). Each project's total on-demand cost is calculated from bytes billed at $6.25/TiB.
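The organisation-wide variant mentioned above can be sketched as follows. This is a sketch, not the full query below: it assumes you hold an organisation-level IAM role that can read JOBS_BY_ORGANIZATION, and the `region-us` qualifier is a placeholder for your region.

```sql
-- Sketch: organisation-wide cost per project via JOBS_BY_ORGANIZATION.
-- Requires organisation-level access; region qualifier is a placeholder.
SELECT
  project_id,
  ROUND(SUM(COALESCE(total_bytes_billed, 0)) / POW(1024, 4) * 6.25, 2) AS estimated_cost_usd
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE'
GROUP BY project_id
ORDER BY estimated_cost_usd DESC;
```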
SQL Query
-- Aggregate on-demand cost per GCP project
DECLARE lookback_days INT64 DEFAULT 30;

WITH jobs AS (
  SELECT
    project_id,
    job_id,
    COALESCE(total_bytes_billed, 0) AS bytes_billed,
    ROW_NUMBER() OVER (PARTITION BY project_id, job_id ORDER BY end_time DESC) AS rn
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
  WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND job_type = 'QUERY'
    AND state = 'DONE'
    AND total_slot_ms IS NOT NULL
),
deduplicated AS (
  SELECT project_id, job_id, bytes_billed
  FROM jobs
  WHERE rn = 1
)
SELECT
  project_id,
  COUNT(DISTINCT job_id) AS total_jobs,
  ROUND(SUM(bytes_billed) / POW(1024, 3), 2) AS gib_scanned,
  ROUND(SUM(bytes_billed) / POW(1024, 4) * 6.25, 2) AS estimated_cost_usd
FROM deduplicated
GROUP BY project_id
ORDER BY estimated_cost_usd DESC;
Query Explanation
The query collects all completed query jobs in the lookback window, deduplicates by job_id (keeping the most recent row per job), groups by project_id, and sums total_bytes_billed. It also counts distinct jobs per project so you can see which projects run the most queries. The on-demand cost formula is bytes_billed / 1024^4 * $6.25, i.e. billed bytes converted to TiB and multiplied by the $6.25/TiB on-demand rate.
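As a quick sanity check of the formula, the conversion can be verified with a standalone SELECT that scans no table data (the 500 GiB figure is an illustrative example, not from the guide):

```sql
-- Worked example: a query billing 500 GiB costs 500/1024 TiB * $6.25 ≈ $3.05.
SELECT ROUND(500 * POW(1024, 3) / POW(1024, 4) * 6.25, 2) AS cost_usd;  -- 3.05
```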
Key Insights
Staging and development projects can sometimes cost more than production if engineers run unoptimized exploratory queries.
Projects with high job counts but low cost are running efficient queries — study their patterns.
Projects with low job counts but high cost have individual expensive queries worth investigating.
Cross-reference with the by-user query to see which users are responsible for costs in each project.
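The cross-reference in the last point can be sketched by adding user_email to the grouping; `your-project` and `region-us` are placeholders, as in the main query:

```sql
-- Sketch: per-project, per-user cost attribution over the last 30 days.
SELECT
  project_id,
  user_email,
  ROUND(SUM(COALESCE(total_bytes_billed, 0)) / POW(1024, 4) * 6.25, 2) AS estimated_cost_usd
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND state = 'DONE'
GROUP BY project_id, user_email
ORDER BY estimated_cost_usd DESC;
```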
Best Practices
1. Set per-project budget alerts in the GCP Billing console.
2. Use separate billing accounts or labels for chargeback to business units.
3. Consider using BigQuery Editions with reservations assigned per project for predictable costs.
4. Review project-level costs monthly and flag any project with >20% month-over-month increase.
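The month-over-month check in the last practice can be sketched directly in SQL. This is a sketch under assumptions: it compares the current (possibly partial) month against the previous one, the 20% threshold matches the suggestion above, and `your-project`/`region-us` are placeholders.

```sql
-- Sketch: flag projects whose monthly cost is >20% above the previous month.
WITH monthly AS (
  SELECT
    project_id,
    DATE_TRUNC(DATE(creation_time), MONTH) AS month,
    SUM(COALESCE(total_bytes_billed, 0)) / POW(1024, 4) * 6.25 AS cost_usd
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
  WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY)
    AND job_type = 'QUERY'
    AND state = 'DONE'
  GROUP BY project_id, month
)
SELECT
  project_id,
  month,
  ROUND(cost_usd, 2) AS cost_usd,
  ROUND(LAG(cost_usd) OVER (PARTITION BY project_id ORDER BY month), 2) AS prev_month_cost_usd
FROM monthly
-- Keep only months exceeding 1.2x the prior month's cost.
QUALIFY cost_usd > 1.2 * LAG(cost_usd) OVER (PARTITION BY project_id ORDER BY month);
```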
Want CloudClerk to find these savings automatically?
Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
Related Guides
BigQuery Cost Analysis by User
Find which users drive the most BigQuery spending. Query INFORMATION_SCHEMA to rank users by on-demand cost and identify optimization targets.
Read guide

BigQuery Cost Analysis by Label
Break down BigQuery costs by job labels. Attribute spending to teams, pipelines, or features using the label key-value pairs on your jobs.
Read guide

BigQuery Cost Analysis by Query
Analyze every individual query's cost, slot usage, and execution time. Get a detailed breakdown of on-demand cost per query execution.
Read guide

BigQuery Top Queries by Cost
Find the most expensive BigQuery queries by on-demand cost. Rank queries by total bytes billed to identify the biggest cost drivers in your project.
Read guide