Contents
- The on-demand billing problem at scale
- BigQuery Editions: Standard, Enterprise, Enterprise Plus
- Slots: the unit of compute you should be buying
- Storage pricing traps
- BigQuery Omni and multi-cloud data costs
- Negotiation strategy: what Google will and won't move on
- How we negotiate Google Cloud analytics contracts
The On-Demand Billing Problem at Scale
Overpaying for Google Cloud? We handle Google Cloud and Workspace negotiation on a 25% gainshare basis — you keep 75% of every dollar saved. No retainer. No risk.
Get a free Google Cloud savings estimate →

Google positioned BigQuery's on-demand model as democratising analytics — pay only for what you query. For data teams running exploratory queries, this is genuinely excellent pricing. For enterprises running production analytics against large datasets on a predictable schedule, it's a structural cost trap.
The per-TB-scanned model charges for bytes processed, not bytes returned. A query that scans a 50TB table to return 10 rows costs the same as a query that returns the entire table. Without rigorous query engineering, partitioning strategies, and clustered table designs, on-demand BigQuery costs scale with your data volume — not your actual business intelligence requirements. Data teams moving from Redshift or Snowflake to BigQuery are frequently surprised when their first full month on on-demand exceeds their prior provider's annual contract.
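The arithmetic is worth making concrete. A minimal sketch of the per-TB-scanned model, assuming the US on-demand list rate of $6.25/TB (substitute your actual rate — this figure is an assumption, not from your contract):

```python
# Illustrative cost model for BigQuery's on-demand, per-TB-scanned billing.
# The $6.25/TB figure is an assumed US list rate; substitute your own.
ON_DEMAND_USD_PER_TB = 6.25

def query_cost_usd(bytes_scanned: float,
                   usd_per_tb: float = ON_DEMAND_USD_PER_TB) -> float:
    """On-demand BigQuery bills on bytes processed, not bytes returned."""
    return bytes_scanned / 1e12 * usd_per_tb

# A full scan of a 50 TB table costs the same whether it returns 10 rows
# or the whole table:
full_scan = query_cost_usd(50e12)      # $312.50 per run
# The same query against a daily-partitioned table, reading one partition:
pruned = query_cost_usd(50e12 / 365)   # well under $1 per run
```

Run hundreds of times a day on a schedule, the difference between the pruned and unpruned version of one query is the difference between a rounding error and a six-figure annual line item.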
The path out of on-demand isn't just choosing flat-rate pricing — it's choosing the right version of flat-rate pricing for your workload mix, then negotiating the commercial terms to reflect your actual consumption pattern rather than Google's list price structure.
BigQuery Editions: Standard, Enterprise, Enterprise Plus
In 2023, Google replaced the legacy flat-rate and flex slot model with BigQuery Editions, a tiered subscription structure that bundles compute capacity (slots), data governance features, and support into three tiers. Each edition prices compute capacity differently and includes different capabilities that may or may not be relevant to your workload:
| Edition | List Price (per slot-hour) | Key Features | Best For |
|---|---|---|---|
| Standard | $0.04/slot-hour | Core analytics, shared slots, no autoscaling SLA | Development, low-priority batch workloads |
| Enterprise | $0.06/slot-hour | Dedicated slots, autoscaling, CMEK, 1-yr commitment option | Production analytics, regulated industries |
| Enterprise Plus | $0.10/slot-hour | All Enterprise + cross-region redundancy, BI Engine, ML features | Mission-critical BI, global deployments, AI-enriched analytics |
Prices are US region list rates. Multi-region and EU pricing varies. Negotiated rates for annual or multi-year commits typically discount 20–40% from list.
The critical insight is that editions are not additive — you choose an edition for a workload type and all queries in that reservation run at that edition's slot rate. Many enterprises are sold Enterprise Plus for all workloads when their actual production BI queries could run perfectly well on Enterprise, and their batch ETL workloads need nothing more than Standard. Workload segmentation — routing different query types to different edition reservations — is the single most powerful cost lever before you even start negotiating pricing.
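The savings from segmentation fall straight out of the list rates in the table above. A sketch with a hypothetical slot-hour mix (the volumes are illustrative, not benchmarks):

```python
# Blended cost of routing workloads to edition-appropriate reservations
# versus running everything on Enterprise Plus. Rates are the US list
# prices from the table above; the monthly slot-hour mix is hypothetical.
RATES = {"standard": 0.04, "enterprise": 0.06, "enterprise_plus": 0.10}

monthly_slot_hours = {
    "standard": 200_000,        # batch ETL
    "enterprise": 300_000,      # production BI
    "enterprise_plus": 50_000,  # cross-region, mission-critical
}

segmented = sum(RATES[e] * h for e, h in monthly_slot_hours.items())
all_plus = RATES["enterprise_plus"] * sum(monthly_slot_hours.values())

print(f"segmented: ${segmented:,.0f}/mo vs all-Plus: ${all_plus:,.0f}/mo")
# With this mix, segmentation cuts the bill from $55,000 to $31,000 per
# month -- roughly 44% -- before any discount negotiation begins.
```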
Slots: The Unit of Compute You Should Be Buying
A BigQuery slot is a virtual CPU unit used to execute SQL queries. At any moment, a query running in BigQuery uses some number of slots proportional to its complexity and parallelism. On-demand queries draw from a shared slot pool; edition-based pricing gives you dedicated slot capacity.
The correct slot capacity calculation is not the peak capacity your heaviest queries require — it's the committed baseline that covers your predictable production workloads, with autoscaling enabled to handle spikes. Google's autoscaling adds capacity above your committed baseline and bills it at the edition's undiscounted pay-as-you-go slot rate, creating a hybrid model where you pay discounted committed rates for steady-state and full list rates for bursts.
Negotiation strategy on slots:
- Negotiate the autoscale cap: Google autoscaling defaults to unlimited above your committed slots. Set a hard cap on maximum autoscale slots in your contract and negotiate a discounted rate for autoscale consumption rather than accepting the full pay-as-you-go rate.
- Annual vs. monthly commitment: Annual slot commitments carry approximately 15% discount over monthly. Three-year commitments reach 30–40% discounts. If your data platform is stable, three-year commits are cost-effective — but include a revision window at 18 months to adjust for actual consumption.
- Multi-region slot allocation: If you run analytics in both US and EU regions, negotiate commitment pooling so that unused US capacity offsets EU consumption at the billing level. Slot reservations themselves are regional, so this is a commercial arrangement rather than a technical one — Google doesn't offer it by default, but it's achievable in enterprise agreements.
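The hybrid model described above can be sketched in a few lines. The discount and workload figures here are hypothetical; the list rate is the Enterprise figure from the editions table:

```python
# Hybrid slot cost: committed baseline at a negotiated discount plus
# autoscale bursts billed at the undiscounted edition rate. The discount
# and workload figures below are hypothetical.
ENTERPRISE_LIST = 0.06   # $/slot-hour, US list rate
COMMIT_DISCOUNT = 0.30   # assumed negotiated multi-year discount

def monthly_slot_cost(baseline_slots: int,
                      autoscale_slot_hours: float,
                      hours_in_month: int = 730) -> float:
    committed = (baseline_slots * hours_in_month
                 * ENTERPRISE_LIST * (1 - COMMIT_DISCOUNT))
    bursts = autoscale_slot_hours * ENTERPRISE_LIST  # full rate by default
    return committed + bursts

# 500 committed slots plus 40,000 autoscale slot-hours in a month:
cost = monthly_slot_cost(500, 40_000)
```

The point of negotiating an autoscale discount is visible in the second term: every autoscale slot-hour bills at the full edition rate unless your contract says otherwise.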
Your BigQuery costs are higher than they need to be.
We analyse your current BigQuery usage pattern and negotiate your Google Cloud contract on a 25% gainshare basis. If we don't identify savings, our fee is zero. No risk — contractually guaranteed.
Explore Google Cloud Negotiation →

Storage Pricing Traps
BigQuery storage has its own cost structure that's easy to overlook when compute dominates the budget conversation. Google charges for active storage ($0.02/GB/month at list), long-term storage ($0.01/GB/month for tables unmodified for 90+ days), and physical vs. logical billing.
The physical vs. logical storage billing choice is a significant — and often misunderstood — cost lever. BigQuery stores data in its Capacitor columnar format, which typically compresses datasets to 20–40% of their logical size. By default, Google charges for logical storage (the uncompressed equivalent). Physical billing charges roughly double the per-GB rate ($0.04/GB/month active at list) but on compressed bytes, so for datasets that compress better than 2:1 the switch can cut storage costs substantially — often by a third to a half or more for highly compressible data, though physical billing also counts time-travel and fail-safe bytes, which narrows the advantage for heavily updated tables. This setting is per-dataset and is not applied automatically — it's a configuration change that your data engineering team can make today, with no contract renegotiation required.
Quick win: Switch eligible datasets from logical to physical billing in your BigQuery settings. For enterprises storing 100TB+ in BigQuery, this single change typically saves $80K–$200K annually — with no negotiation required. When you do negotiate your next contract, include a clause locking in physical storage billing rights regardless of future Google pricing changes.
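The break-even logic can be sketched directly. This assumes the US list rates of $0.02/GB-month for active logical storage and $0.04/GB-month for active physical storage, and ignores time-travel and fail-safe bytes for simplicity:

```python
# Break-even sketch for logical vs physical storage billing. Assumes US
# list rates for active storage; physical billing charges roughly double
# the per-GB rate but on compressed bytes, so it wins when compression
# beats 2:1. Time-travel and fail-safe bytes are ignored here.
LOGICAL_RATE = 0.02    # $/GB-month, active logical storage (assumed list)
PHYSICAL_RATE = 0.04   # $/GB-month, active physical storage (assumed list)

def monthly_storage_cost(logical_gb: float, compression: float,
                         model: str) -> float:
    """compression = physical bytes / logical bytes, e.g. 0.3 = 70% smaller."""
    if model == "logical":
        return logical_gb * LOGICAL_RATE
    return logical_gb * compression * PHYSICAL_RATE

# 100 TB of logical data compressing to 30% of its original size:
logical_bill = monthly_storage_cost(100_000, 0.3, "logical")    # $2,000/mo
physical_bill = monthly_storage_cost(100_000, 0.3, "physical")  # $1,200/mo
```

At a 0.5 compression ratio the two models cost the same; anything below that, physical billing wins — which is why the first step is measuring actual compression per dataset, not flipping the setting blindly.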
At the contract level, negotiate:
- Storage rate caps at current list pricing for the contract term
- Long-term storage eligibility definitions (Google reserves the right to redefine what qualifies as "unmodified")
- Data egress cost treatment — query results moved across regions or out of Google Cloud generate network egress charges that are excluded from most BigQuery cost discussions
BigQuery Omni and Multi-Cloud Data Costs
BigQuery Omni allows you to run BigQuery SQL queries against data stored in AWS S3 or Azure Blob Storage without moving the data. Google charges per-byte-processed at standard BigQuery rates, plus an additional cross-cloud compute premium. For enterprises with data in multiple clouds, Omni solves a real architectural problem — but introduces cost complexity that most IT procurement teams don't model accurately.
The Omni cost equation: BigQuery Omni queries against AWS data run on compute infrastructure Google maintains in AWS regions. Google charges the BigQuery query rate plus a cross-cloud surcharge that Google passes through with its own margin. If your data lives primarily in AWS, running AWS-native analytics services like Athena or Redshift Spectrum against that data will almost always be cheaper than Omni — unless your data science team is deeply invested in BigQuery SQL syntax and the migration cost outweighs the compute savings.
For enterprises negotiating an enterprise Google Cloud agreement, Omni compute should be explicitly addressed in your contract: negotiate Omni query rates at the same discount as your primary BigQuery edition rates, and establish cross-cloud egress caps that prevent runaway costs if your Omni usage scales unexpectedly.
Negotiation Strategy: What Google Will and Won't Move On
Google Cloud's enterprise sales team operates with genuine commercial flexibility — more so than Oracle or SAP, and comparable to AWS at similar spend levels. The key is knowing what levers actually work:
What Google will negotiate
- Edition pricing discounts: 20–40% off list for multi-year Enterprise or Enterprise Plus commitments is achievable for enterprises spending $500K+ annually on BigQuery
- Committed Use Discount (CUD) stacking: BigQuery CUDs stack with Google Cloud Platform CUDs — negotiate these as a combined package, not separately
- Storage pricing freeze: Google will agree to hold storage rates flat for the contract term if asked explicitly — they just won't offer this voluntarily
- Autoscale rate discounts: The default autoscale rate is on-demand pricing; negotiate a fixed autoscale rate at your committed edition price
- Professional services credits: Google routinely includes $50K–$200K in Google Cloud Professional Services credits in enterprise BigQuery deals — useful for migration assistance
What Google won't move on
- SLA uptime guarantees above 99.99% for BigQuery query availability
- Per-query minimum guarantees (Google won't commit to specific query throughput SLAs)
- Pricing parity with AWS Athena or Azure Synapse as a contractual right
- Data residency guarantees that conflict with Google's infrastructure architecture (multi-region data replication is structural, not contractual)
Further Reading
- Google Cloud Pricing Overview ↗
- Google Cloud Cost Management ↗
- Gartner Magic Quadrant for Cloud Infrastructure & Platform Services ↗
Approaching a Google Cloud renewal that includes BigQuery?
We negotiate Google Cloud contracts on a 25% gainshare basis — including BigQuery editions, CUDs, and data egress costs. If we don't save you money, you pay nothing. Learn how our model works.
Get a Free Google Cloud Assessment →

How We Negotiate Google Cloud Analytics Contracts
The leverage point in Google Cloud BigQuery negotiations is Google's competitive ambition. Google is actively trying to displace AWS and Azure as the analytics platform of choice for enterprises — which means Google's sales team has genuine incentive to structure competitive deals, particularly for enterprises that are evaluating migration from Snowflake, Redshift, or Azure Synapse.
We use a two-stage approach to BigQuery negotiation. First, we conduct a workload analysis to determine the right edition and slot configuration for your actual query patterns — typically identifying 30–40% cost reduction through configuration optimisation before any commercial negotiation begins. This creates a lower baseline from which to negotiate, and demonstrates to Google that you have technical depth in the discussion.
Second, we leverage your broader Google Cloud spend as negotiating currency. BigQuery discounts improve significantly when bundled with GCE committed use discounts and other Google Workspace or Google Cloud services. If your organisation uses Google Workspace, this is an often-overlooked commercial lever in BigQuery negotiations.
We operate on a 25% gainshare model for all Google Cloud negotiation engagements. If we don't find and negotiate verified savings against your current or proposed Google Cloud contract, our fee is zero. That's true for BigQuery, for GCE and GKE committed use, and for any other Google Cloud service in scope.
If you're preparing for a Google Cloud renewal or currently on BigQuery on-demand at scale, contact our team for a no-cost review of your current spend structure. We'll identify the optimisation and negotiation opportunities specific to your workload before you start any conversation with Google.