A Closer Look at Cal Advocates’s Broadband Competition Report
Key Takeaways
- Cal Advocates’s estimate of “consumer harm” is built on unrealistic assumptions. The calculation treats every location as a subscribing household buying gigabit service at the highest promotional price. In reality, many subscribers choose lower (and cheaper) tiers, some households don’t subscribe, and some locations are vacant. When the report’s own regression coefficients are applied to actual subscribers, the implied figure falls to roughly $93 million, less than one-tenth of the headline claim, and even that number is suspect given the many other methodological issues.
- Cal Advocates uses an outcome-driven model to yield supportive results. It chooses benchmark and monopoly prices to maximize the competitive “gap” between markets with and without multiple gigabit providers. The “competitive” benchmark uses the lowest promotional prices across multiple speed tiers, while the “monopoly” price uses each provider’s highest 1 Gbps promotional price—an apples-to-oranges comparison that inflates the claimed harm.
- The study’s own results undermine its conclusions. The models yield wildly inconsistent results across providers. If competition were a uniform driver of pricing, we would expect more consistent patterns. Sub-gigabit providers are associated with higher prices in several statistically significant models, the opposite of what a competition story would predict. And Comcast, one of the four major providers studied, was excluded from the regression because its pricing didn’t fit the model. More fundamentally, the analysis shows correlation, not causation. Without a more rigorous research design, one cannot conclude that competition caused the observed price differences.
- The policy recommendations outrun the evidence. Subsidizing new gigabit builds in areas that already have one gigabit provider is classic overbuilding, which is wasteful and not the highest-value use of limited broadband funds.
Overview
In January 2026, the California Public Utilities Commission’s Public Advocates Office (“Cal Advocates”) released a report examining broadband competition and pricing in four California cities: San Mateo, Oakland, Los Angeles, and San Diego. (Public Advocates Office 2026a) The headline finding was attention-grabbing: “Californians could save more than $1 billion annually if competitive pricing prevailed statewide.” (Public Advocates Office 2026a, 4)
The report makes an appeal to intuition. Where multiple gigabit providers compete, promotional prices are lower. Where a single provider dominates, prices are higher. Therefore, policies should support additional gigabit network builds. (Public Advocates Office 2026a, 5, 19)
However, a careful review of the study’s methodology reveals significant problems. The $1 billion figure rests on inflated assumptions, apples-to-oranges price comparisons, and a calculation that is disconnected from the report’s own statistical models. When applied consistently, those models imply a figure less than one-tenth as large. The report’s own regression results, provided in the technical appendices, show that competition explains only a small fraction of price variation. (Public Advocates Office 2026b, 13–15) And the statistical analysis demonstrates correlation but cannot establish causation.
This post walks through the key methodological concerns. The goal isn’t to dismiss the fundamental economic principle that competition leads to lower prices, but to highlight why the magnitude of the study’s claimed harm is overstated and the policy conclusions are premature. Notably, these methodological shortcomings echo those we identified in an earlier set of broadband studies submitted to the CPUC, which similarly relied on selective data, narrow technology-specific framing, and outcome-oriented analysis to support sweeping policy conclusions. (Santorelli and Karras 2021, 23–33)
A Lofty $1 Billion Estimate
The report’s primary attention-grabbing claim is that California consumers could save $1.13 billion annually if gigabit competition prevailed. This number appears in Table 5 and drives the report’s policy urgency. (Public Advocates Office 2026a, 18) But the calculation involves several assumptions, each of which inflates the final figure.
Every Location Is Not a Subscriber
The report multiplies the price differential by 4.45 million “locations with sole gigabit provider” to arrive at its billion-dollar figure. (Public Advocates Office 2026a, 17–18) But a location is not a subscriber. The calculation makes no adjustment for:
- Subscribers who purchase sub-gigabit tiers (many households choose plans slower than 1 Gbps)
- Non-subscribing households who may have no fixed broadband at all
- Multi-dwelling units where “location” counts may not map cleanly to households
- Vacant locations that have no broadband subscriber
The report describes this approach as “conservative,” but assuming 100% subscription at the gigabit tier is not conservative. Instead, it is a clear overestimate. To illustrate: according to the American Community Survey, only 79.2% of California households subscribe to wired internet service. (U.S. Census Bureau, n.d.) Simply adjusting for non-subscribers—before even accounting for those on sub-gigabit tiers or vacant locations—would reduce the claimed figure by over 20%.
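To make the magnitude of this one adjustment concrete, here is a minimal sketch using only figures already cited above: the report’s $1.13 billion headline, its 4,447,798 sole-gigabit locations, and the 79.2% ACS wired-subscription rate. The per-location monthly gap is backed out from the headline rather than quoted from the report.

```python
# Minimal sketch: how the 100%-subscription assumption alone inflates the headline.
# Inputs are the report's headline total, its sole-gigabit location count, and the
# ACS wired-subscription share; the per-location gap is backed out, not quoted.

HEADLINE_ANNUAL = 1.13e9          # report's claimed annual "harm"
SOLE_GIG_LOCATIONS = 4_447_798    # report's "locations with sole gigabit provider"
WIRED_SUBSCRIPTION_RATE = 0.792   # ACS share of CA households with wired internet

implied_monthly_gap = HEADLINE_ANNUAL / (SOLE_GIG_LOCATIONS * 12)
print(f"implied monthly gap per location: ${implied_monthly_gap:.2f}")

# Same gap, applied only to locations that plausibly have a wired subscriber.
adjusted_annual = HEADLINE_ANNUAL * WIRED_SUBSCRIPTION_RATE
print(f"adjusted annual figure: ${adjusted_annual / 1e6:.0f}M "
      f"({1 - WIRED_SUBSCRIPTION_RATE:.1%} lower)")
```

Even this adjusted figure still assumes every subscribing household buys gigabit service at the full price differential, so it corrects only one of the inflating assumptions.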
The “Benchmark” Price Is Artificially Low
The $51 “benchmark” is constructed by averaging the lowest promotional prices observed across providers and across three speed tiers (300 Mbps, 500 Mbps, and 1 Gbps). (Public Advocates Office 2026a, 16–17) Several problems emerge:
- Lowest promotional prices are not typical prices. These may only be available at specific addresses, to new customers, or under particular eligibility conditions.
- Cross-tier averaging obscures the comparison. The benchmark mixes 300 Mbps and 500 Mbps prices, but the “harm” calculation uses 1 Gbps monopoly prices. This is an apples-to-oranges comparison.
- Charter’s 100 Mbps price is used as a proxy for 300 Mbps because Charter doesn’t offer a 300 Mbps promotional plan. (Public Advocates Office 2026a, 17) This mechanically lowers the benchmark.
A more defensible benchmark would compare within the same speed tier, use median rather than minimum prices, and reflect realistic eligibility and subscription patterns.
The “Monopoly” Price Is Artificially High
While the benchmark uses the lowest observed prices, the monopoly price uses each provider’s highest promotional price for 1 Gbps service. (Public Advocates Office 2026a, 18) This maximizes the calculated gap.
The combined effect of these choices is purpose-built to yield a dramatic overstatement of consumer “harm”: the lowest prices for the benchmark, the highest prices for the monopoly case, and an assumption of 100% subscription at 1 Gbps.
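The mechanics are easy to see with stylized numbers. In the sketch below the promotional prices are hypothetical, not the report’s data; the point is the construction itself: taking the lowest promo across mixed tiers for the benchmark and the highest 1 Gbps promo for the monopoly case widens the gap relative to a within-tier, median-to-median comparison of the same prices.

```python
from statistics import mean, median

# Hypothetical promotional prices ($/month), for illustration only -- NOT the
# report's data. "competitive" = markets with 2+ gigabit providers;
# "sole" = markets with a single gigabit provider. Keys are speed tiers (Mbps).
competitive = {300: [40, 45, 48], 500: [50, 55, 58], 1000: [60, 65, 70]}
sole = {1000: [70, 75, 80]}

# Report-style construction: the LOWEST promo in each tier (sub-gigabit tiers
# included) feeds the benchmark; the HIGHEST 1 Gbps promo is the "monopoly" price.
benchmark = mean(min(prices) for prices in competitive.values())  # (40+50+60)/3 = 50
monopoly = max(sole[1000])                                        # 80
print(f"report-style gap:       ${monopoly - benchmark:.0f}/month")

# Within-tier, median-to-median comparison of the same underlying prices.
within_tier_gap = median(sole[1000]) - median(competitive[1000])  # 75 - 65 = 10
print(f"within-tier median gap: ${within_tier_gap:.0f}/month")
```

With identical underlying prices, the report-style construction triples the measured gap in this toy example.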
A Third of the Claimed “Savings” Come From a Provider the Model Can’t Explain
Even setting aside the inflated assumptions above, the $1.13 billion figure includes approximately $410 million in estimated “savings” attributed to Comcast locations. (Public Advocates Office 2026a, 18) Yet the report’s own technical appendix acknowledges that its regression model does not explain Comcast’s pricing behavior. Comcast was excluded from the statistical analysis entirely because its prices “reflect large, market-wide discounts” that “do not correspond to local competition intensity.” (Public Advocates Office 2026b, 16) If the report’s own model cannot establish a relationship between competition and Comcast’s pricing, there is no basis for claiming that competition would produce “savings” at Comcast locations.
The Report’s Own Models Imply Far Smaller Effects
Perhaps most tellingly, the $1.13 billion figure is not derived from the report’s own regression results. The regression models estimate that adding one gigabit competitor is associated with modest monthly price reductions. Comcast shows no statistically significant effect at all. (Public Advocates Office 2026b, 13–15) Yet the headline calculation uses the full raw price gap between “competitive” and “monopoly” markets, implicitly attributing the entire difference to competition when the report’s own statistical analysis shows it explains only a fraction of price variation.
When the regression coefficients are annualized and applied to estimated subscribers in sole-gigabit-provider areas (Public Advocates Office 2026a, 18), adjusted for the share of California households that actually subscribe to wired service (U.S. Census Bureau, n.d.), the implied annual figure is approximately $93 million. This is less than one-tenth of the headline claim, and is tiny compared to the billions that would be needed to deploy duplicative fiber statewide.1
| Provider | Monthly Effect | Annual Effect | Sole-Gig Locations | Est. Subscribers (79.2%) | Annual “Savings” |
|---|---|---|---|---|---|
| AT&T | $1.87 | $22 | 22,862 | 18,107 | $398,354 |
| Charter | $3.60 | $43 | 2,119,162 | 1,678,376 | $72,170,168 |
| Cox | $4.32 | $52 | 504,937 | 399,910 | $20,795,320 |
| Comcast | $0.00 | $0 | 1,800,837 | 1,426,263 | $0 |
| Total | | | 4,447,798 | 3,522,656 | $93,363,842 |
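The table’s arithmetic is straightforward to reproduce. The sketch below applies the report’s own monthly coefficients to its sole-gigabit location counts, scaled by the ACS wired-subscription rate; the rounding mirrors the table.

```python
# Reproduces the table above: the report's regression coefficients (monthly effect
# of one additional gigabit competitor) applied to its sole-gigabit location counts,
# scaled by the ACS wired-subscription rate. Rounding mirrors the table.

WIRED_SUBSCRIPTION_RATE = 0.792  # ACS share of CA households with wired internet

providers = {
    #           (monthly effect $, sole-gigabit locations)
    "AT&T":    (1.87,      22_862),
    "Charter": (3.60,   2_119_162),
    "Cox":     (4.32,     504_937),
    "Comcast": (0.00,   1_800_837),  # no statistically significant effect
}

total = 0
for name, (monthly, locations) in providers.items():
    subscribers = round(locations * WIRED_SUBSCRIPTION_RATE)
    annual_savings = subscribers * round(monthly * 12)
    total += annual_savings
    print(f"{name:<8} ${annual_savings:>12,}")

print(f"{'Total':<8} ${total:>12,}")  # $93,363,842 -- versus the $1.13B headline
```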
And even that figure is likely overstated, given the methodological concerns discussed below.
Weak Regression Results
Notwithstanding the myriad other issues with the analysis, the study’s own regression results are inconsistent.
The technical appendices provide the actual regression output, (Public Advocates Office 2026b, 13–15) and the results are weaker than the main report’s confident policy assertions suggest.
Very Low Explanatory Power
The R² values tell us how much of the price variation the model explains:
| Provider | Speed Tier | R² |
|---|---|---|
| AT&T | 500 Mbps | 0.016 (1.6%) |
| AT&T | 1 Gbps | 0.126 (12.6%) |
| Charter | 500 Mbps | 0.075 (7.5%) |
| Charter | 1 Gbps | 0.070 (7.0%) |
| Cox | 500 Mbps | 0.537 (53.7%) |
| Cox | 1 Gbps | 0.442 (44.2%) |
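For reference, these values are the standard coefficient of determination (assuming the appendix reports the usual OLS R²), the share of observed price variation the fitted model accounts for:

$$
R^2 = 1 - \frac{\sum_i \left(p_i - \hat{p}_i\right)^2}{\sum_i \left(p_i - \bar{p}\right)^2}
$$

where $p_i$ is the observed promotional price at location $i$, $\hat{p}_i$ is the model’s prediction, and $\bar{p}$ is the sample mean. An R² of 0.016 therefore means the model leaves roughly 98% of the price variation in AT&T’s 500 Mbps sample unexplained.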
The results are wildly inconsistent across providers. Cox’s models show reasonable explanatory power (R² of 0.44–0.54), but the AT&T and Charter models are very weak — AT&T’s 500 Mbps model explains just 1.6% of price variation, and Charter’s models hover around 7%. (Public Advocates Office 2026b, 13–15)
If competition were a primary and consistent driver of pricing, we would expect these models to perform similarly across providers. Instead, the relationship between competition and price appears to vary dramatically depending on which provider is being analyzed — a pattern more consistent with provider-specific pricing strategies than a uniform competitive effect.
Sub-Gigabit Providers: The Wrong Sign
The report claims that sub-gigabit providers (including fixed wireless) “do not reliably constrain price.” (Public Advocates Office 2026a, 15) But the model coefficients cast further doubt on the report’s primary claims:
| Provider | Speed Tier | Sub-Gigabit Coefficient | p-value |
|---|---|---|---|
| Cox | 1 Gbps | +1.53 | < 0.001 |
| Cox | 500 Mbps | +0.78 | < 0.001 |
| Charter | 500 Mbps | +1.18 | < 0.001 |
In multiple statistically significant models, more sub-gigabit providers are counterintuitively associated with higher prices. (Public Advocates Office 2026b, 13–15) The report dismisses this as sub-gigabit providers “not reliably constraining price,” but a more plausible interpretation is that sub-gigabit provider presence is a proxy for something else entirely. Broadband market dynamics are likely more complex than the report’s simple modeling can account for.
If sub-gigabit provider counts can produce statistically significant coefficients in the wrong direction, we should be cautious about interpreting gigabit provider coefficients as causal effects of competition.
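One way a “wrong-sign” coefficient can arise is omitted-variable bias. The simulation below is purely illustrative, with made-up data: sub-gigabit presence has no true price effect, but an unobserved market factor (say, cost-to-serve or demand intensity) drives both sub-gigabit entry and prices, and a sparse regression that omits it, like a provider-count specification, picks up a positive coefficient anyway.

```python
import numpy as np

# Illustrative simulation with made-up data: sub-gigabit provider count has NO true
# effect on price, but an unobserved market factor raises both sub-gigabit entry and
# prices, so a regression that omits it yields a spurious positive coefficient.
rng = np.random.default_rng(0)
n = 5_000

latent = rng.normal(size=n)  # unobserved factor (e.g., cost-to-serve, demand intensity)

# Sub-gigabit provider count, partly driven by the unobserved factor.
sub_gig = np.clip(np.round(1.5 + 0.8 * latent + rng.normal(scale=0.5, size=n)), 0, None)

# True data-generating process: price depends on the latent factor, not on sub_gig.
price = 70 + 4.0 * latent + rng.normal(scale=2.0, size=n)

def slope_on_first(y, *regressors):
    """OLS fit; returns the coefficient on the first listed regressor."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(f"naive coefficient on sub-gig count:        {slope_on_first(price, sub_gig):+.2f}")
print(f"coefficient controlling for latent factor: {slope_on_first(price, sub_gig, latent):+.2f}")
```

The true effect here is zero by construction, yet the naive coefficient comes out large and positive; controlling for the latent factor drives it back to roughly zero. The toy example only shows that a significant coefficient in a sparse specification does not, by itself, identify a competitive effect.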
Comcast Was Excluded From the Regression
As discussed above, Comcast was entirely excluded from the regression analysis because its pricing didn’t fit the model. (Public Advocates Office 2026b, 16) That the state’s second-largest broadband provider’s pricing behavior—covering 3.5 million gigabit locations—doesn’t conform to the model is a serious challenge to the model’s generalizability. The report effectively says that competition drives pricing, except for one of the four major providers studied, whose pricing doesn’t respond to competition the way the model predicts. This selective exclusion should have been discussed prominently in the main report, not tucked away in an appendix note.
Correlation Is Not Causation
The report’s regression analysis purports to show that the number of gigabit providers is correlated with lower promotional prices. (Public Advocates Office 2026a, 14–15) But the report then draws causal inferences that the methodology cannot support.
The Endogeneity Problem
Provider entry decisions are not random. Companies build fiber networks in markets where they expect to be profitable. Many positive factors may incentivize entry, including:
- Higher location density
- Favorable geography for deployment
- Higher probability of subscription
- Newer housing stock with easier right-of-way access
- Favorable local permitting environments
- Existing utility infrastructure to leverage
These same characteristics might independently support lower per-subscriber costs, enabling lower prices separately from competition. Indeed, the economics of broadband deployment decisions are well understood: providers respond to market signals when choosing where to invest, with supply generally following demand rather than the reverse. (Santorelli and Karras 2021, 14–18) The report cannot distinguish between two stories:
- Competition caused lower prices. Multiple providers entered, and they competed prices down.
- Favorable market conditions attracted competitors and enabled lower prices. The characteristics that attracted multiple providers also made it cheaper to serve those areas, and prices may have been lower regardless.
San Mateo and Oakland—the study’s “competitive” markets—are dense, affluent Bay Area cities with a regional fiber provider, Sonic, present alongside national providers. (Public Advocates Office 2026a, 6) Treating their pricing as a universally achievable benchmark ignores that most California markets lack comparable conditions.
What Would Establish Causation?
Establishing that competition causes lower prices would require a research design that addresses this endogeneity. Options, among others, include:
- Instrumental variables: Finding something that affects provider entry but doesn’t directly affect prices
- Natural experiments: Exploiting exogenous shocks to competition (e.g., a merger that reduced providers in some areas but not others)
- Difference-in-differences: Comparing price changes over time in areas where competition increased versus areas where it didn’t
The report uses none of these approaches. Its simple regression of price on provider counts, income, and nothing else provides correlation only. (Public Advocates Office 2026a, 14)
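As one illustration, a difference-in-differences design of the kind listed above would compare price changes in areas that gained a gigabit entrant to changes in areas that did not, using a specification along these lines (a sketch, not something the report estimates):

$$
p_{it} = \alpha_i + \gamma_t + \beta \,\bigl(\mathrm{Entry}_i \times \mathrm{Post}_t\bigr) + X_{it}'\,\delta + \varepsilon_{it}
$$

where $p_{it}$ is the promotional price at location $i$ in period $t$, $\alpha_i$ and $\gamma_t$ are location and period fixed effects, $\mathrm{Entry}_i$ flags areas that gained a gigabit competitor, $\mathrm{Post}_t$ flags periods after entry, and $X_{it}$ collects controls such as income. Under the parallel-trends assumption, $\beta$ captures the price effect of entry itself rather than the characteristics of the places entrants choose.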
Promotional Prices Are the Wrong Metric
The entire analysis is based on promotional prices. (Public Advocates Office 2026a, 11) This raises several concerns:
Promotional Prices Are a Poor Proxy for Long-Term Prices
Consumers eventually pay non-promotional rates, often for years. The report’s own Table 2 shows that non-promotional prices are relatively uniform across providers and markets. (Public Advocates Office 2026a, 13–14) If competition primarily affects temporary acquisition pricing rather than sustained rates, the potential welfare implications are much different from what the report suggests.
Promotional Intensity May Be Influenced by Other Factors
The report notes that promotional pricing “represents providers’ efforts to obtain and retain customers.” (Public Advocates Office 2026a, 7) But providers might offer aggressive promotions in markets with higher customer churn rates or with newer deployments requiring customer acquisition. In addition, the implementation of promotional pricing schedules and strategies likely varies between providers in a way that is not captured by a simple aggregation of advertised prices.
What Consumers Actually Pay Is Unknown
The report analyzes advertised promotional prices, not billed costs. It excludes things like equipment fees, taxes and surcharges, bundling or auto-pay discounts, retention offers, and other discounts or fees. Absent billing data, claims about actual consumer burden remain speculative.
Other Methodological Concerns
Frontier’s Exclusion. Frontier, a major fiber competitor, was excluded from the pricing analysis “due to its ongoing merger with Verizon.” (Public Advocates Office 2026a, 6) But Frontier appears as a meaningful competitive presence in the overlap tables: 80% of Charter’s lowest-priced tier in LA has Frontier overlap. (Public Advocates Office 2026b, 8–9) The merger doesn’t change the fact that Frontier’s current network presence affects competitive dynamics, and its exclusion may bias the report’s modeling. Moreover, the rationale is applied inconsistently: Charter and Cox—two of the three providers included in the regression—are themselves in a pending merger, yet neither was excluded on those grounds.
Sampling Methodology Is Undisclosed. The appendices reveal sample sizes but not selection methodology. (Public Advocates Office 2026b, 2–3) Were locations chosen randomly? Stratified by neighborhood characteristics? Convenience-based? The phrase “selected sample locations” does significant work but is never explained. (Public Advocates Office 2026a, 8)
The 10% Overlap Threshold. Appendix B excludes competitors with less than 10% geographic overlap as not representing “meaningful competitive pressure.” (Public Advocates Office 2026b, 4–11) This is an arbitrary threshold that could omit localized competitive effects.
Excel-Based Regression With Apparent Errors. The regression tables show #NUM! errors and suspicious values (like standard errors of exactly 0) in several cells. (Public Advocates Office 2026b, 13–15) These errors, along with the table formatting, suggest the analysis was conducted in Excel rather than a proper statistical package. For a study of purported policy significance, the use of Excel raises concerns about, among other things, robustness checks, diagnostic testing, correct standard error specification, and reproducibility.
Contradictory Overlap Patterns. The appendix reveals that some pricing patterns contradict the assertion that more competition brings about lower prices. For instance, Comcast’s lower-priced set in San Mateo shows less competitive overlap than its higher-priced set, and AT&T’s San Mateo pricing requires a “concentration” explanation rather than the simple provider counts used in the regression. (Public Advocates Office 2026b, 4–7)
Policy Implications
The report recommends that public investments “prioritize areas where consumers have only one gigabit provider.” (Public Advocates Office 2026a, 5, 19) This conclusion requires believing that:
- Correlation between provider counts and prices reflects causation (unproven)
- Promotional prices reflect sustained consumer welfare (questionable)
- Conditions enabling multiple providers in San Mateo can be replicated elsewhere (not analyzed)
- Subsidizing additional gigabit builds is higher-value than other uses of limited funds (not analyzed)
More broadly, the report’s exclusive focus on gigabit provider counts reflects a technology-specific view of competition that ignores the role of other platforms. Fixed wireless, 5G, and satellite services all exert competitive pressure, and consumers increasingly rely on non-wireline options for internet access. A technology-neutral assessment of competition would likely paint a different picture than one that counts only gigabit wireline providers. (Santorelli and Karras 2021, 37–38)
Opportunity Cost
The report doesn’t consider that subsidizing new builds in areas with existing gigabit service might divert resources from other initiatives where welfare gains would be larger. Broadband affordability is a multi-faceted issue and should be considered holistically, since funding is limited.
On the supply side, subsidizing gigabit network overbuild may yield less consumer benefit than other infrastructure deployment. On the demand side, funding could instead go toward a variety of affordability and adoption initiatives, including digital literacy programs, subsidy awareness campaigns, and device access. As we have previously argued before the CPUC, in served markets the primary broadband challenge is typically one of adoption, not infrastructure, and policy resources are better directed at addressing the demand-side barriers that keep households offline. (Santorelli and Karras 2021, 19–22, 34–37)
A rigorous cost-benefit analysis would compare the welfare gains from different uses of funding. This report attempts no such analysis before making bold policy recommendations.
References
Footnotes
1. In California’s BEAD Final Proposal, the average award per fiber location is $9,215. Even very conservatively assuming that a statewide deployment would have 50% of the per-location cost of the BEAD deployments, bringing a second fiber connection to the 4,447,798 “locations with sole Gbps provider” in the report would cost over $17 billion.