US GDP Noise vs Signal


Interpreting GDP Growth

Quarterly GDP data are notoriously volatile, and have been for some time. Nor does the volatility exhibit “normal” statistical characteristics.

Randomness is part of life, with or without President Trump, and is not just confined to economic outcomes.

Sometimes the randomness and confusion are self-induced. Some Europeans unwittingly gaze in awe at US headline GDP growth until they realise that Americans big it all up with annualisation. We don’t do that this side of the pond. A less obvious issue, and of increasing concern, is that data quality is deteriorating. Official statisticians have to make do with fewer resources, and surveys are finding it harder to tap willing respondents. And if unhelpful statistics upset political masters, the messenger can get fired, accused of treason or literally shot.
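The annualisation point is just compounding. A minimal sketch of the arithmetic (the 0.5% figure is purely illustrative, not a real outturn):

```r
# Annualisation: the US convention reports the rate a quarter-on-quarter
# change would imply if sustained over four quarters.
qoq <- 0.005                   # illustrative 0.5% quarter-on-quarter growth
annualised <- (1 + qoq)^4 - 1  # compound over four quarters
round(100 * annualised, 2)     # roughly 2.02% at an annualised rate
```

European statistical offices would typically report the 0.5% figure itself, which is why the same data can look four times stronger in a US headline.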

Users must also stay alert when interpreting the data, especially where complex multinational linkages are involved. Ireland’s stellar GDP per capita is a classic example. Then there are the dreaded revisions: what really happened to US GDP when Lehman Bros went bust in 2008? The “truth” took some time to emerge, well after time-critical policy decisions had to be made.

So, when we look at the latest GDP numbers, the obvious question - is this quarter’s number telling us something important? - is pretty hard to answer. But economists are there to guide, and we have some tools to help distinguish the noise from the signal.

An easy step is to be careful about which GDP measure we use. Jason Furman makes a case for focussing on core GDP rather than the headline number. Thankfully, FRED is at hand. Indeed that weak 2025Q1 US GDP figure was distorted by all sorts of stuff. It’s worth reading the BEA’s technical notes on gold and the California wildfires attached to advance estimates. Not only do we need to pay attention to detail but also, as Bobby Kennedy eloquently reminded us, GDP does not tell you all you need to know.

There are more thorough approaches for identifying signal from noise. These methods typically involve estimating an underlying trend — the part of GDP growth that reflects longer-term, systematic changes in the economy — and separating it from the short-term fluctuations caused by temporary shocks or measurement errors.
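As a toy illustration of that separation - here using a simulated series and a loess smoother rather than the Bayesian model applied below - the residuals left after removing a smooth trend are the “noise”:

```r
# A minimal, non-Bayesian sketch of the trend/noise split using loess().
# The series and smoothing span are illustrative only.
set.seed(42)
quarters <- 1:80
trend    <- 2 + sin(quarters / 12)        # slow-moving "signal"
growth   <- trend + rnorm(80, sd = 0.8)   # observed = signal + noise
fit      <- loess(growth ~ quarters, span = 0.5)
noise    <- growth - fitted(fit)          # residual = short-term noise
c(sd_observed = sd(growth), sd_noise = sd(noise))
```

The smoothed series varies far less than the raw one; it is that smoother object, with proper uncertainty attached, that the Bayesian approach below estimates.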

As an example, we could use a Bayesian smoothing technique to estimate that trend. Think of it like drawing a smooth curve through a wobbly line: the smooth curve (in blue) captures the longer-term direction of growth, while the wobbly line (in grey) shows the actual, often volatile, GDP data. We also shade in an 80% credibility band around the trend line, which gives a sense of uncertainty — most of the time, the true value of growth should fall inside this band.

So what happens when a new data point — say, for 2025Q2 or 2025Q3 — comes out? We compare it to our trend estimate and ask: Is this new number unusually far from what we expected? That’s what the signal probability table tells us. If the probability is low (e.g. 0.12), the new number is likely just noise — a short-term blip. But if it’s high (e.g. 0.83), there’s a good chance the new data signal a genuine shift in the growth trend — perhaps due to policy changes, a new economic shock, or broader structural change.
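One hedged sketch of how such a signal probability could be computed, assuming we already have posterior draws of the trend at the new quarter and of the residual noise. All the numbers below are stand-ins, not estimates from the model:

```r
# Stand-in posterior draws (NOT from the fitted model): trend level and
# residual standard deviation at the new quarter, plus a hypothetical outturn.
set.seed(1)
trend_draws <- rnorm(1000, mean = 2.0, sd = 0.15)   # draws of trend growth
sigma_draws <- rnorm(1000, mean = 0.60, sd = 0.05)  # draws of noise sd
new_obs     <- 1.2                                   # hypothetical new YoY print
# Share of draws in which the gap to trend exceeds one noise standard
# deviation - one (illustrative) way to define a "signal probability".
prob_signal <- mean(abs(new_obs - trend_draws) > sigma_draws)
round(prob_signal, 2)
```

The threshold of one noise standard deviation is an assumption for illustration; the principle is simply that the further the outturn sits from the trend relative to the usual noise, the more draws flag it as signal.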

Analysis Code

Code
knitr::opts_chunk$set(echo = TRUE)

library(fredr)
library(tidyverse)
library(brms)
library(lubridate)
library(tidybayes)
library(scales)

# setwd(dirname(rstudioapi::getActiveDocumentContext()$path)) # Commented out for Quarto rendering
# getwd() # Commented out
# rm(list=ls()) # Commented out

# API Key Retrieval
# The FRED API key is retrieved from an environment variable for security.
api_key <- Sys.getenv("MY_API_KEY")
if (api_key == "") {
  stop("API key 'MY_API_KEY' not found. Please set it as an environment variable in .Renviron.")
}

fredr_set_key(api_key)

# Data Import and Preparation
# We import historical Real Gross Domestic Product (GDPC1) data from the
# Federal Reserve Economic Data (FRED) database. The data is then transformed
# to year-over-year (YoY) growth rates. A dummy variable for the COVID-19
# period (2020-2021) is also created for later exclusion in the trend model.

gdp_raw <- fredr(series_id = "GDPC1", observation_start = as.Date("1960-01-01")) %>%
  select(date, value) %>%
  mutate(
    yoy = (value / lag(value, 4) - 1) * 100,
    quarter = year(date) + (quarter(date) - 1) / 4,
    covid = if_else(date >= as.Date("2020-01-01") & date <= as.Date("2021-12-31"), 1, 0)
  ) %>%
  filter(!is.na(yoy))

# Glimpse of the prepared data:
gdp_raw %>%
  {rbind(head(., 4), tail(., 4))} %>%
  knitr::kable(caption = "<span style='color:white;'>Selected Rows of GDP Dataframe</span>", escape = FALSE)
Selected Rows of GDP Dataframe

date        value      yoy         quarter  covid
1961-01-01   3493.703  -0.6675232  1961.00  0
1961-04-01   3553.021   1.5657847  1961.25  0
1961-07-01   3621.252   3.0115336  1961.50  0
1961-10-01   3692.289   6.3974990  1961.75  0
2024-07-01  23400.294   2.7187692  2024.50  0
2024-10-01  23542.349   2.5336838  2024.75  0
2025-01-01  23512.717   1.9917631  2025.00  0
2025-04-01  23685.287   1.9866641  2025.25  0
Code
# Fit a Smooth Trend Model (ex-Covid)
# A Bayesian smooth trend model is fitted to the year-over-year GDP growth.
# The model uses a thin plate regression spline (`s(quarter)`) to capture the
# non-linear trend. The data from the COVID-19 period is excluded from the
# model fitting to estimate an underlying trend, given the unusual economic
# disruptions during that time.

train_data <- gdp_raw %>% filter(date <= as.Date("2024-06-30"))

trend_model <- brm(
  formula = bf(yoy ~ s(quarter)),
  data = train_data %>% filter(covid == 0),
  family = gaussian(),
  chains = 4, iter = 4000, cores = 4,
  control = list(adapt_delta = 0.95),
  seed = 1234
)

# Predict with Posterior Draws
# Posterior draws from the fitted trend model are used to generate predictions
# across the entire dataset, including the latest outturns. This allows for
# a robust estimate of the underlying trend and its uncertainty.

# Define test set
test_data <- gdp_raw %>% filter(date > as.Date("2024-06-30"))

# Combine for prediction
pred_data <- bind_rows(train_data, test_data) %>%
  select(date, quarter, yoy, covid)

pred_draws <- add_epred_draws(object = trend_model, newdata = pred_data, ndraws = 1000)

# Plot Actual vs Estimated Trend
# The plot below visualizes the actual year-over-year GDP growth alongside
# the estimated smooth trend and its 80% credible interval. Recent outturns
# (from 2024Q3 onwards) are highlighted in red.

pred_summary <- pred_draws %>%
  group_by(date) %>%
  median_qi(.epred, .width = 0.8) %>%
  left_join(pred_data, by = "date")

Plotting Results

Probability of Significant Deviation from Trend
date        prob_diff
2024-07-01  0.220
2024-10-01  0.198
2025-01-01  0.340
2025-04-01  0.336

Takeaways

By the time the 2025Q3 US GDP data are released (the advance estimate is scheduled for 30 Oct 2025), such methods can quantify whether the number reflects meaningful economic change or not. This helps avoid overreacting to headlines and anchors policy and business decisions in a more stable view of economic momentum. The Bayesian approach, though advanced behind the scenes, offers a powerful and intuitive way to think about the economy as a dynamic system — evolving over time, but not at the mercy of every bump in the road.