Linear Regression in Time Series: Sources of Spurious Regression


1. Introduction

It’s pretty clear that most of our work will be automated by AI in the future. This will be possible because many researchers and professionals are working hard to make their work available online. These contributions not only help us understand fundamental concepts but also refine AI models, ultimately freeing up time to focus on other activities.

However, one concept remains misunderstood, even among experts: spurious regression in time series analysis. This issue arises when a regression model suggests a strong relationship between variables even though none exists. It is typically observed in time series regression equations that seem to have a high degree of fit, as indicated by a high R² (coefficient of multiple correlation), but an extremely low Durbin-Watson statistic (d), signaling strong autocorrelation in the error terms.

What is particularly surprising is that almost all econometric textbooks warn about the danger of autocorrelated errors, yet the issue persists in many published papers. Granger and Newbold (1974) identified several examples. For instance, they found published equations with R² = 0.997 and a Durbin-Watson statistic (d) of 0.53. The most extreme example they found was an equation with R² = 0.999 and d = 0.093.

The problem is especially acute in economics and finance, where many key variables exhibit strong serial correlation between adjacent values, particularly when the sampling interval is small (a week or a month), and this can lead to misleading conclusions if not handled correctly. For example, this quarter's GDP is strongly correlated with the previous quarter's GDP. This post provides a detailed explanation of the results of Granger and Newbold (1974), together with a Python simulation (see section 7) replicating the key results presented in their article.

Whether you’re an economist, data scientist, or analyst working with time series data, understanding this issue is crucial to ensuring your models produce meaningful results.

To walk you through this post, the next section will introduce the random walk and the ARIMA(0,1,1) process. In section 3, we will explain how Granger and Newbold (1974) describe the emergence of nonsense regressions, with examples illustrated in section 4. Finally, we'll show how to avoid spurious regressions when working with time series data.

2. Simple presentation of a Random Walk and ARIMA(0,1,1) Process

2.1 Random Walk

Let 𝐗ₜ be a time series. We say that 𝐗ₜ follows a random walk if its representation is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ. (1)

where 𝜖ₜ is white noise. The process can be written as a cumulative sum of white-noise shocks, a form that is convenient for simulation. It is a non-stationary time series because its variance grows with time t.
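
For illustration, here is a minimal Python sketch of this cumulative-sum form (the length and seed are arbitrary, not taken from the article):

import numpy as np

rng = np.random.default_rng(0)    # arbitrary seed, for reproducibility
T = 50                            # arbitrary series length
eps = rng.normal(0, 1, size=T)    # white-noise shocks
X = np.cumsum(eps)                # X_t = eps_1 + ... + eps_t, a random walk starting at 0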

2.2 ARIMA(0,1,1) Process

The ARIMA(0,1,1) process is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ − 𝜃 𝜖ₜ₋₁. (2)

where 𝜖ₜ is white noise. The ARIMA(0,1,1) process is non-stationary. It can be written as the sum of an independent random walk and white noise:

𝐗ₜ = 𝐗₀ + random walk + white noise. (3) This form is convenient for simulation.
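
A minimal sketch of this simulation form (assuming unit-variance noise and 𝐗₀ = 0, as in the appendix code):

import numpy as np

rng = np.random.default_rng(1)              # arbitrary seed
T = 50
rw = np.cumsum(rng.normal(0, 1, size=T))    # random walk component
wn = rng.normal(0, 1, size=T)               # independent white noise
X = rw + wn                                 # ARIMA(0,1,1) in levels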

Those non-stationary series are often employed as benchmarks against which the forecasting performance of other models is judged.

3. Random walks can lead to nonsense regression

First, let's recall the linear regression model, which is given by:

𝐘 = 𝐗𝛽 + 𝜖. (4)

where 𝐘 is a T × 1 vector of the dependent variable, 𝛽 is a K × 1 vector of coefficients, and 𝐗 is a T × K matrix of independent variables. 𝐗 contains a column of ones and (K−1) columns holding T observations on each of the (K−1) independent variables, which are stochastic but distributed independently of the T × 1 error vector 𝜖. It is generally assumed that:

𝐄(𝜖) = 0, (5)

and

𝐄(𝜖𝜖′) = 𝜎²𝐈. (6)

where 𝐈 is the identity matrix.

A test of the contribution of independent variables to the explanation of the dependent variable is the F-test. The null hypothesis of the test is given by:

𝐇₀: 𝛽₁ = 𝛽₂ = ⋯ = 𝛽ₖ₋₁ = 0, (7)

And the statistic of the test is given by:

𝐅 = (𝐑² / (𝐊−1)) / ((1−𝐑²) / (𝐓−𝐊)). (8)

where 𝐑² is the coefficient of determination.
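
As a small worked example (the helper function and the numbers are illustrative, not from the article), the F statistic in (8) can be computed directly from R², K, and T:

def f_statistic(r2, K, T):
    """F statistic of equation (8), computed from the coefficient of determination."""
    return (r2 / (K - 1)) / ((1 - r2) / (T - K))

# e.g. R^2 = 0.997 with K = 2 coefficients and T = 50 observations
print(round(f_statistic(0.997, K=2, T=50), 1))   # about 15952, yet meaningless if F is not F-distributed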

To see the problem, assume that the null hypothesis is true and that one tries to fit a regression of the form (Equation 4) to the levels of economic time series. Suppose next that these series are not stationary or are highly autocorrelated. In such a situation, the test procedure is invalid because 𝐅 in (Equation 8) is not distributed as an F-distribution under the null hypothesis (Equation 7). In fact, under the null hypothesis, the errors or residuals from (Equation 4) are given by:

𝜖ₜ = 𝐘ₜ − 𝛽̂₀ ; t = 1, 2, …, T, (9)

and will therefore have the same autocorrelation structure as the original series 𝐘ₜ.

Some idea of the distribution problem can be gained by considering the situation where:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ. (10)

Where 𝐘ₜ and 𝐗ₜ follow independent first-order autoregressive processes:

𝐘ₜ = 𝜌 𝐘ₜ₋₁ + 𝜂ₜ, and 𝐗ₜ = 𝜌* 𝐗ₜ₋₁ + 𝜈ₜ. (11)

Where 𝜂ₜ and 𝜈ₜ are white noise.

In this case, 𝐑² is simply the square of the sample correlation between 𝐘ₜ and 𝐗ₜ. Granger and Newbold use a result of Kendall, reported in Knowles (1954), which gives the variance of 𝐑:

𝐕𝐚𝐫(𝐑) = (1/T) (1 + 𝜌𝜌*) / (1 − 𝜌𝜌*). (12)

Since 𝐑 is constrained to lie between −1 and 1, if its variance is greater than 1/3, the distribution of 𝐑 cannot have a mode at 0. Setting 𝐕𝐚𝐫(𝐑) > 1/3 in (12) and solving for 𝜌𝜌* shows that this happens when 𝜌𝜌* > (T−3) / (T+3).

Thus, for example, if T = 20 and 𝜌 = 𝜌*, a distribution that is not unimodal at 0 will be obtained if 𝜌 > 0.86, and if 𝜌 = 0.9, 𝐕𝐚𝐫(𝐑) = 0.47. Since 𝐑 is centered near 0 under independence, 𝐄(𝐑²) ≈ 𝐕𝐚𝐫(𝐑), so 𝐄(𝐑²) will be close to 0.47.
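
A quick numeric check of these figures (a sketch that simply evaluates equation (12) with T = 20 and 𝜌 = 𝜌* = 0.9):

T, rho = 20, 0.9
var_R = (1 / T) * (1 + rho * rho) / (1 - rho * rho)
print(round(var_R, 2))                        # about 0.48, in line with the 0.47 quoted above
print(round((T - 3) / (T + 3), 3))            # 0.739: rho*rho must exceed this for Var(R) > 1/3
print(round(((T - 3) / (T + 3)) ** 0.5, 2))   # about 0.86, the threshold on rho when rho = rho*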

It has been shown that when 𝜌 is close to 1, 𝐑² can be very high, suggesting a strong relationship between 𝐘ₜ and 𝐗ₜ. However, in reality, the two series are completely independent. When 𝜌 is near 1, both series behave like random walks or near-random walks. On top of that, both series are highly autocorrelated, which causes the residuals from the regression to also be strongly autocorrelated. As a result, the Durbin-Watson statistic 𝐝 will be very low.

This is why a high 𝐑² in this context should never be taken as evidence of a true relationship between the two series.

To explore the possibility of obtaining a spurious regression when regressing two independent random walks, a series of simulations proposed by Granger and Newbold (1974) will be conducted in the next section.

4. Simulation results using Python.

In this section, we will show through simulations that regressing independent random walks on each other biases the estimation of the coefficients and invalidates the hypothesis tests on those coefficients. The Python code that produces the simulation results is presented in section 7.

A regression equation proposed by Granger and Newbold (1974) is given by:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ

Where 𝐘ₜ and 𝐗ₜ were generated as independent random walks, each of length 50. The values of 𝐒 = |𝛽̂₁| / 𝐒𝐄̂(𝛽̂₁), the statistic for testing the significance of 𝛽₁, over 100 simulations are reported in the table below.

Table 1: Regressing two independent random walks

The null hypothesis of no relationship between 𝐘ₜ and 𝐗ₜ is rejected at the 5% level if 𝐒 > 2. This table shows that the null hypothesis (𝛽₁ = 0) is wrongly rejected in about three-quarters of all cases (71 times out of 100). This is troubling because the two variables are independent random walks, meaning there is no actual relationship. Let's break down why this happens.

If 𝛽̂₁ / 𝐒𝐄̂(𝛽̂₁) followed a 𝐍(0,1) distribution, the expected value of its absolute value 𝐒 would be √(2/π) ≈ 0.8 (the mean of the absolute value of a standard normal distribution). However, the simulation results show an average of 4.59, meaning the conventional standard error understates the true variability of 𝛽̂₁, so that 𝐒 is inflated by a factor of about:

4.59 / 0.8 = 5.7

In classical statistics, we usually use a t-test threshold of around 2 to check the significance of a coefficient. However, these results show that, in this case, you would need to use a threshold of 11.4 to properly test for significance:

2 × (4.59 / 0.8) = 11.4
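
These numbers can be checked in a couple of lines (4.59 is the simulated average of 𝐒 from Table 1, and 0.8 is the rounded value of √(2/π)):

import numpy as np

expected_abs_S = np.sqrt(2 / np.pi)      # about 0.80, the mean of |N(0,1)|
inflation = 4.59 / expected_abs_S        # about 5.8 (5.7 when using the rounded 0.8)
print(round(inflation, 1), round(2 * inflation, 1))   # roughly the 5.7 and 11.4 quoted above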

Interpretation: We’ve just shown that including variables that don’t belong in the model — especially random walks — can lead to completely invalid significance tests for the coefficients.

To make their simulations even clearer, Granger and Newbold (1974) ran a series of regressions using variables that follow either a random walk or an ARIMA(0,1,1) process.

Here is how they set up their simulations:

They regressed a dependent series 𝐘ₜ on m series 𝐗ⱼ,ₜ (with j = 1, 2, …, m), varying m from 1 to 5. The dependent series 𝐘ₜ and the independent series 𝐗ⱼ,ₜ follow the same types of processes, and they tested four cases:

  • Case 1 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow random walks.
  • Case 2 (Differences): They use the first differences of the random walks, which are stationary.
  • Case 3 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow ARIMA(0,1,1).
  • Case 4 (Differences): They use the first differences of the previous ARIMA(0,1,1) processes, which are stationary.

Each series has a length of 50 observations, and they ran 100 simulations for each case.

All error terms are distributed as 𝐍(0,1), and the ARIMA(0,1,1) series are derived as the sum of the random walk and independent white noise. The simulation results, based on 100 replications with series of length 50, are summarized in the next table.

Table 2: Regressions of a series on m independent ‘explanatory’ series.

Interpretation of the results:

  • When the regressions are run on random-walk levels (rw-levels), the probability of not rejecting the null hypothesis of no relationship between 𝐘ₜ and 𝐗ⱼ,ₜ becomes very small once m ≥ 3, while the mean 𝐑² and the mean Durbin-Watson statistic increase. Similar results are obtained when the regressions use ARIMA(0,1,1) levels (arima-levels).
  • When white noise series (rw-diffs) are used, classical regression analysis is valid since the error series will be white noise and least squares will be efficient.
  • However, when the regressions are made with the differences of the ARIMA(0,1,1) series (arima-diffs), which follow a first-order moving-average MA(1) process, the null hypothesis is rejected on average

(10 + 16 + 5 + 6 + 6) / 5 = 8.6

percent of the time, which is greater than the nominal 5% level.

If your variables are random walks or close to them, and you include unnecessary variables in your regression, you will often get fallacious results. High 𝐑² and low Durbin-Watson values do not confirm a true relationship but instead indicate a likely spurious one.

5. How to avoid spurious regression in time series

It’s really hard to come up with a complete list of ways to avoid spurious regressions. However, there are a few good practices you can follow to minimize the risk as much as possible.

If one performs a regression analysis with time series data and finds that the residuals are strongly autocorrelated, there is a serious problem when it comes to interpreting the coefficients of the equation. To check for autocorrelation in the residuals, one can use the Durbin-Watson test or the Portmanteau test.
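
As an illustration (simulated data and arbitrary parameters, not results from the article), both diagnostics are available in statsmodels:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=100))        # two independent random walks
x = np.cumsum(rng.normal(size=100))
resid = sm.OLS(y, sm.add_constant(x)).fit().resid

print(durbin_watson(resid))                # values far below 2 signal positive autocorrelation
print(acorr_ljungbox(resid, lags=[10]))    # small p-values reject "no autocorrelation" (portmanteau test)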

Based on the study above, we can conclude that if a regression analysis performed with economic variables produces strongly autocorrelated residuals, meaning a low Durbin-Watson statistic, then the results of the analysis are likely to be spurious, whatever the value of the coefficient of determination R² observed.

In such cases, it is important to understand where the misspecification comes from. According to the literature, misspecification usually falls into three categories: (i) the omission of a relevant variable, (ii) the inclusion of an irrelevant variable, or (iii) autocorrelation of the errors. Most of the time, misspecification comes from a mix of these three sources.

To avoid spurious regression in a time series, several recommendations can be made:

  • The first recommendation is to select the right macroeconomic variables that are likely to explain the dependent variable. This can be done by reviewing the literature or consulting experts in the field.
  • The second recommendation is to stationarize the series by taking first differences. In most cases, the first differences of macroeconomic variables are stationary and still easy to interpret. For macroeconomic data, it is strongly recommended to difference the series once to reduce the autocorrelation of the residuals, especially when the sample size is small. Strong serial correlation is indeed often observed in these variables, and a simple calculation shows that the first differences will almost always have much smaller serial correlations than the original series.
  • The third recommendation is to use the Box-Jenkins methodology to model each macroeconomic variable individually and then search for relationships between the series by relating the residuals from each individual model. The idea is that the Box-Jenkins process extracts the explained part of each series, leaving residuals that contain only what cannot be explained by the series' own past behavior. This makes it easier to check whether these unexplained parts (residuals) are related across variables; a minimal sketch follows after this list.
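
Below is a minimal sketch of this third recommendation (the simulated series and the ARIMA orders are purely illustrative; in practice the orders would be chosen through the Box-Jenkins identification steps):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=200))              # stand-ins for two macroeconomic series
x = np.cumsum(rng.normal(size=200))

resid_y = ARIMA(y, order=(0, 1, 1)).fit().resid  # model each series individually
resid_x = ARIMA(x, order=(0, 1, 1)).fit().resid

print(np.corrcoef(resid_y, resid_x)[0, 1])       # close to zero here: no genuine relationship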

6. Conclusion

Many econometrics textbooks warn about specification errors in regression models, but the problem still shows up in many published papers. Granger and Newbold (1974) highlighted the risk of spurious regressions, where you get a high R² paired with a very low Durbin-Watson statistic.

Using Python simulations, we showed some of the main causes of these spurious regressions, especially including variables that don’t belong in the model and are highly autocorrelated. We also demonstrated how these issues can completely distort hypothesis tests on the coefficients.

Hopefully, this post will help reduce the risk of spurious regressions in future econometric analyses.

7. Appendix: Python code for simulation.

#####################################################Simulation Code for table 1 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

np.random.seed(123)
M = 100 
n = 50
S = np.zeros(M)
for i in range(M):
#---------------------------------------------------------------
# Generate the data
#---------------------------------------------------------------
    epsilon_y = np.random.normal(0, 1, n)
    epsilon_x = np.random.normal(0, 1, n)

    Y = np.cumsum(epsilon_y)
    X = np.cumsum(epsilon_x)
#---------------------------------------------------------------
# Fit the model
#---------------------------------------------------------------
    X = sm.add_constant(X)
    model = sm.OLS(Y, X).fit()
#---------------------------------------------------------------
# Compute the statistic
#------------------------------------------------------
    S[i] = np.abs(model.params[1])/model.bse[1]


#------------------------------------------------------ 
#              Maximum value of S
#------------------------------------------------------
S_max = int(np.ceil(max(S)))

#------------------------------------------------------ 
#                Create bins
#------------------------------------------------------
bins = np.arange(0, S_max + 2, 1)  

#------------------------------------------------------
#    Compute the histogram
#------------------------------------------------------
frequency, bin_edges = np.histogram(S, bins=bins)

#------------------------------------------------------
#    Create a dataframe
#------------------------------------------------------

df = pd.DataFrame({
    "S Interval": [f"{int(bin_edges[i])}-{int(bin_edges[i+1])}" for i in range(len(bin_edges)-1)],
    "Frequency": frequency
})
print(df)
print(np.mean(S))

#####################################################Simulation Code for table 2 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from tabulate import tabulate

np.random.seed(1)  # Make the results reproducible

#------------------------------------------------------
# Definition of functions
#------------------------------------------------------

def generate_random_walk(T):
    """
    Generate a series of length T following a random walk:
        Y_t = Y_{t-1} + e_t,
    where e_t ~ N(0,1).
    """
    e = np.random.normal(0, 1, size=T)
    return np.cumsum(e)

def generate_arima_0_1_1(T):
    """
    Generate an ARIMA(0,1,1) series following Granger & Newbold's method:
    the series is obtained by adding a random walk and independent white noise.
    """
    rw = generate_random_walk(T)
    wn = np.random.normal(0, 1, size=T)
    return rw + wn

def difference(series):
    """
    Compute the first difference of a one-dimensional series.
    Returns a series of length T-1.
    """
    return np.diff(series)

#------------------------------------------------------
# Parameters
#------------------------------------------------------

T = 50           # length of each series
n_sims = 100     # number of Monte Carlo simulations
alpha = 0.05     # significance level

#------------------------------------------------------
# Definition of function for simulation
#------------------------------------------------------

def run_simulation_case(case_name, m_values=[1,2,3,4,5]):
    """
    case_name : an identifier for the data-generating process:
        - 'rw-levels'    : random walks (levels)
        - 'rw-diffs'     : differences of random walks (i.e. white noise)
        - 'arima-levels' : ARIMA(0,1,1) in levels
        - 'arima-diffs'  : differences of an ARIMA(0,1,1) => MA(1)

    m_values : list of numbers of regressors.

    Returns a DataFrame with, for each m:
        - % of rejections of H0
        - mean Durbin-Watson statistic
        - mean adjusted R^2
        - % of adjusted R^2 > 0.7
    """
    results = []
    
    for m in m_values:
        count_reject = 0
        dw_list = []
        r2_adjusted_list = []
        
        for _ in range(n_sims):
#--------------------------------------
# 1) Generate independent Y_t and X_{j,t}
#--------------------------------------
            if case_name == 'rw-levels':
                Y = generate_random_walk(T)
                Xs = [generate_random_walk(T) for __ in range(m)]
            
            elif case_name == 'rw-diffs':
                # Y and the X's are differences of random walks, i.e. ~ white noise
                Y_rw = generate_random_walk(T)
                Y = difference(Y_rw)
                Xs = []
                for __ in range(m):
                    X_rw = generate_random_walk(T)
                    Xs.append(difference(X_rw))
                # NB: Y and Xs now have length T-1,
                # so the effective sample size for the regression is T-1
            
            elif case_name == 'arima-levels':
                Y = generate_arima_0_1_1(T)
                Xs = [generate_arima_0_1_1(T) for __ in range(m)]
            
            elif case_name == 'arima-diffs':
                # Differences of an ARIMA(0,1,1) => MA(1) process
                Y_arima = generate_arima_0_1_1(T)
                Y = difference(Y_arima)
                Xs = []
                for __ in range(m):
                    X_arima = generate_arima_0_1_1(T)
                    Xs.append(difference(X_arima))
            
            # 2) Prepare the data for the regression
            #    Depending on the case, the series length is T or T-1
            if case_name in ['rw-levels','arima-levels']:
                Y_reg = Y
                X_reg = np.column_stack(Xs) if m>0 else np.array([])
            else:
                # in the differenced cases, the length is T-1
                Y_reg = Y
                X_reg = np.column_stack(Xs) if m>0 else np.array([])

            # 3) OLS regression
            X_with_const = sm.add_constant(X_reg)  # add the intercept
            model = sm.OLS(Y_reg, X_with_const).fit()
            
            # 4) Global F test: H0: all beta_j = 0
            #    Reject H0 when the p-value is below alpha
            if model.f_pvalue is not None and model.f_pvalue < alpha:
                count_reject += 1

            # 5) Store the Durbin-Watson statistic and the adjusted R^2
            dw_list.append(durbin_watson(model.resid))
            r2_adjusted_list.append(model.rsquared_adj)

        # Summary statistics over the n_sims replications
        reject_percent = 100.0 * count_reject / n_sims
        dw_mean = np.mean(dw_list)
        r2_mean = np.mean(r2_adjusted_list)
        r2_above_0_7_percent = 100.0 * np.mean(np.array(r2_adjusted_list) > 0.7)

        results.append({
            'm': m,
            'Reject %': reject_percent,
            'Mean DW': dw_mean,
            'Mean R^2': r2_mean,
            '% R^2_adj>0.7': r2_above_0_7_percent
        })
    
    return pd.DataFrame(results)
    
#------------------------------------------------------
# Application of the simulation
#------------------------------------------------------       

cases = ['rw-levels', 'rw-diffs', 'arima-levels', 'arima-diffs']
all_results = {}

for c in cases:
    df_res = run_simulation_case(c, m_values=[1,2,3,4,5])
    all_results[c] = df_res

#------------------------------------------------------
# Store data in table
#------------------------------------------------------

for case, df_res in all_results.items():
    print(f"nn{case}")
    print(tabulate(df_res, headers='keys', tablefmt='fancy_grid'))

References

  • Granger, Clive W. J., and Paul Newbold. 1974. “Spurious Regressions in Econometrics.” Journal of Econometrics 2 (2): 111–20.
  • Knowles, E. A. G. 1954. “Exercises in Theoretical Statistics.” Oxford University Press.
Edged US is targeting a narrow but increasingly valuable lane of the hyperscale AI infrastructure market: high-density compute delivered at speed, paired with a sustainability posture centered on waterless, closed-loop cooling and a portfolio-wide design PUE target of roughly 1.15. Two recent announcements illustrate the model. In Aurora, Illinois, Edged is developing a 72-MW facility purpose-built for AI training and inference, with liquid-to-chip cooling designed to support rack densities exceeding 200 kW. In Irving, Texas, a 24-MW campus expansion combines air-cooled densities above 120 kW per rack with liquid-to-chip capability reaching 400 kW. Taken together, the projects point to a consistent strategy: standardized, multi-building campuses in major markets; a vertically integrated technical stack with cooling at its core; and an operating model built around repeatable designs, modular systems, and readiness for rapidly escalating AI densities. A Campus-First Platform Strategy Edged US’s platform strategy is built around campus-scale expansion rather than one-off facilities. The company positions itself as a gigawatt-scale, AI-ready portfolio expanding across major U.S. metros through repeatable design targets and multi-building campuses: an emphasis that is deliberate and increasingly consequential. In Chicago/Aurora, Edged is developing a multi-building campus with an initial facility already online and a second 72-MW building under construction. Dallas/Irving follows the same playbook: the first facility opened in January 2025, with a second 24-MW building approved unanimously by the city. Taken together with developments in Atlanta, Chicago, Columbus, Dallas, Des Moines, Kansas City, and Phoenix, the footprint reflects a portfolio-first mindset rather than a collection of bespoke sites. This focus on campus-based expansion matters because the AI factory era increasingly rewards developers that can execute three things at once: Lock down power and land at scale. Standardize delivery across markets. Operate efficiently while staying aligned with community and regulatory expectations. Edged is explicitly selling the second

Read More »

CBRE’s 2026 Data Center Outlook: Demand Surges as Delivery Becomes the Constraint

The U.S. data center market is entering 2026 with fundamentals that remain unmatched across commercial real estate, but the nature of the dominant constraint has shifted. Demand is no longer gated by capital, connectivity, or even land. It is gated by the ability to deliver very large blocks of power, on aggressive timelines, at a predictable cost. According to the CBRE 2026 U.S. Real Estate Market Outlook as overseen by Gordon Dolven and Pat Lynch, the sector is on track to post another record year for leasing activity, even as vacancy remains at historic lows and pricing reaches all-time highs. What has changed is the scale at which demand now presents itself, and the difficulty of meeting it. Large-Block Leasing Rewrites the Economics AI-driven workloads are reshaping leasing dynamics in ways that break from prior hyperscale norms. Where 10-MW-plus deployments once commanded pricing concessions, CBRE now observes the opposite behavior: large, contiguous blocks of capacity are commanding premiums. Neocloud providers, GPU-as-a-service platforms and AI startups, many backed by aggressive capital deployment strategies, are actively competing for full-building and campus-scale capacity.  For operators, this is altering development and merchandising strategies. Rather than subdividing shells for flexibility, owners increasingly face a strategic choice: hold buildings intact to preserve optionality for single-tenant, high-density users who are willing to pay for scale. In effect, scale itself has become the scarce asset. Behind-the-Meter Power Moves to the Foreground For data centers, power availability meaning not just access, but certainty of delivery, is now the defining variable in the market.  CBRE notes accelerating adoption of behind-the-meter strategies as operators seek to bypass increasingly constrained utility timelines. On-site generation using natural gas, solar, wind, and battery storage is gaining traction, particularly in deregulated electricity markets where operators have more latitude to structure BYOP (bring your own power) solutions. 

Read More »

Blue Origin targets enterprise networks with a multi-terabit satellite connectivity plan

“It’s ideal for remote, sparse, or sensitive regions,” said Manish Rawat, analyst at TechInsights. “Key use cases include cloud-to-cloud links, data center replication, government, defense, and disaster recovery workloads. It supports rapid or temporary deployments and prioritizes fewer customers with high capacity, strict SLAs, and deep carrier integration.” Adoption, however, is expected to largely depend on the sector. For governments and organizations operating highly critical or sensitive infrastructure, where reliability and security outweigh cost considerations, this could be attractive as a redundancy option. “Banks, national security agencies, and other mission-critical operators may consider it as an alternate routing path,” Jain said. “For most enterprises, however, it is unlikely to replace terrestrial connectivity and would instead function as a supplementary layer.” Real-world performance Although satellite connectivity offers potential advantages, analysts note that questions remain around real-world performance. “TeraWave’s 6 Tbps refers to total constellation capacity, not per-user throughput, achieved via multiple optical inter-satellite links and ground gateways,” Rawat said. “Optical crosslinks provide high aggregate bandwidth but not a single terabit-class pipe. Performance lies between fiber and GEO satellites, with lower intercontinental latency than GEO but higher than fiber.” Operational factors could also affect network stability. Jitter is generally low, but handovers, rerouting, and weather conditions can introduce intermittent performance spikes. Packet loss is expected to remain modest but episodic, Rawat added.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »