
Linear Regression in Time Series: Sources of Spurious Regression


1. Introduction

It’s pretty clear that most of our work will be automated by AI in the future. This will be possible because many researchers and professionals are working hard to make their work available online. These contributions not only help us understand fundamental concepts but also refine AI models, ultimately freeing up time to focus on other activities.

However, one concept remains misunderstood, even among experts: spurious regression in time series analysis. This issue arises when regression models suggest strong relationships between variables even when none exist. It is typically observed in time series regression equations that seem to have a high degree of fit — as indicated by a high R² (coefficient of determination) — but with an extremely low Durbin-Watson statistic (d), signaling strong autocorrelation in the error terms.

What is particularly surprising is that almost all econometrics textbooks warn about the danger of autocorrelated errors, yet the issue persists in many published papers. Granger and Newbold (1974) identified several examples. For instance, they found published equations with R² = 0.997 and a Durbin-Watson statistic (d) of 0.53. The most extreme example they found was an equation with R² = 0.999 and d = 0.093.

The problem is especially acute in economics and finance, where many key variables exhibit autocorrelation, or serial correlation between adjacent values, particularly when the sampling interval is small, such as a week or a month; this can lead to misleading conclusions if not handled correctly. For example, today’s GDP is strongly correlated with the GDP of the previous quarter. This post provides a detailed explanation of the results of Granger and Newbold (1974), together with a Python simulation (see section 7) replicating the key results presented in their article.

Whether you’re an economist, data scientist, or analyst working with time series data, understanding this issue is crucial to ensuring your models produce meaningful results.

To walk you through this topic, the next section introduces the random walk and the ARIMA(0,1,1) process. Section 3 explains how Granger and Newbold (1974) describe the emergence of nonsense regressions, with simulation results presented in section 4. Finally, section 5 shows how to avoid spurious regressions when working with time series data.

2. Simple presentation of a Random Walk and ARIMA(0,1,1) Process

2.1 Random Walk

Let 𝐗ₜ be a time series. We say that 𝐗ₜ follows a random walk if its representation is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ. (1)

where 𝜖ₜ is white noise. By recursive substitution, the process can be written as a cumulative sum of white-noise terms, a form that is convenient for simulation. A random walk is non-stationary because its variance depends on the time t.
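
To make this explicit, assume 𝐗₀ = 0 and 𝐕𝐚𝐫(𝜖ₜ) = 𝜎². Summing equation (1) from 1 to t gives 𝐗ₜ = 𝜖₁ + 𝜖₂ + ⋯ + 𝜖ₜ, and since the 𝜖ᵢ are independent, 𝐕𝐚𝐫(𝐗ₜ) = t𝜎², which grows without bound as t increases.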

2.2 ARIMA(0,1,1) Process

The ARIMA(0,1,1) process is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ − 𝜃 𝜖ₜ₋₁. (2)

where 𝜖ₜ is white noise and 𝜃 is the moving-average parameter. The ARIMA(0,1,1) process is non-stationary. It can be written as the sum of an independent random walk and white noise:

𝐗ₜ = 𝐗₀ + random walk + white noise. (3) This form is useful for simulation.
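
To see why this decomposition holds, write 𝐗ₜ = 𝐑𝐖ₜ + 𝐮ₜ, where 𝐑𝐖ₜ is a random walk with innovations 𝜂ₜ and 𝐮ₜ is an independent white noise. The first difference is then Δ𝐗ₜ = 𝜂ₜ + 𝐮ₜ − 𝐮ₜ₋₁, which has nonzero autocorrelation at lag 1 only. It is therefore an MA(1) process, which makes 𝐗ₜ an ARIMA(0,1,1) process; this is a standard argument, with 𝜃 determined by matching the lag-1 autocorrelation.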

Such non-stationary series are often employed as benchmarks against which the forecasting performance of other models is judged.

3. Random walk can lead to Nonsense Regression

First, let’s recall the Linear Regression model. The linear regression model is given by:

𝐘 = 𝐗𝛽 + 𝜖. (4)

Where 𝐘 is a T × 1 vector of the dependent variable and 𝛽 is a K × 1 vector of coefficients. 𝐗 is a T × K matrix of independent variables, containing a column of ones and (K−1) columns with T observations on each of the (K−1) independent variables, which are stochastic but distributed independently of the T × 1 vector of errors 𝜖. It is generally assumed that:

𝐄(𝜖) = 0, (5)

and

𝐄(𝜖𝜖′) = 𝜎²𝐈. (6)

where 𝐈 is the identity matrix.

A test of the contribution of independent variables to the explanation of the dependent variable is the F-test. The null hypothesis of the test is given by:

𝐇₀: 𝛽₁ = 𝛽₂ = ⋯ = 𝛽ₖ₋₁ = 0, (7)

and the test statistic is given by:

𝐅 = (𝐑² / (𝐊−1)) / ((1−𝐑²) / (𝐓−𝐊)). (8)

where 𝐑² is the coefficient of determination.
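
As a quick illustration (a sketch of ours, not part of the original article), equation (8) translates directly into code; the numbers below are arbitrary:

import scipy.stats as st

def f_stat(r2, T, K):
    # F statistic of the global significance test, equation (8)
    return (r2 / (K - 1)) / ((1 - r2) / (T - K))

# Example: T = 50 observations, K = 2 (intercept plus one regressor), R^2 = 0.2
F = f_stat(0.2, T=50, K=2)
p_value = st.f.sf(F, dfn=2 - 1, dfd=50 - 2)  # upper tail of the F(K-1, T-K) distribution
print(F, p_value)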

To see the problem, suppose the null hypothesis is true, and one tries to fit a regression of the form (Equation 4) to the levels of an economic time series. Suppose next that these series are not stationary or are highly autocorrelated. In such a situation, the test procedure is invalid, since 𝐅 in (Equation 8) is not distributed as an F-distribution under the null hypothesis (Equation 7). In fact, under the null hypothesis, the errors or residuals from (Equation 4) are given by:

𝜖ₜ = 𝐘ₜ − 𝛽̂₀ ; t = 1, 2, …, T. (9)

where 𝛽̂₀ is the estimated intercept (the sample mean of 𝐘). These residuals will have the same autocorrelation structure as the original series 𝐘.

Some idea of the distributional problem can be gained from the situation where:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ. (10)

Where 𝐘ₜ and 𝐗ₜ follow independent first-order autoregressive processes:

𝐘ₜ = 𝜌 𝐘ₜ₋₁ + 𝜂ₜ, and 𝐗ₜ = 𝜌* 𝐗ₜ₋₁ + 𝜈ₜ. (11)

Where 𝜂ₜ and 𝜈ₜ are white noise.

We know that in this case 𝐑² is the square of the correlation between 𝐘ₜ and 𝐗ₜ. Granger and Newbold use a result of Kendall’s, reported in Knowles (1954), which gives the variance of 𝐑:

𝐕𝐚𝐫(𝐑) = (1/T) · (1 + 𝜌𝜌*) / (1 − 𝜌𝜌*). (12)

Since 𝐑 is constrained to lie between −1 and 1, if its variance is greater than 1/3 (the variance of a uniform distribution on [−1, 1]), the distribution of 𝐑 cannot have a mode at 0. Setting 𝐕𝐚𝐫(𝐑) > 1/3 in equation (12) and solving implies that this happens when 𝜌𝜌* > (T−3) / (T+3).

Thus, for example, if T = 20 and 𝜌 = 𝜌*, a distribution that is not unimodal at 0 will be obtained if 𝜌 > 0.86; and if 𝜌 = 0.9, 𝐕𝐚𝐫(𝐑) = 0.47. Since 𝐄(𝐑) = 0 by symmetry, 𝐄(𝐑²) = 𝐕𝐚𝐫(𝐑), so 𝐄(𝐑²) will be close to 0.47.
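
A minimal simulation sketch (not from the original article) makes this concrete: generate many pairs of independent AR(1) series with 𝜌 = 𝜌* = 0.9 and T = 20, and look at the variance of the sample correlation.

import numpy as np

rng = np.random.default_rng(0)
T, rho, n_sims = 20, 0.9, 20_000
corrs = np.empty(n_sims)

for s in range(n_sims):
    y = np.empty(T)
    x = np.empty(T)
    # Start each series from its stationary distribution N(0, 1/(1 - rho^2))
    y[0] = rng.normal(0, 1 / np.sqrt(1 - rho**2))
    x[0] = rng.normal(0, 1 / np.sqrt(1 - rho**2))
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
        x[t] = rho * x[t - 1] + rng.normal()
    corrs[s] = np.corrcoef(y, x)[0, 1]

# The empirical variance should be in the neighborhood of the value
# (1/T)(1 + rho^2)/(1 - rho^2) ≈ 0.47 given by equation (12)
print(np.var(corrs))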

It has been shown that when 𝜌 is close to 1, 𝐑² can be very high, suggesting a strong relationship between 𝐘ₜ and 𝐗ₜ. However, in reality, the two series are completely independent. When 𝜌 is near 1, both series behave like random walks or near-random walks. On top of that, both series are highly autocorrelated, which causes the residuals from the regression to also be strongly autocorrelated. As a result, the Durbin-Watson statistic 𝐝 will be very low.

This is why a high 𝐑² in this context should never be taken as evidence of a true relationship between the two series.

To explore the possibility of obtaining a spurious regression when regressing two independent random walks, a series of simulations proposed by Granger and Newbold (1974) will be conducted in the next section.

4. Simulation results using Python

In this section, we show by simulation that regressing independent random walks on one another biases the estimation of the coefficients and invalidates the hypothesis tests on the coefficients. The Python code that produces the simulation results is presented in section 7.

A regression equation proposed by Granger and Newbold (1974) is given by:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ

Where 𝐘ₜ and 𝐗ₜ were generated as independent random walks, each of length 50. The values 𝐒 = |𝛽̂₁| / 𝐒𝐄̂(𝛽̂₁), the statistic for testing the significance of 𝛽₁, are reported for 100 simulations in the table below.

Table 1: Frequency distribution of the statistic 𝐒 from regressing two independent random walks (100 simulations)

The null hypothesis of no relationship between 𝐘ₜ and 𝐗ₜ is rejected at the 5% level if 𝐒 > 2. The table shows that the null hypothesis (𝛽₁ = 0) is wrongly rejected in about three-quarters (71 times) of all cases. This is striking because the two variables are independent random walks, meaning there is no actual relationship. Let’s break down why this happens.

If 𝛽̂₁ / 𝐒𝐄̂(𝛽̂₁) followed a 𝐍(0,1) distribution, the expected value of 𝐒, its absolute value, would be √(2/π) ≈ 0.8 (the mean of the absolute value of a standard normal variable). However, the simulation results show an average of 4.59, meaning the usual standard errors understate the true sampling variability by a factor of about:

4.59 / 0.8 ≈ 5.7

In classical statistics, we usually use a t-test threshold of around 2 to check the significance of a coefficient. However, these results show that, in this case, you would need to use a threshold of 11.4 to properly test for significance:

2 × (4.59 / 0.8) = 11.4
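
As a quick sanity check of the √(2/π) baseline (a one-line sketch, not from the article):

import numpy as np
rng = np.random.default_rng(0)
print(np.abs(rng.normal(size=1_000_000)).mean())  # ≈ sqrt(2/pi) ≈ 0.7979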

Interpretation: We’ve just shown that including variables that don’t belong in the model — especially random walks — can lead to completely invalid significance tests for the coefficients.

To make their simulations even clearer, Granger and Newbold (1974) ran a series of regressions using variables that follow either a random walk or an ARIMA(0,1,1) process.

Here is how they set up their simulations:

They regressed a dependent series 𝐘ₜ on m series 𝐗ⱼ,ₜ (with j = 1, 2, …, m), varying m from 1 to 5. The dependent series 𝐘ₜ and the independent series 𝐗ⱼ,ₜ follow the same types of processes, and they tested four cases:

  • Case 1 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow random walks.
  • Case 2 (Differences): They use the first differences of the random walks, which are stationary.
  • Case 3 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow ARIMA(0,1,1).
  • Case 4 (Differences): They use the first differences of the previous ARIMA(0,1,1) processes, which are stationary.

Each series has a length of 50 observations, and they ran 100 simulations for each case.

All error terms are distributed as 𝐍(0,1), and the ARIMA(0,1,1) series are derived as the sum of the random walk and independent white noise. The simulation results, based on 100 replications with series of length 50, are summarized in the next table.

Table 2: Regressions of a series on m independent ‘explanatory’ series.

Interpretation of the results:

  • It is seen that the probability of not rejecting the null hypothesis of no relationship between 𝐘ₜ and 𝐗ⱼ,ₜ becomes very small when m ≥ 3 for regressions on random walks in levels (rw-levels), while the mean 𝐑² and the mean Durbin-Watson statistic increase with m. Similar results are obtained when the regressions are made with ARIMA(0,1,1) series in levels (arima-levels).
  • When white noise series (rw-diffs) are used, classical regression analysis is valid since the error series will be white noise and least squares will be efficient.
  • However, when the regressions are made with the differences of ARIMA(0,1,1) series (arima-diffs), which follow a first-order moving average MA(1) process, the null hypothesis is rejected, on average,

(10 + 16 + 5 + 6 + 6) / 5 = 8.6

percent of the time, which is greater than the nominal 5% level.

If your variables are random walks or close to them, and you include unnecessary variables in your regression, you will often get fallacious results. High 𝐑² and low Durbin-Watson values do not confirm a true relationship but instead indicate a likely spurious one.

5. How to avoid spurious regression in time series

It’s really hard to come up with a complete list of ways to avoid spurious regressions. However, there are a few good practices you can follow to minimize the risk as much as possible.

If one performs a regression analysis with time series data and finds that the residuals are strongly autocorrelated, there is a serious problem when it comes to interpreting the coefficients of the equation. To check for autocorrelation in the residuals, one can use the Durbin-Watson test or the Portmanteau test.
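
For instance, here is a minimal sketch (not from the original article) of both diagnostics applied to a deliberately spurious regression; the data are simulated:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
# Two independent random walks, as in the simulations above
y = np.cumsum(rng.normal(size=50))
x = np.cumsum(rng.normal(size=50))

model = sm.OLS(y, sm.add_constant(x)).fit()
print("Durbin-Watson:", durbin_watson(model.resid))  # far below 2 signals positive autocorrelation
print(acorr_ljungbox(model.resid, lags=[10], return_df=True))  # Portmanteau (Ljung-Box) test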

Based on the study above, we can conclude that if a regression analysis performed with economic variables produces strongly autocorrelated residuals, meaning a low Durbin-Watson statistic, then the results of the analysis are likely to be spurious, whatever the observed value of the coefficient of determination R².

In such cases, it is important to understand where the misspecification comes from. According to the literature, misspecification usually falls into three categories: (i) the omission of a relevant variable, (ii) the inclusion of an irrelevant variable, or (iii) autocorrelation of the errors. Most of the time, misspecification comes from a mix of these three sources.

To avoid spurious regression in a time series, several recommendations can be made:

  • The first recommendation is to select the right macroeconomic variables that are likely to explain the dependent variable. This can be done by reviewing the literature or consulting experts in the field.
  • The second recommendation is to make the series stationary by taking first differences. In most cases, the first differences of macroeconomic variables are stationary and still easy to interpret. For macroeconomic data, it’s strongly recommended to difference the series once to reduce the autocorrelation of the residuals, especially when the sample size is small; strong serial correlation is indeed often observed in these variables. A simple calculation shows that the first differences will almost always have much smaller serial correlations than the original series (see the sketch after this list).
  • The third recommendation is to use the Box-Jenkins methodology to model each macroeconomic variable individually and then search for relationships between the series by relating the residuals from each individual model. The idea here is that the Box-Jenkins process extracts the explained part of the series, leaving the residuals, which contain only what can’t be explained by the series’ own past behavior. This makes it easier to check whether these unexplained parts (residuals) are related across variables.
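
Here is a minimal sketch of the second and third recommendations (not from the original article; the series are simulated, and ARIMA(0,1,1) is just an illustrative choice of individual model):

import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=200))  # a random walk: non-stationary in levels
x = np.cumsum(rng.normal(size=200))

# Recommendation 2: difference, then check stationarity with the ADF test
# (H0 = unit root; a small p-value suggests stationarity)
print("levels p-value:", adfuller(y)[1])           # large: unit root not rejected
print("diffs  p-value:", adfuller(np.diff(y))[1])  # small: differences look stationary

# Recommendation 3: model each series individually, then relate the residuals
res_y = ARIMA(y, order=(0, 1, 1)).fit().resid
res_x = ARIMA(x, order=(0, 1, 1)).fit().resid
print("residual correlation:", np.corrcoef(res_y, res_x)[0, 1])  # near 0 for unrelated series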

6. Conclusion

Many econometrics textbooks warn about specification errors in regression models, but the problem still shows up in many published papers. Granger and Newbold (1974) highlighted the risk of spurious regressions, where you get a high R² paired with a very low Durbin-Watson statistic.

Using Python simulations, we showed some of the main causes of these spurious regressions, especially including variables that don’t belong in the model and are highly autocorrelated. We also demonstrated how these issues can completely distort hypothesis tests on the coefficients.

Hopefully, this post will help reduce the risk of spurious regressions in future econometric analyses.

7. Appendix: Python code for simulation

#####################################################Simulation Code for table 1 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

np.random.seed(123)
M = 100 
n = 50
S = np.zeros(M)
for i in range(M):
    #-----------------------------------------------------------
    # Generate the data: two independent random walks
    #-----------------------------------------------------------
    epsilon_y = np.random.normal(0, 1, n)
    epsilon_x = np.random.normal(0, 1, n)

    Y = np.cumsum(epsilon_y)
    X = np.cumsum(epsilon_x)

    #-----------------------------------------------------------
    # Fit the OLS regression of Y on X (with an intercept)
    #-----------------------------------------------------------
    X = sm.add_constant(X)
    model = sm.OLS(Y, X).fit()

    #-----------------------------------------------------------
    # Compute the statistic S = |beta_1_hat| / SE(beta_1_hat)
    #-----------------------------------------------------------
    S[i] = np.abs(model.params[1]) / model.bse[1]


#------------------------------------------------------ 
#              Maximum value of S
#------------------------------------------------------
S_max = int(np.ceil(max(S)))

#------------------------------------------------------ 
#                Create bins
#------------------------------------------------------
bins = np.arange(0, S_max + 2, 1)  

#------------------------------------------------------
#    Compute the histogram
#------------------------------------------------------
frequency, bin_edges = np.histogram(S, bins=bins)

#------------------------------------------------------
#    Create a dataframe
#------------------------------------------------------

df = pd.DataFrame({
    "S Interval": [f"{int(bin_edges[i])}-{int(bin_edges[i+1])}" for i in range(len(bin_edges)-1)],
    "Frequency": frequency
})
print(df)
print(np.mean(S))

#####################################################Simulation Code for table 2 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from tabulate import tabulate

np.random.seed(1)  # Make the results reproducible

#------------------------------------------------------
# Definition of functions
#------------------------------------------------------

def generate_random_walk(T):
    """
    Generate a series of length T following a random walk:
        Y_t = Y_{t-1} + e_t,
    where e_t ~ N(0,1).
    """
    e = np.random.normal(0, 1, size=T)
    return np.cumsum(e)

def generate_arima_0_1_1(T):
    """
    Generate an ARIMA(0,1,1) series following Granger & Newbold's method:
    the series is obtained by adding a random walk and an independent white noise.
    """
    rw = generate_random_walk(T)
    wn = np.random.normal(0, 1, size=T)
    return rw + wn

def difference(series):
    """
    Compute the first difference of a one-dimensional series.
    Returns a series of length T-1.
    """
    return np.diff(series)

#------------------------------------------------------
# Parameters
#------------------------------------------------------

T = 50           # length of each series
n_sims = 100     # number of Monte Carlo simulations
alpha = 0.05     # significance level

#------------------------------------------------------
# Definition of function for simulation
#------------------------------------------------------

def run_simulation_case(case_name, m_values=[1,2,3,4,5]):
    """
    case_name : an identifier for the type of data generation:
        - 'rw-levels'    : random walks (levels)
        - 'rw-diffs'     : differences of random walks (white noise)
        - 'arima-levels' : ARIMA(0,1,1) in levels
        - 'arima-diffs'  : differences of an ARIMA(0,1,1) => MA(1)

    m_values : list of the numbers of regressors to try.

    Returns a DataFrame with, for each m:
        - % of rejections of H0
        - mean Durbin-Watson statistic
        - mean adjusted R^2
        - % of adjusted R^2 > 0.7
    """
    results = []
    
    for m in m_values:
        count_reject = 0
        dw_list = []
        r2_adjusted_list = []
        
        for _ in range(n_sims):
            #--------------------------------------------------
            # 1) Generate Y_t and the X_{j,t} independently
            #--------------------------------------------------
            if case_name == 'rw-levels':
                Y = generate_random_walk(T)
                Xs = [generate_random_walk(T) for __ in range(m)]
            
            elif case_name == 'rw-diffs':
                # Y and the X's are differences of random walks, i.e. ~ white noise
                Y_rw = generate_random_walk(T)
                Y = difference(Y_rw)
                Xs = []
                for __ in range(m):
                    X_rw = generate_random_walk(T)
                    Xs.append(difference(X_rw))
                # NB: Y and Xs now have length T-1, which becomes the
                # effective sample size in the regression below
            
            elif case_name == 'arima-levels':
                Y = generate_arima_0_1_1(T)
                Xs = [generate_arima_0_1_1(T) for __ in range(m)]
            
            elif case_name == 'arima-diffs':
                # Differences of an ARIMA(0,1,1) => MA(1)
                Y_arima = generate_arima_0_1_1(T)
                Y = difference(Y_arima)
                Xs = []
                for __ in range(m):
                    X_arima = generate_arima_0_1_1(T)
                    Xs.append(difference(X_arima))
            
            # 2) Prepare the data for the regression
            #    (the series have length T in levels and T-1 in differences)
            Y_reg = Y
            X_reg = np.column_stack(Xs)

            # 3) OLS regression with an intercept
            X_with_const = sm.add_constant(X_reg)
            model = sm.OLS(Y_reg, X_with_const).fit()

            # 4) Global F-test, H0: all beta_j = 0
            #    Reject when the p-value is below alpha
            if model.f_pvalue is not None and model.f_pvalue < alpha:
                count_reject += 1

            dw_list.append(durbin_watson(model.resid))
            r2_adjusted_list.append(model.rsquared_adj)

        # Aggregate over the n_sims replications for this value of m
        reject_percent = 100.0 * count_reject / n_sims
        dw_mean = np.mean(dw_list)
        r2_mean = np.mean(r2_adjusted_list)
        r2_above_0_7_percent = 100.0 * np.mean(np.array(r2_adjusted_list) > 0.7)
        
        results.append({
            'm': m,
            'Reject %': reject_percent,
            'Mean DW': dw_mean,
            'Mean R^2': r2_mean,
            '% R^2_adj>0.7': r2_above_0_7_percent
        })
    
    return pd.DataFrame(results)
    
#------------------------------------------------------
# Application of the simulation
#------------------------------------------------------       

cases = ['rw-levels', 'rw-diffs', 'arima-levels', 'arima-diffs']
all_results = {}

for c in cases:
    df_res = run_simulation_case(c, m_values=[1,2,3,4,5])
    all_results[c] = df_res

#------------------------------------------------------
# Store data in table
#------------------------------------------------------

for case, df_res in all_results.items():
    print(f"nn{case}")
    print(tabulate(df_res, headers='keys', tablefmt='fancy_grid'))

References

  • Granger, Clive WJ, and Paul Newbold. 1974. “Spurious Regressions in Econometrics.” Journal of Econometrics 2 (2): 111–20.
  • Knowles, EAG. 1954. “Exercises in Theoretical Statistics.” Oxford University Press.
Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Pure Storage becomes Everpure, acquires 1touch

Other recent research confirms this. In an October Cisco survey of over 8,000 AI leaders, only 35% of companies have clean, centralized data with real-time integration for AI agents. And by 2027, according to IDC, companies that don’t prioritize high-quality, AI-ready data will struggle scaling gen AI and agentic solutions,

Read More »

Western Digital wants to ramp-up hard disk drive speeds

Most enterprises are not using SATA drives, at least not with hot data. Perhaps cold storage but not frequently accessed data. They are using PCI Express based drives and those are considerably faster than anything Western Digital can engineer in a hard disk. Capacity aside, Western Digital is also aiming

Read More »

Energy Secretary Keeps Critical Generation Online in Mid-Atlantic

Emergency order keeps critical generation online and addresses critical grid reliability issues facing the Mid-Atlantic region of the United States WASHINGTON—U.S. Secretary of Energy Chris Wright issued an emergency order to address critical grid reliability issues facing the Mid-Atlantic region of the United States. The emergency order directs PJM Interconnection, L.L.C. (PJM), in coordination with Constellation Energy Corporation, to ensure Units 3 and 4 of the Eddystone Generating Station in Pennsylvania remain available for operation and to employ economic dispatch to minimize costs for the American people. The units were originally slated to shut down on May 31, 2025. “The energy sources that perform when you need them most are inherently the most valuable—that’s why natural gas and oil were valuable during recent winter storms,” Secretary Wright said. “Hundreds of American lives have likely been saved because of President Trump’s actions keeping critical generation online, including this Pennsylvania generating station which ran during Winter Storm Fern. This emergency order will mitigate the risk of blackouts and maintain affordable, reliable, and secure electricity access across the region.” The Eddystone Units were integral in stabilizing the grid during Winter Storm Fern. Between January 26-29, the units ran for over 124 hours cumulatively, providing critical generation in the midst of the energy emergency. As outlined in DOE’s Resource Adequacy Report, power outages could increase by 100 times in 2030 if the U.S. continues to take reliable power offline. Furthermore, NERC’s 2025 Long-Term Reliability Assessment warns, “The continuing shift in the resource mix toward weather-dependent resources and less fuel diversity increases risks of supply shortfalls during winter months.” Secretary Wright ordered that the two Eddystone Generating Station units remain online past their planned retirement date in a May 30, 2025 emergency order. Subsequent orders were issued on August 28, 2025 and November 26, 2025. Keeping these units operational

Read More »

Insights: Venezuela – new legal frameworks vs. the inertia of history

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } In this Insights episode of the Oil & Gas Journal ReEnterprised podcast, Head of Content Chris Smith updates the evolving situation in Venezuela as the industry attempts to navigate the best path forward while the two governments continue to hammer out the details. The discussion centers on the new legal frameworks being established in both countries within the context of fraught relations stretching back for decades. Want to hear more? Listen in on a January episode highlighting industry’s initial take following the removal of Nicholas Maduro from power. References Politico podcast Monaldi Substack Baker webinar Washington, Caracas open Venezuela to allow more oil sales 

Read More »

Eni makes Calao South discovery offshore Ivory Coast

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } Eni SPA discovered gas and condensate in the Murene South-1X exploration well in Block CI-501, Ivory Coast. The well is the first exploration in the block and was drilled by the Saipem Santorini drilling ship about 8 km southwest of the Murene-1X discovery well in adjacent CI-205 block. The well was drilled to about 5,000 m TD in 2,200 m of water. Extensive data acquisition confirmed a main hydrocarbon bearing interval in high-quality Cenomanian sands with a gross thickness of about 50 m with excellent petrophysical properties, the operator said. Murene South-1X will undergo a full conventional drill stem test (DST) to assess the production capacity of this discovery, named Calao South. Calao South confirms the potential of the Calao channel complex that also includes the Calao discovery. It is the second largest discovery in the country after Baleine, with estimated volumes of up to 5.0 tcf of gas and 450 million bbl of condensate (about 1.4 billion bbl of oil). Eni is operator of Block CI-501 (90%) with partner Petroci Holding (10%).

Read More »

CFEnergía to supply natural gas to low-carbon methanol plant in Mexico

CFEnergía, a subsidiary of Mexico’s Federal Electricity Commission (CFE), has agreed to supply natural gas to Transition Industries LLC for its Pacifico Mexinol project near Topolobampo, Sinaloa, Mexico. Under the signed agreement, which enables the start of Pacifico Mexinol’s construction phase, CFEnergía will supply about 160 MMcfd of natural gas for an unspecified timeframe noted as “long term,” Transition Industries said in a release Feb. 16. The natural gas—to be sourced from the US and supplied at market prices via existing infrastructure—will be used as “critical input for Mexinol’s production of ultra-low carbon methanol,” the company said. Pacifico Mexinol The $3.3-billion Mexinol project, when it begins operations in late 2029 to early 2030, is expected to be the world’s largest ultra-low carbon chemicals plant with production of about 1.8 million tonnes of blue methanol and 350,000 tonnes of green methanol annually. Supply is aimed at markets in Asia, including Japan, while also boosting the development of the domestic market and the Mexican chemical industry. Mitsubishi Gas Chemical has committed to purchasing about 1 million tonnes/year of methanol from the project, about 50% of the project’s planned production. Transition Industries is jointly developing Pacifico Mexinol with the International Finance Corporation (IFC), a member of the World Bank Group. Last year, the company signed a contingent engineering, procurement, and construction (EPC) contract with the consortium of Samsung E&A Co., Ltd., Grupo Samsung E&A Mexico SA de CV, and Techint Engineering and Construction for the project. MAIRE group’s technology division NextChem, through its subsidiary KT TECH SpA, also signed a basic engineering, critical and proprietary equipment supply agreement with Samsung E&A in connection with its proprietary NX AdWinMethanol®Zero technology supply to the project.

Read More »

North Atlantic’s Gravenchon refinery scheduled for major turnaround

Canada-based North Atlantic Refining Ltd. France-based subsidiary North Atlantic France SAS is undertaking planned maintenance in March at its North Atlantic Energies-operated 230,000-b/d Notre-Dame-de-Gravenchon refinery in Port-Jérôme-sur-Seine, Normandy. Scheduled to begin on Mar. 3 with the phased shutdown of unidentified units at the refinery, the upcoming turnaround will involve thorough inspections of associated equipment designed for continuous operation, as well as unspecified works to improve energy efficiency, environmental performance, and overall competitiveness of the site, North Atlantic Energies said on Feb. 16. Part of the operator’s routine maintenance program aimed at meeting regulatory requirements to ensure the safety, compliance, and long-term performance of the refinery, North Atlantic Energies said the scheduled turnaround will not interrupt product supplies to customers during the shutdown period. While the company confirmed the phased shutdown of units slated for work during the maintenance event would last for several days, the operator did not reveal a definitive timeline for the entire duration of the turnaround. Further details regarding specific works to be carried out during the major maintenance event were not revealed. The upcoming turnaround will be the first to be executed under North Atlantic Group’s ownership, which completed its purchase of the formerly majority-owned ExxonMobil Corp. refinery and associated petrochemical assets at the site in November 2025.

Read More »

Azule Energy starts Ndungu full field production offshore Angola

Azule Energy has started full field production from Ndungu, part of the Agogo Integrated West Hub Project (IWH) in the western area of Block 15/06, offshore Angola. Ndungo full field lies about 10 km from the NGOMA FPSO in a water depth of around 1,100 m and comprises seven production wells and four injection wells, with an expected production peak of 60,000 b/d of oil. The National Agency for Petroleum, Gas and Biofuels (ANPG) and Azule Energy noted the full field start-up with first oil of three production wells. The phased integration of IWH, with Ndungu full field producing first via N’goma FPSO and later via Agogo FPSO, is expected to reach a peak output of about 175,000 b/d across the two fields. The fields have combined estimated reserves of about 450 million bbl. The Agogo IWH project is operated by Azule Energy with a 36.84% stake alongside partners Sonangol E&P (36.84%) and Sinopec International (26.32%).   

Read More »

Nvidia lines up partners to boost security for industrial operations

Akamai extends its micro-segmentation and zero-trust security platform Guardicore to run on Nvidia BlueField GPUs The integration offloads user-configurable security processes from the host system to the Nvidia BlueField DPU and enables zero-trust segmentation without requiring software agents on fragile or legacy systems, according to Akamai. Organizations can implement this hardware-isolated, “agentless” security approach to help align with regulatory requirements and lower their risk profile for cyber insurance. “It delivers deep, out-of-band visibility across systems, networks, and applications without disrupting operations. Security policies can be enforced in real time and are capable of creating a strong protective boundary around critical operational systems. The result is trusted insight into operational activity and improved overall cyber resilience,” according to Akamai. Forescout works with Nvidia to bring zero-trust technology to OT networks Forescout applies network segmentation to contain lateral movement and enforce zero-trust controls. The technology would be further integrated into partnership work already being done by the two companies. By running Forescout’s on-premises sensor directly on the Nvidia BlueField, part of Nvidia Cybersecurity AI platform, customers can offload intensive computing tasks, such as deep packet inspections. This speeds up data processing, enhances asset intelligence, and improves real-time monitoring, providing security teams with the insights needed to stay ahead of emerging threats, according to Forescout. Palo Alto to demo Prisma AIRS AI Runtime Security on Nvidia BlueField DPU Palo Alto Networks recently partnered with Nvidia to run its Prisma AI-powered Radio Security(AIRs) package on the Nvidia BlueField DPU and will show off the technology at the conference. The technology is part of the Nvidia Enterprise AI Factory validated design and can offer real-time security protection for industrial network settings. “Prisma AIRS AI Runtime Security delivers deep visibility into industrial traffic and continuous monitoring for abnormal behavior. By running these security services on Nvidia BlueField, inspection

Read More »

Raising the temp on liquid cooling

IBM isn’t the only one. “We’ve been doing liquid cooling since 2012 on our supercomputers,” says Scott Tease, vice president and general manager of AI and high-performance computing at Lenovo’s infrastructure solutions group. “And we’ve been improving it ever since—we’re now on the sixth generation of that technology.” And the liquid Lenovo uses in its Neptune liquid cooling solution is warm water. Or, more precisely, hot water: 45 degrees Celsius. And when the water leaves the servers, it’s even hotter, Tease says. “I don’t have to chill that water, even if I’m in a hot climate,” he says. Even at high temperatures, the water still provides enough cooling to the chips that it has real value. “Generally, a data center will use evaporation to chill water down,” Tease adds. “Since we don’t have to chill the water, we don’t have to use evaporation. That’s huge amounts of savings on the water. For us, it’s almost like a perfect solution. It delivers the highest performance possible, the highest density possible, the lowest power consumption. So, it’s the most sustainable solution possible.” So, how is the water cooled down? It gets piped up to the roof, Tease says, where there are giant radiators with massive amounts of surface area. The heat radiates away, and then all the water flows right back to the servers again. Though not always. The hot water can also be used to, say, heat campus or community swimming pools. “We have data centers in the Nordics who are giving the heat to the local communities’ water systems,” Tease says.

Read More »
