Linear Regression in Time Series: Sources of Spurious Regression

1. Introduction

It’s pretty clear that most of our work will be automated by AI in the future. This will be possible because many researchers and professionals are working hard to make their work available online. These contributions not only help us understand fundamental concepts but also refine AI models, ultimately freeing up time to focus on other activities.

However, one concept remains misunderstood, even among experts: spurious regression in time series analysis. The issue arises when a regression model suggests a strong relationship between variables even though none exists. It is typically seen in time series regressions with an apparently excellent fit, as indicated by a high R² (coefficient of determination), but an extremely low Durbin-Watson statistic (d), signaling strong autocorrelation in the error terms.

What is particularly surprising is that almost all econometrics textbooks warn about the danger of autocorrelated errors, yet the issue persists in many published papers. Granger and Newbold (1974) identified several examples. For instance, they found published equations with R² = 0.997 and a Durbin-Watson statistic (d) of 0.53. The most extreme case they found was an equation with R² = 0.999 and d = 0.093.

The problem is especially acute in economics and finance, where many key variables exhibit strong serial correlation between adjacent values, particularly when the sampling interval is small, such as a week or a month; this can lead to misleading conclusions if not handled correctly. For example, this quarter’s GDP is strongly correlated with the previous quarter’s GDP. This post provides a detailed explanation of the results of Granger and Newbold (1974), together with a Python simulation (section 7) replicating the key results presented in their article.

Whether you’re an economist, data scientist, or analyst working with time series data, understanding this issue is crucial to ensuring your models produce meaningful results.

To walk you through this topic, the next section introduces the random walk and the ARIMA(0,1,1) process. In section 3, we explain how Granger and Newbold (1974) describe the emergence of nonsense regressions, with simulation results illustrated in section 4. Finally, section 5 shows how to avoid spurious regressions when working with time series data.

2. Simple presentation of a Random Walk and ARIMA(0,1,1) Process

2.1 Random Walk

Let 𝐗ₜ be a time series. We say that 𝐗ₜ follows a random walk if its representation is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ. (1)

where 𝜖ₜ is white noise. The random walk can be written as a cumulative sum of white noise terms, 𝐗ₜ = 𝐗₀ + 𝜖₁ + 𝜖₂ + ⋯ + 𝜖ₜ, a form that is convenient for simulation. It is a non-stationary time series because its variance, 𝐕𝐚𝐫(𝐗ₜ) = t·𝜎² (where 𝜎² is the variance of 𝜖ₜ), grows with time t.
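A minimal simulation sketch (not from the article) of this cumulative-sum form: generate many random walks and check that the empirical variance grows roughly linearly with t.

import numpy as np

# Simulate 10,000 random walks of length 50 as cumulative sums of N(0,1) noise
# and verify that Var(X_t) grows roughly linearly in t (non-stationarity).
rng = np.random.default_rng(0)
walks = np.cumsum(rng.normal(0, 1, size=(10_000, 50)), axis=1)
print(walks[:, 9].var())    # close to 10 at t = 10
print(walks[:, 49].var())   # close to 50 at t = 50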

2.2 ARIMA(0,1,1) Process

The ARIMA(0,1,1) process is given by:

𝐗ₜ = 𝐗ₜ₋₁ + 𝜖ₜ − 𝜃 𝜖ₜ₋₁. (2)

where 𝜖ₜ is white noise. The ARIMA(0,1,1) process is non-stationary. It can be written as the sum of an independent random walk and white noise:

𝐗ₜ = 𝐗₀ + random walk + white noise. (3) This form is useful for simulation.
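As a quick numerical check (a sketch, not from the article), a series built as a random walk plus independent white noise does behave like an ARIMA(0,1,1): its first difference shows the MA(1) signature, a clearly negative lag-1 autocorrelation and near-zero autocorrelation at higher lags.

import numpy as np

# Build X_t = random walk + independent white noise, then take first differences.
rng = np.random.default_rng(0)
T = 100_000
rw = np.cumsum(rng.normal(size=T))   # random walk component
x = rw + rng.normal(size=T)          # add independent white noise
dx = np.diff(x)                      # first difference, should behave like an MA(1)

# Sample autocorrelations of the differenced series at lags 1 and 2.
print(np.corrcoef(dx[:-1], dx[1:])[0, 1])   # about -1/3 with unit-variance noises
print(np.corrcoef(dx[:-2], dx[2:])[0, 1])   # close to 0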

Those non-stationary series are often employed as benchmarks against which the forecasting performance of other models is judged.

3. Random walk can lead to Nonsense Regression

First, let’s recall the linear regression model, which is given by:

𝐘 = 𝐗𝛽 + 𝜖. (4)

Where 𝐘 is a T × 1 vector of the dependent variable, 𝛽 is a K × 1 vector of coefficients, and 𝐗 is a T × K matrix of independent variables containing a column of ones and (K−1) columns with T observations on each of the (K−1) independent variables, which are stochastic but distributed independently of the T × 1 vector of errors 𝜖. It is generally assumed that:

𝐄(𝜖) = 0, (5)

and

𝐄(𝜖𝜖′) = 𝜎²𝐈. (6)

where 𝐈 is the identity matrix.

A test of the contribution of independent variables to the explanation of the dependent variable is the F-test. The null hypothesis of the test is given by:

𝐇₀: 𝛽₁ = 𝛽₂ = ⋯ = 𝛽ₖ₋₁ = 0, (7)

And the statistic of the test is given by:

𝐅 = (𝐑² / (𝐊−1)) / ((1−𝐑²) / (𝐓−𝐊)). (8)

where 𝐑² is the coefficient of determination.
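To make the formula concrete, here is a small helper (illustrative, not part of the original article; the function name and example values are arbitrary) that computes the F statistic of Equation (8) from R², K and T, together with its p-value under the null hypothesis.

from scipy.stats import f

def f_test_from_r2(r2, K, T):
    # F statistic of Equation (8): K counts the coefficients including the
    # intercept, T is the sample size.
    F = (r2 / (K - 1)) / ((1 - r2) / (T - K))
    p_value = f.sf(F, K - 1, T - K)   # upper tail of the F(K-1, T-K) distribution
    return F, p_value

print(f_test_from_r2(r2=0.2, K=2, T=50))   # example values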

To see why the test breaks down, assume the null hypothesis is true and that one tries to fit a regression of the form (Equation 4) to the levels of an economic time series. Suppose next that these series are not stationary or are highly autocorrelated. In such a situation, the test procedure is invalid, since 𝐅 in (Equation 8) is not distributed as an F-distribution under the null hypothesis (Equation 7). In fact, under the null hypothesis, the errors or residuals from (Equation 4) are given by:

𝜖ₜ = 𝐘ₜ − 𝛽₀ ; t = 1, 2, …, T. (9)

And will have the same autocorrelation structure as the original series 𝐘.

Some idea of the distribution problem can arise in the situation when:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ. (10)

Where 𝐘ₜ and 𝐗ₜ follow independent first-order autoregressive processes:

𝐘ₜ = 𝜌 𝐘ₜ₋₁ + 𝜂ₜ, and 𝐗ₜ = 𝜌* 𝐗ₜ₋₁ + 𝜈ₜ. (11)

Where 𝜂ₜ and 𝜈ₜ are white noise.

We know that in this case 𝐑² is simply the square of the correlation between 𝐘ₜ and 𝐗ₜ. Granger and Newbold use a result of Kendall, reported in Knowles (1954), which gives the variance of 𝐑:

𝐕𝐚𝐫(𝐑) = (1/T) · (1 + 𝜌𝜌*) / (1 − 𝜌𝜌*). (12)

Since 𝐑 is constrained to lie between −1 and 1, if its variance is greater than 1/3 its distribution cannot have a single mode at 0. From (Equation 12), this happens when 𝜌𝜌* > (T−3) / (T+3).

Thus, for example, if T = 20 and 𝜌 = 𝜌*, a distribution that is not unimodal at 0 will be obtained if 𝜌 > 0.86, and if 𝜌 = 0.9, 𝐕𝐚𝐫(𝐑) = 0.47. Since 𝐄(𝐑) = 0 by symmetry, 𝐄(𝐑²) = 𝐕𝐚𝐫(𝐑), so 𝐄(𝐑²) will also be close to 0.47.
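A small Monte Carlo check of Equation (12) (a sketch under simple assumptions: both AR(1) series start at zero with N(0,1) innovations, and the formula itself is only an approximation for small T):

import numpy as np

def ar1(rho, T, rng):
    # Generate an AR(1) series X_t = rho * X_{t-1} + nu_t, starting at zero.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(0)
rho, T, n_rep = 0.9, 20, 20_000
R = np.empty(n_rep)
for i in range(n_rep):
    y = ar1(rho, T, rng)
    x = ar1(rho, T, rng)
    R[i] = np.corrcoef(y, x)[0, 1]   # correlation of two independent AR(1) series

print(R.var())                                 # empirical Var(R)
print((1 / T) * (1 + rho**2) / (1 - rho**2))   # Kendall's formula, about 0.47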

It has been shown that when 𝜌 is close to 1, 𝐑² can be very high, suggesting a strong relationship between 𝐘ₜ and 𝐗ₜ. However, in reality, the two series are completely independent. When 𝜌 is near 1, both series behave like random walks or near-random walks. On top of that, both series are highly autocorrelated, which causes the residuals from the regression to also be strongly autocorrelated. As a result, the Durbin-Watson statistic 𝐝 will be very low.

This is why a high 𝐑² in this context should never be taken as evidence of a true relationship between the two series.

To explore the possibility of obtaining a spurious regression when regressing two independent random walks, a series of simulations proposed by Granger and Newbold (1974) will be conducted in the next section.

4. Simulation results using Python.

In this section, we use simulations to show that regressing independent random walks on one another yields unreliable coefficient estimates and invalid hypothesis tests on the coefficients. The Python code that produces the simulation results is presented in section 7.

A regression equation proposed by Granger and Newbold (1974) is given by:

𝐘ₜ = 𝛽₀ + 𝐗ₜ𝛽₁ + 𝜖ₜ

Where 𝐘ₜ and 𝐗ₜ were generated as independent random walks, each of length 50. The values of 𝐒 = |𝛽̂₁| / 𝐒𝐄̂(𝛽̂₁), the statistic for testing the significance of 𝛽₁, over 100 simulations are reported in the table below.

Table 1: Regressing two independent random walks

The null hypothesis of no relationship between 𝐘ₜ and 𝐗ₜ is rejected at the 5% level if 𝐒 > 2. The table shows that the null hypothesis (𝛽₁ = 0) is wrongly rejected in about three-quarters of all cases (71 times out of 100). This is striking because the two variables are independent random walks, meaning there is no actual relationship. Let’s break down why this happens.

If 𝛽̂₁ / 𝐒𝐄̂(𝛽̂₁) followed a 𝐍(0,1) distribution, the expected value of its absolute value 𝐒 would be √(2/π) ≈ 0.8 (the mean of the absolute value of a standard normal variable). However, the simulation results show an average of 4.59, meaning the statistic is inflated, and the conventional standard errors are underestimated, by a factor of roughly:

4.59 / 0.8 = 5.7

In classical statistics, we usually use a t-test threshold of around 2 to check the significance of a coefficient. However, these results show that, in this case, you would need to use a threshold of 11.4 to properly test for significance:

2 × (4.59 / 0.8) = 11.4
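A quick numerical check (illustrative only) of the √(2/π) ≈ 0.8 benchmark and of the resulting inflation factor:

import numpy as np

# Mean of |Z| for Z ~ N(0, 1), the value S should average to if the t-statistic
# behaved as in the classical setting.
rng = np.random.default_rng(0)
print(np.abs(rng.normal(size=1_000_000)).mean())   # about 0.798
print(np.sqrt(2 / np.pi))                          # 0.7979...
print(4.59 / np.sqrt(2 / np.pi))                   # about 5.75, the inflation factor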

Interpretation: We’ve just shown that including variables that don’t belong in the model — especially random walks — can lead to completely invalid significance tests for the coefficients.

To make their simulations even clearer, Granger and Newbold (1974) ran a series of regressions using variables that follow either a random walk or an ARIMA(0,1,1) process.

Here is how they set up their simulations:

They regressed a dependent series 𝐘ₜ on m series 𝐗ⱼ,ₜ (with j = 1, 2, …, m), varying m from 1 to 5. The dependent series 𝐘ₜ and the independent series 𝐗ⱼ,ₜ follow the same types of processes, and they tested four cases:

  • Case 1 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow random walks.
  • Case 2 (Differences): They use the first differences of the random walks, which are stationary.
  • Case 3 (Levels): 𝐘ₜ and 𝐗ⱼ,ₜ follow ARIMA(0,1,1).
  • Case 4 (Differences): They use the first differences of the previous ARIMA(0,1,1) processes, which are stationary.

Each series has a length of 50 observations, and they ran 100 simulations for each case.

All error terms are distributed as 𝐍(0,1), and the ARIMA(0,1,1) series are derived as the sum of the random walk and independent white noise. The simulation results, based on 100 replications with series of length 50, are summarized in the next table.

Table 2: Regressions of a series on m independent ‘explanatory’ series.

Interpretation of the results:

  • It is seen that the probability of not rejecting the null hypothesis of no relationship between 𝐘ₜ and 𝐗ⱼ,ₜ becomes very small when m ≥ 3 and the regressions are run on random walk series in levels (rw-levels). The mean 𝐑² and the mean Durbin-Watson statistic also increase with m. Similar results are obtained when the regressions are run on ARIMA(0,1,1) series in levels (arima-levels).
  • When white noise series (rw-diffs) are used, classical regression analysis is valid since the error series will be white noise and least squares will be efficient.
  • However, when the regressions are run on the differences of the ARIMA(0,1,1) series (arima-diffs), i.e. on first-order moving average MA(1) processes, the null hypothesis is still rejected, on average,

(10 + 16 + 5 + 6 + 6) / 5 = 8.6

percent of the time, which is above the nominal 5% level.

If your variables are random walks or close to them, and you include unnecessary variables in your regression, you will often get fallacious results. High 𝐑² and low Durbin-Watson values do not confirm a true relationship but instead indicate a likely spurious one.

5. How to avoid spurious regression in time series

It’s really hard to come up with a complete list of ways to avoid spurious regressions. However, there are a few good practices you can follow to minimize the risk as much as possible.

If one performs a regression analysis with time series data and finds that the residuals are strongly autocorrelated, there is a serious problem when it comes to interpreting the coefficients of the equation. To check for autocorrelation in the residuals, one can use the Durbin-Watson test or the Portmanteau test.
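As an illustration (toy data, not the article’s), here is how both diagnostics can be computed with statsmodels on the residuals of a levels regression of two independent random walks:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

# Fit an OLS regression of one random walk on another and inspect the residuals.
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=50))
X = sm.add_constant(np.cumsum(rng.normal(size=50)))
model = sm.OLS(Y, X).fit()

# Durbin-Watson statistic: values far below 2 signal positive autocorrelation.
print(durbin_watson(model.resid))

# Portmanteau (Ljung-Box) test on the residuals, here up to lag 10.
print(acorr_ljungbox(model.resid, lags=[10]))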

Based on the study above, we can conclude that if a regression analysis performed with economic variables produces strongly autocorrelated residuals, meaning a low Durbin-Watson statistic, then the results of the analysis are likely to be spurious, whatever the value of the coefficient of determination R² observed.

In such cases, it is important to understand where the misspecification comes from. According to the literature, misspecification usually falls into three categories: (i) the omission of a relevant variable, (ii) the inclusion of an irrelevant variable, or (iii) autocorrelation of the errors. Most of the time, misspecification comes from a mix of these three sources.

To avoid spurious regression in a time series, several recommendations can be made:

  • The first recommendation is to select the right macroeconomic variables that are likely to explain the dependent variable. This can be done by reviewing the literature or consulting experts in the field.
  • The second recommendation is to stationarize the series by taking first differences. In most cases, the first differences of macroeconomic variables are stationary and still easy to interpret. For macroeconomic data, it is strongly recommended to difference the series once to reduce the autocorrelation of the residuals, especially when the sample size is small. Strong serial correlation is indeed often observed in these variables, and a simple calculation shows that the first differences will almost always have much smaller serial correlations than the original series.
  • The third recommendation is to use the Box-Jenkins methodology to model each macroeconomic variable individually and then search for relationships between the series by relating the residuals from each individual model. The idea is that the Box-Jenkins model extracts the explained part of each series, leaving residuals that contain only what cannot be explained by the series’ own past behavior, which makes it easier to check whether these unexplained parts are related across variables (see the sketch after this list).
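A minimal sketch of the last two recommendations (hypothetical series names; in practice y and x would be your macroeconomic data): work with first differences, or model each series with an ARIMA and relate the residuals.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder data: two independent random walks standing in for macro series.
rng = np.random.default_rng(42)
y = pd.Series(np.cumsum(rng.normal(size=200)))
x = pd.Series(np.cumsum(rng.normal(size=200)))

# Recommendation 2: regress on first differences instead of levels.
dy = y.diff().dropna()
dx = x.diff().dropna()
print(np.corrcoef(dy, dx)[0, 1])   # relationship check on the differenced data

# Recommendation 3 (Box-Jenkins style): model each series individually and
# relate the residuals, which contain only the unexplained part of each series.
res_y = ARIMA(y, order=(0, 1, 1)).fit().resid
res_x = ARIMA(x, order=(0, 1, 1)).fit().resid
print(np.corrcoef(res_y.iloc[1:], res_x.iloc[1:])[0, 1])   # first check of a link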

6. Conclusion

Many econometrics textbooks warn about specification errors in regression models, but the problem still shows up in many published papers. Granger and Newbold (1974) highlighted the risk of spurious regressions, where a high R² is paired with a very low Durbin-Watson statistic.

Using Python simulations, we showed some of the main causes of these spurious regressions, especially including variables that don’t belong in the model and are highly autocorrelated. We also demonstrated how these issues can completely distort hypothesis tests on the coefficients.

Hopefully, this post will help reduce the risk of spurious regressions in future econometric analyses.

7. Appendix: Python code for the simulations.

##################################################### Simulation Code for Table 1 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

np.random.seed(123)
M = 100 
n = 50
S = np.zeros(M)
for i in range(M):
#---------------------------------------------------------------
# Generate the data
#---------------------------------------------------------------
    epsilon_y = np.random.normal(0, 1, n)
    epsilon_x = np.random.normal(0, 1, n)

    Y = np.cumsum(epsilon_y)
    X = np.cumsum(epsilon_x)
#---------------------------------------------------------------
# Fit the model
#---------------------------------------------------------------
    X = sm.add_constant(X)
    model = sm.OLS(Y, X).fit()
#---------------------------------------------------------------
# Compute the statistic
#------------------------------------------------------
    S[i] = np.abs(model.params[1])/model.bse[1]


#------------------------------------------------------ 
#              Maximum value of S
#------------------------------------------------------
S_max = int(np.ceil(max(S)))

#------------------------------------------------------ 
#                Create bins
#------------------------------------------------------
bins = np.arange(0, S_max + 2, 1)  

#------------------------------------------------------
#    Compute the histogram
#------------------------------------------------------
frequency, bin_edges = np.histogram(S, bins=bins)

#------------------------------------------------------
#    Create a dataframe
#------------------------------------------------------

df = pd.DataFrame({
    "S Interval": [f"{int(bin_edges[i])}-{int(bin_edges[i+1])}" for i in range(len(bin_edges)-1)],
    "Frequency": frequency
})
print(df)
print(np.mean(S))

##################################################### Simulation Code for Table 2 #####################################################

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from tabulate import tabulate

np.random.seed(1)  # Make the results reproducible

#------------------------------------------------------
# Definition of functions
#------------------------------------------------------

def generate_random_walk(T):
    """
    Generate a series of length T following a random walk:
        Y_t = Y_{t-1} + e_t,
    where e_t ~ N(0,1).
    """
    e = np.random.normal(0, 1, size=T)
    return np.cumsum(e)

def generate_arima_0_1_1(T):
    """
    Generate an ARIMA(0,1,1) series following Granger & Newbold's method:
    the series is obtained by adding a random walk and an independent white noise.
    """
    rw = generate_random_walk(T)
    wn = np.random.normal(0, 1, size=T)
    return rw + wn

def difference(series):
    """
    Compute the first difference of a one-dimensional series.
    Returns a series of length T-1.
    """
    return np.diff(series)

#------------------------------------------------------
# Parameters
#------------------------------------------------------

T = 50           # length of each series
n_sims = 100     # number of Monte Carlo simulations
alpha = 0.05     # significance level

#------------------------------------------------------
# Definition of function for simulation
#------------------------------------------------------

def run_simulation_case(case_name, m_values=[1,2,3,4,5]):
    """
    case_name : an identifier for the data-generating process:
        - 'rw-levels'    : random walks (levels)
        - 'rw-diffs'     : differences of random walks (white noise)
        - 'arima-levels' : ARIMA(0,1,1) in levels
        - 'arima-diffs'  : differences of an ARIMA(0,1,1) => MA(1)
    
    m_values : list of the numbers of regressors.
    
    Returns a DataFrame with, for each m:
        - % of rejections of H0
        - mean Durbin-Watson statistic
        - mean adjusted R^2
        - % of adjusted R^2 > 0.7
    """
    results = []
    
    for m in m_values:
        count_reject = 0
        dw_list = []
        r2_adjusted_list = []
        
        for _ in range(n_sims):
#--------------------------------------
# 1) Generate independent series Y_t and X_{j,t}
#----------------------------------------
            if case_name == 'rw-levels':
                Y = generate_random_walk(T)
                Xs = [generate_random_walk(T) for __ in range(m)]
            
            elif case_name == 'rw-diffs':
                # Y and the X's are differences of random walks, i.e. ~ white noise
                Y_rw = generate_random_walk(T)
                Y = difference(Y_rw)
                Xs = []
                for __ in range(m):
                    X_rw = generate_random_walk(T)
                    Xs.append(difference(X_rw))
                # NB: Y and Xs now have length T-1
                # => the effective sample size is T-1
                # => the regression below uses these T-1 points
            
            elif case_name == 'arima-levels':
                Y = generate_arima_0_1_1(T)
                Xs = [generate_arima_0_1_1(T) for __ in range(m)]
            
            elif case_name == 'arima-diffs':
                # Differences of an ARIMA(0,1,1) => MA(1)
                Y_arima = generate_arima_0_1_1(T)
                Y = difference(Y_arima)
                Xs = []
                for __ in range(m):
                    X_arima = generate_arima_0_1_1(T)
                    Xs.append(difference(X_arima))
            
            # 2) Prepare the data for the regression
            #    Depending on the case, the series length is T or T-1
            if case_name in ['rw-levels','arima-levels']:
                Y_reg = Y
                X_reg = np.column_stack(Xs) if m>0 else np.array([])
            else:
                # in the differenced cases, the length is T-1
                Y_reg = Y
                X_reg = np.column_stack(Xs) if m>0 else np.array([])
            
            # 3) OLS regression
            X_with_const = sm.add_constant(X_reg)  # add the intercept
            model = sm.OLS(Y_reg, X_with_const).fit()
            
            # 4) Global F-test, H0: all beta_j = 0
            #    Reject when the p-value is below alpha
            #    (NB: the comparison and the bookkeeping below were lost in the
            #    original page and are reconstructed from the surrounding code)
            if model.f_pvalue is not None and model.f_pvalue < alpha:
                count_reject += 1
            
            # 5) Diagnostics: Durbin-Watson statistic and adjusted R^2
            dw_list.append(durbin_watson(model.resid))
            r2_adjusted_list.append(model.rsquared_adj)
        
        reject_percent = 100.0 * count_reject / n_sims
        dw_mean = np.mean(dw_list)
        r2_mean = np.mean(r2_adjusted_list)
        r2_above_0_7_percent = 100.0 * np.mean(np.array(r2_adjusted_list) > 0.7)
        
        results.append({
            'm': m,
            'Reject %': reject_percent,
            'Mean DW': dw_mean,
            'Mean R^2': r2_mean,
            '% R^2_adj>0.7': r2_above_0_7_percent
        })
    
    return pd.DataFrame(results)
    
#------------------------------------------------------
# Application of the simulation
#------------------------------------------------------       

cases = ['rw-levels', 'rw-diffs', 'arima-levels', 'arima-diffs']
all_results = {}

for c in cases:
    df_res = run_simulation_case(c, m_values=[1,2,3,4,5])
    all_results[c] = df_res

#------------------------------------------------------
# Store data in table
#------------------------------------------------------

for case, df_res in all_results.items():
    print(f"\n\n{case}")
    print(tabulate(df_res, headers='keys', tablefmt='fancy_grid'))

References

  • Granger, Clive WJ, and Paul Newbold. 1974. “Spurious Regressions in Econometrics.” Journal of Econometrics 2 (2): 111–20.
  • Knowles, EAG. 1954. “Exercises in Theoretical Statistics.” Oxford University Press.

Read More »

Here’s what Oracle’s soaring infrastructure spend could mean for enterprises

He said he had earlier told analysts in a separate call that margins for AI workloads in these data centers would be in the 30% to 40% range over the life of a customer contract. Kehring reassured that there would be demand for the data centers when they were completed, pointing to Oracle’s increasing remaining performance obligations, or services contracted but not yet delivered, up $68 billion on the previous quarter, saying that Oracle has been seeing unprecedented demand for AI workloads driven by the likes of Meta and Nvidia. Rising debt and margin risks raise flags for CIOs For analysts, though, the swelling debt load is hard to dismiss, even with Oracle’s attempts to de-risk its spend and squeeze more efficiency out of its buildouts. Gogia sees Oracle already under pressure, with the financial ecosystem around the company pricing the risk — one of the largest debts in corporate history, crossing $100 billion even before the capex spend this quarter — evident in the rising cost of insuring the debt and the shift in credit outlook. “The combination of heavy capex, negative free cash flow, increasing financing cost and long-dated revenue commitments forms a structural pressure that will invariably finds its way into the commercial posture of the vendor,” Gogia said, hinting at an “eventual” increase in pricing of the company’s offerings. He was equally unconvinced by Magouyrk’s assurances about the margin profile of AI workloads as he believes that AI infrastructure, particularly GPU-heavy clusters, delivers significantly lower margins in the early years because utilisation takes time to ramp.

Read More »

New Nvidia software gives data centers deeper visibility into GPU thermals and reliability

Addressing the challenge Modern AI accelerators now draw more than 700W per GPU, and multi-GPU nodes can reach 6kW, creating concentrated heat zones, rapid power swings, and a higher risk of interconnect degradation in dense racks, according to Manish Rawat, semiconductor analyst at TechInsights. Traditional cooling methods and static power planning increasingly struggle to keep pace with these loads. “Rich vendor telemetry covering real-time power draw, bandwidth behavior, interconnect health, and airflow patterns shifts operators from reactive monitoring to proactive design,” Rawat said. “It enables thermally aware workload placement, faster adoption of liquid or hybrid cooling, and smarter network layouts that reduce heat-dense traffic clusters.” Rawat added that the software’s fleet-level configuration insights can also help operators catch silent errors caused by mismatched firmware or driver versions. This can improve training reproducibility and strengthen overall fleet stability. “Real-time error and interconnect health data also significantly accelerates root-cause analysis, reducing MTTR and minimizing cluster fragmentation,” Rawat said. These operational pressures can shape budget decisions and infrastructure strategy at the enterprise level.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »