One-Tailed Vs. Two-Tailed Tests

Introduction

If you’ve ever analyzed data using built-in t-test functions, such as those in R or SciPy, here’s a question for you: have you ever adjusted the default setting for the alternative hypothesis? If your answer is no—or if you’re not even sure what this means—then this blog post is for you!

The alternative hypothesis parameter, commonly referred to as “one-tailed” versus “two-tailed” in statistics, defines the expected direction of the difference between the control and treatment groups. In a two-tailed test, we assess whether there is any difference in mean values between the groups, without specifying a direction. A one-tailed test, on the other hand, posits a specific direction: that the treatment group’s mean is either greater than or less than that of the control group.
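
To make this concrete, here is a minimal sketch of how the alternative is specified in SciPy’s independent-samples t-test (the group data and effect size below are made up for illustration; R’s t.test exposes the same choice through its alternative argument):

```python
import numpy as np
from scipy import stats

# Simulated example data (illustrative only): a metric for control vs. treatment
rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=500)
treatment = rng.normal(loc=10.3, scale=2.0, size=500)

# Default: two-tailed test, i.e. any difference in either direction
t_stat, p_two_sided = stats.ttest_ind(treatment, control, alternative="two-sided")

# One-tailed (right-tailed) test: treatment mean is greater than control mean
_, p_one_sided = stats.ttest_ind(treatment, control, alternative="greater")

print(f"two-sided p-value: {p_two_sided:.4f}")
# When the t statistic is positive, the one-sided "greater" p-value is half the two-sided one
print(f"one-sided p-value:  {p_one_sided:.4f}")
```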

Choosing between one- and two-tailed hypotheses might seem like a minor detail, but it affects every stage of A/B testing: from test planning to data analysis and results interpretation. This article builds the theoretical foundation for why the direction of the hypothesis matters and explores the pros and cons of each approach.

One-tailed vs. two-tailed hypothesis testing: Understanding the difference

To understand the importance of choosing between one-tailed and two-tailed hypotheses, let’s briefly review the basics of the t-test, a commonly used method in A/B testing. Like other hypothesis testing methods, the t-test begins with a conservative assumption: there is no difference between the two groups (the null hypothesis). Only if we find strong evidence against this assumption can we reject the null hypothesis and conclude that the treatment has had an effect.

But what qualifies as “strong evidence”? To answer this, a rejection region is defined under the null hypothesis: any result that falls within this region is considered so unlikely that we take it as evidence against the plausibility of the null hypothesis. The size of this rejection region is set by a predetermined probability, known as alpha (α), which represents the likelihood of incorrectly rejecting the null hypothesis (a Type I error).

What does this have to do with the direction of the alternative hypothesis? Quite a bit, actually. While the alpha level determines the size of the rejection region, the alternative hypothesis dictates its placement. In a one-tailed test, where we hypothesize a specific direction of difference, the rejection region is situated in only one tail of the distribution. For a hypothesized positive effect (e.g., that the treatment group mean is higher than the control group mean), the rejection region lies in the right tail, creating a right-tailed test. Conversely, if we hypothesize a negative effect (e.g., that the treatment group mean is less than the control group mean), the rejection region is placed in the left tail, resulting in a left-tailed test.

In contrast, a two-tailed test allows for the detection of a difference in either direction, so the rejection region is split between both tails of the distribution. This accommodates the possibility of observing extreme values in either direction, whether the effect is positive or negative.
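
As a quick numerical illustration (a sketch assuming the test statistic is approximately standard normal, with alpha set at 5%), the cutoffs that define each rejection region can be computed directly:

```python
from scipy import stats

alpha = 0.05

# Right-tailed test: reject H0 if the statistic exceeds the upper cutoff
right_cutoff = stats.norm.ppf(1 - alpha)           # about +1.645

# Left-tailed test: reject H0 if the statistic falls below the lower cutoff
left_cutoff = stats.norm.ppf(alpha)                # about -1.645

# Two-tailed test: alpha is split between the tails, pushing the cutoff outward
two_tailed_cutoff = stats.norm.ppf(1 - alpha / 2)  # about 1.960; reject if |statistic| exceeds this

print(right_cutoff, left_cutoff, two_tailed_cutoff)
```

Because alpha is split across both tails in the two-tailed case, the bar for significance in any single direction is higher (about 1.96 instead of 1.645), which is exactly what drives the power difference discussed below.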

To build intuition, let’s visualize how the rejection regions appear under the different hypotheses. Recall that according to the null hypothesis, the difference between the two groups should be centered around zero. Thanks to the central limit theorem, we also know this distribution approximates a normal distribution. Consequently, the rejection regions corresponding to the different alternative hypotheses look like this:

Why does it make a difference?

The choice of direction for the alternative hypothesis impacts the entire A/B testing process, starting with the planning phase—specifically, in determining the sample size. Sample size is calculated based on the desired power of the test, which is the probability of detecting a true difference between the two groups when one exists. To compute power, we examine the area under the alternative hypothesis that corresponds to the rejection region (since power reflects the ability to reject the null hypothesis when the alternative hypothesis is true).

Since the direction of the hypothesis affects the size of this rejection region, power is generally lower for a two-tailed hypothesis at the same alpha level and sample size. This is because the rejection region is split across both tails, making it harder to detect an effect in any single direction. The following graph illustrates the comparison between the two types of hypotheses. Note that the purple area is larger for the one-tailed hypothesis than for the two-tailed hypothesis:
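
A rough numeric counterpart (a sketch using the statsmodels power calculator; the effect size, sample size, and alpha below are illustrative assumptions, not values from this article):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.2    # Cohen's d (assumed for illustration)
n_per_group = 200
alpha = 0.05

power_two_tailed = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                                  alpha=alpha, alternative="two-sided")
power_one_tailed = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                                  alpha=alpha, alternative="larger")

# For the same alpha and sample size, the one-tailed test has higher power
print(f"two-tailed power: {power_two_tailed:.3f}")
print(f"one-tailed power: {power_one_tailed:.3f}")
```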

In practice, to maintain the desired power level, we compensate for the reduced power of a two-tailed hypothesis by increasing the sample size (increasing the sample size raises power, though the mechanics of this deserve a separate article). Thus, the choice between one- and two-tailed hypotheses directly influences the required sample size for your test.

Beyond the planning phase, the choice of alternative hypothesis directly impacts the analysis and interpretation of results. There are cases where a test may reach significance with a one-tailed approach but not with a two-tailed one, and vice versa. Reviewing the previous graph can help illustrate this: for example, a result in the left tail might be significant under a two-tailed hypothesis but not under a right one-tailed hypothesis. Conversely, certain results might fall within the rejection region of a right one-tailed test but lie outside the rejection area in a two-tailed test.
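
To see how the same data can be significant under one alternative but not the other, consider this small sketch (the t statistic, degrees of freedom, and alpha are made-up illustrative values): for a positive t statistic, the two-sided p-value is roughly twice the right-tailed one, so any result whose one-sided p-value falls between α/2 and α is significant only under the one-tailed test.

```python
from scipy import stats

t_stat, df, alpha = 1.80, 1000, 0.05   # illustrative values

p_one_tailed = stats.t.sf(t_stat, df)            # right-tailed p-value
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)   # two-sided p-value

print(f"one-tailed p = {p_one_tailed:.3f} -> significant: {p_one_tailed < alpha}")  # ~0.036, significant
print(f"two-tailed p = {p_two_tailed:.3f} -> significant: {p_two_tailed < alpha}")  # ~0.072, not significant
```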

How to decide between a one-tailed and two-tailed hypothesis

Let’s start with the bottom line: there’s no absolute right or wrong choice here. Both approaches are valid, and the primary consideration should be your specific business needs. To help you decide which option best suits your company, we’ll outline the key pros and cons of each.

At first glance, a one-tailed alternative may appear to be the clear choice, as it often aligns better with business objectives. In industry applications, the focus is typically on improving specific metrics rather than exploring a treatment’s impact in both directions. This is especially relevant in A/B testing, where the goal is often to optimize conversion rates or enhance revenue. If the treatment doesn’t lead to a significant improvement, the change under examination won’t be implemented.

Beyond this conceptual advantage, we have already mentioned one key benefit of a one-tailed hypothesis: it requires a smaller sample size. Thus, choosing a one-tailed alternative can save both time and resources. To illustrate this advantage, the following graphs show the required sample sizes for one- and two-tailed hypotheses with different power levels (alpha is set at 5%).
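
A rough numeric counterpart to those graphs (a sketch using statsmodels’ sample-size solver; the effect size is an illustrative assumption):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size, alpha = 0.2, 0.05   # assumed effect size (Cohen's d); alpha at 5%

for power in (0.8, 0.9):
    n_two = analysis.solve_power(effect_size=effect_size, power=power,
                                 alpha=alpha, alternative="two-sided")
    n_one = analysis.solve_power(effect_size=effect_size, power=power,
                                 alpha=alpha, alternative="larger")
    # The two-tailed design consistently requires the larger sample per group
    print(f"power={power:.0%}: two-tailed n per group ~{n_two:.0f}, one-tailed ~{n_one:.0f}")
```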

In this context, the decision between one- and two-tailed hypotheses becomes particularly important in sequential testing—a method that allows for ongoing data analysis without inflating the alpha level. Here, selecting a one-tailed test can significantly reduce the duration of the test, enabling faster decision-making, which is especially valuable in dynamic business environments where prompt responses are essential.

However, don’t be too quick to dismiss the two-tailed hypothesis! It has its own advantages. In some business contexts, the ability to detect “negative significant results” is a major benefit. As one client once shared, he preferred negative significant results over inconclusive ones because they offer valuable learning opportunities. Even if the outcome wasn’t as expected, he could conclude that the treatment had a negative effect and gain insights into the product.

Another benefit of two-tailed tests is their straightforward interpretation using confidence intervals (CIs). In two-tailed tests, a CI that doesn’t include zero directly indicates significance, making it easier for practitioners to interpret results at a glance. This clarity is particularly appealing since CIs are widely used in A/B testing platforms. Conversely, with one-tailed tests, a significant result might still include zero in the CI, potentially leading to confusion or mistrust in the findings. Although one-sided confidence intervals can be employed with one-tailed tests, this practice is less common.
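
A small sketch of this mismatch, using a normal approximation and made-up summary statistics (an observed difference of 0.30 with a standard error of 0.17):

```python
from scipy import stats

diff, se, alpha = 0.30, 0.17, 0.05   # illustrative observed difference and its standard error

# Two-sided 95% CI: significance in a two-tailed test <=> the interval excludes zero
z_two = stats.norm.ppf(1 - alpha / 2)
ci_two_sided = (diff - z_two * se, diff + z_two * se)

# One-sided 95% lower bound (the interval matching a right-tailed test): significance <=> bound > 0
z_one = stats.norm.ppf(1 - alpha)
lower_bound = diff - z_one * se

print(f"two-sided CI: ({ci_two_sided[0]:.3f}, {ci_two_sided[1]:.3f})")  # includes zero
print(f"one-sided lower bound: {lower_bound:.3f}")                      # above zero
```

Here the right-tailed test is significant (the one-sided bound sits above zero) even though the conventional two-sided interval still contains zero, which is exactly the situation that can confuse stakeholders.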

Conclusions

By adjusting a single parameter, you can significantly impact your A/B testing: specifically, the sample size you need to collect and the interpretation of the results. When deciding between one- and two-tailed hypotheses, consider factors such as the available sample size, the advantages of detecting negative effects, and the convenience of aligning confidence intervals (CIs) with hypothesis testing. Ultimately, this decision should be made thoughtfully, taking into account what best fits your business needs.

(Note: all the images in this post were created by the author)
