
It’s here: OpenAI’s o3-mini advanced reasoning model arrives to counter DeepSeek’s rise

OpenAI has released a new proprietary AI model in time to counter the rapid rise of open source rival DeepSeek R1 — but will it be enough to blunt the latter’s success?

Today, after several days of rumors and mounting anticipation among AI users on social media, OpenAI is debuting o3-mini, the second model in its new family of “reasoners”: AI models that take slightly more time to “think,” analyzing their own processes and reflecting on their own “chains of thought” before responding to user queries and inputs.

The result is a model that can perform at the level of a PhD student, or even a PhD holder, when answering hard questions in math, science, engineering and many other fields.

The o3-mini model is now available in ChatGPT, including the free tier, and through OpenAI’s application programming interface (API). It is also less expensive, faster and more performant than the previous high-end models: OpenAI’s o1 and its faster, lower-parameter-count sibling, o1-mini.

While o3-mini will inevitably be compared to DeepSeek R1, and its release date read as a reaction, it’s important to remember that o3 and o3-mini were announced in December 2024, well before the January release of DeepSeek R1 — and that OpenAI CEO Sam Altman had previously stated on X that, due to feedback from developers and researchers, o3-mini would come to ChatGPT and the OpenAI API at the same time.

Unlike DeepSeek R1, o3-mini will not be made available as an open source model — meaning the model cannot be downloaded for offline use, nor customized to the same extent, which may limit its appeal for some applications compared to DeepSeek R1.

OpenAI did not provide any further details about the (presumed) larger o3 model announced back in December alongside o3-mini. At that time, OpenAI’s opt-in dropdown form for testing o3 stated that it would undergo a “delay of multiple weeks” before third parties could test it.

Performance and Features

Similar to o1, OpenAI o3-mini is optimized for reasoning in math, coding, and science.

Its performance is comparable to OpenAI o1’s when using medium reasoning effort, but it offers the following advantages:

  • 24% faster response times compared to o1-mini. (OpenAI didn’t provide a specific figure here, but third-party evaluation group Artificial Analysis clocked o1-mini at 12.8 seconds to receive and output 100 tokens; a 24% speed bump would bring that down to roughly 10.3 seconds, i.e. 12.8 ÷ 1.24.)
  • Improved accuracy, with external testers preferring o3-mini’s responses 56% of the time.
  • 39% fewer major errors on complex real-world questions.
  • Better performance in coding and STEM tasks, particularly when using high reasoning effort.
  • Three reasoning effort levels (low, medium, and high), allowing users and developers to balance accuracy and speed.

It also boasts impressive benchmarks, even outpacing o1 in some cases, according to the o3-mini System Card OpenAI released online (which was published ahead of the official model availability announcement).

o3-mini’s context window — the number of combined tokens it can input and output in a single interaction — is 200,000 tokens, with a maximum of 100,000 tokens in each output. That matches the full o1 model and exceeds DeepSeek R1’s context window of around 128,000 to 130,000 tokens. But it is far below Google Gemini 2.0 Flash Thinking’s new context window of up to 1 million tokens.

While o3-mini focuses on reasoning capabilities, it doesn’t have vision capabilities yet. Developers and users looking to upload images and files should keep using o1 in the meantime.

The competition heats up

The arrival of o3-mini marks the first time OpenAI is making a reasoning model available to free ChatGPT users. The prior o1 model family was only available to paying subscribers of the ChatGPT Plus, Pro and other plans, as well as via OpenAI’s paid API.

As it did with large language model (LLM)-powered chatbots via the launch of ChatGPT in November 2022, OpenAI essentially created the entire category of reasoning models back in September 2024 when it first unveiled o1, a new class of models with a new training regime and architecture.

But OpenAI, in keeping with its recent history, did not make o1 open source, contrary to its name and original founding mission. Instead, it kept the model’s code proprietary.

And over the last two weeks, o1 has been overshadowed by Chinese AI startup DeepSeek, which launched R1, a rival, highly efficient, largely open-source reasoning model that anyone around the world can freely take, retrain and customize, or use for free on DeepSeek’s website and mobile app — and one reportedly trained at a fraction of the cost of o1 and other LLMs from top labs.

DeepSeek R1’s permissive MIT licensing terms, free consumer app and website, and the decision to make its codebase freely available to take and modify have led to a veritable explosion of usage in both the consumer and enterprise markets, with even OpenAI investor Microsoft and Anthropic backer Amazon rushing to add variants of it to their cloud marketplaces. Perplexity, the AI search company, also quickly added a variant for its users.

DeepSeek also dethroned the ChatGPT iOS app from the number one spot in the U.S. Apple App Store, and notably outpaced OpenAI by connecting its R1 model to web search in its app and on the web, something OpenAI has not yet done for o1. That has fueled further anxiety among tech workers and others online that China is catching up to, or has even outpaced, the U.S. in AI innovation — and in technology more generally.

Many AI researchers, scientists and top VCs such as Marc Andreessen, however, have welcomed the rise of DeepSeek, and its open sourcing in particular, as a tide that lifts all boats in the AI field, increasing the intelligence available to everyone while reducing costs.

Availability in ChatGPT

The model is now rolling out globally to Free, Plus, Team, and Pro users, with Enterprise and Education access coming next week.

  • Free users can try o3-mini for the first time by selecting the “Reason” button in the chat bar or regenerating a response.
  • Message limits have increased 3X for Plus and Team users, up from 50 to 150 messages per day.
  • Pro users get unlimited access to both o3-mini and a new, even higher-reasoning variant, o3-mini-high.

Additionally, o3-mini now supports search integration within ChatGPT, providing responses with relevant web links. This feature is still in its early stages as OpenAI refines search capabilities across its reasoning models.

API Integration and Pricing

For developers, o3-mini is available via the Chat Completions API, Assistants API, and Batch API. The model supports function calling, Structured Outputs, and developer messages, making it easy to integrate into real-world applications.
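For illustration, here is a minimal sketch of what such an integration might look like using the official `openai` Python SDK. It is a sketch under assumptions, not code from OpenAI: the `o3-mini` model identifier, the `developer` message role and the function-calling `tools` format follow OpenAI’s documented Chat Completions conventions, and the tool itself is hypothetical.

```python
# Hypothetical sketch: calling o3-mini via the Chat Completions API with a
# developer message and a function tool. Assumes openai>=1.x is installed and
# OPENAI_API_KEY is set; the tool name and schema are illustrative only.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_unit_conversion",  # hypothetical tool for this example
            "description": "Convert a value between metric and imperial units.",
            "parameters": {
                "type": "object",
                "properties": {
                    "value": {"type": "number"},
                    "from_unit": {"type": "string"},
                    "to_unit": {"type": "string"},
                },
                "required": ["value", "from_unit", "to_unit"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {"role": "developer", "content": "You are a precise engineering assistant."},
        {"role": "user", "content": "How many meters is 26.2 miles?"},
    ],
    tools=tools,
)
print(response.choices[0].message)
```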

One of o3-mini’s most notable advantages is its cost efficiency: It’s 63% cheaper than OpenAI o1-mini and 93% cheaper than the full o1 model, priced at $1.10/$4.40 per million tokens in/out (with a 50% cache discount).

Yet that still pales in comparison to the affordability of the official DeepSeek API, which offers R1 at $0.14/$0.55 per million tokens in/out. But given that DeepSeek is based in China, with the attendant geopolitical and security concerns about user and enterprise data flowing into and out of the model, OpenAI will likely remain the preferred API for some security-focused customers and enterprises in the U.S. and Europe.
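As a rough illustration of what those published rates imply in practice, here is a small back-of-the-envelope calculator using only the per-million-token prices cited above; cache discounts and other billing factors are ignored, and actual costs may differ.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# cited in this article (USD): o3-mini $1.10 in / $4.40 out;
# DeepSeek R1 (official API) $0.14 in / $0.55 out.
PRICES = {
    "o3-mini": (1.10, 4.40),
    "deepseek-r1": (0.14, 0.55),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a request with 2,000 input tokens and 8,000 reasoning/output tokens.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 2_000, 8_000):.4f}")
```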

Developers can also adjust the reasoning effort level (low, medium, high) based on their application needs, allowing for more control over latency and accuracy trade-offs.
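In the API, this trade-off is exposed as a request parameter. A minimal sketch of sweeping the three levels with the `openai` Python SDK might look like the following, assuming the `reasoning_effort` parameter name matches OpenAI’s Chat Completions documentation; the prompt and timing code are purely illustrative.

```python
# Hypothetical sketch: comparing o3-mini responses across reasoning effort levels.
# Assumes openai>=1.x and OPENAI_API_KEY; timings here are illustrative only.
import time

from openai import OpenAI

client = OpenAI()
prompt = "A train leaves at 3:40 pm averaging 72 km/h. How far has it gone by 5:10 pm?"

for effort in ("low", "medium", "high"):
    start = time.perf_counter()
    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort=effort,  # lower effort favors speed, higher favors accuracy
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    print(f"{effort:>6}: {elapsed:.1f}s -> {response.choices[0].message.content[:80]!r}")
```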

On safety, OpenAI says it used something called “deliberative alignment” with o3-mini. This means the model was asked to reason about the human-authored safety guidelines it was given, understand more of their intent and the harms they are designed to prevent, and come up with its own ways of ensuring those harms are prevented. OpenAI says this approach allows the model to be less censorious when discussing sensitive topics while also preserving safety.

OpenAI says the model outperforms GPT-4o in handling safety and jailbreak challenges, and that it conducted extensive external safety testing prior to release today.

A recent report covered in Wired (where my wife works) showed that DeepSeek succumbed to all 50 jailbreak prompts tested by security researchers, which may give OpenAI o3-mini the edge over DeepSeek R1 in cases where security and safety are paramount.

What’s next?

The launch of o3-mini represents OpenAI’s broader effort to make advanced reasoning AI more accessible and cost-effective in the face of more intense competition than ever before from DeepSeek’s R1 and others, such as Google, which recently released a free version of its own rival reasoning model, Gemini 2.0 Flash Thinking, with an expanded input context of up to 1 million tokens.

With its focus on STEM reasoning and affordability, OpenAI aims to expand the reach of AI-driven problem-solving in both consumer and developer applications.

But as the company becomes more ambitious than ever in its aims — recently announcing a $500 billion data center infrastructure project called Stargate with backing from SoftBank — the question remains whether its strategy will pay off well enough to justify the many billions sunk into it by deep-pocketed investors such as Microsoft and other VCs.

As open source models increasingly close the gap with OpenAI in performance and outmatch it in cost, will its reportedly superior safety measures, powerful capabilities, easy-to-use API and user-friendly interfaces be enough to retain customers — especially in the enterprise — who may prioritize cost and efficiency over those attributes? We’ll be reporting on the developments as they unfold.
