Future-proofing business capabilities with AI technologies

In collaboration with Cloudera and AWS

Artificial intelligence has always promised speed, efficiency, and new ways of solving problems. But what’s changed in the past few years is how quickly those promises are becoming reality. From oil and gas to retail, logistics to law, AI is no longer confined to pilot projects or speculative labs. It is being deployed in critical workflows, reducing processes that once took hours to just minutes, and freeing up employees to focus on higher-value work.

“Business process automation has been around a long while. What GenAI and AI agents are allowing us to do is really give superpowers, so to speak, to business process automation,” says Manasi Vartak, chief AI architect at Cloudera.

Much of the momentum is being driven by two related forces: the rise of AI agents and the rapid democratization of AI tools. AI agents, whether designed for automation or assistance, are proving especially powerful at speeding up response times and removing friction from complex workflows. Instead of waiting on humans to interpret a claim form, read a contract, or process a delivery driver’s query, AI agents can now do it in seconds, and at scale. 

At the same time, advances in usability are putting AI into the hands of nontechnical staff, making it easier for employees across various functions to experiment with, adopt, and adapt these tools for their own needs.

That doesn’t mean the road is without obstacles. Concerns about privacy, security, and the accuracy of LLMs remain pressing. Enterprises are also grappling with the realities of cost management, data quality, and how to build AI systems that are sustainable over the long term. And as companies explore what comes next—including autonomous agents, domain-specific models, and even steps toward artificial general intelligence—questions about trust, governance, and responsible deployment loom large.

“Your leadership is especially critical in making sure that your business has an AI strategy that addresses both the opportunity and the risk while giving the workforce some ability to upskill such that there’s a path to become fluent with these AI tools,” says Eddie Kim, principal advisor of AI and modern data strategy at Amazon Web Services.

Still, the case studies are compelling. A global energy company cutting threat detection times from over an hour to just seven minutes. A Fortune 100 legal team saving millions by automating contract reviews. A humanitarian aid group harnessing AI to respond faster to crises. Long gone are the days of incremental steps forward. These examples illustrate that when data, infrastructure, and AI expertise come together, the impact is transformative. 

The future of enterprise AI will be defined by how effectively organizations can marry innovation with scale, security, and strategy. That’s where the real race is happening.

Watch the webcast now.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

IBM unveils advanced quantum computer in Spain

IBM executives and officials from the Basque Government and regional councils in front of Europe’s first IBM Quantum System Two, located at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. The Basque Government and IBM unveil the first IBM Quantum System Two in Europe at the IBM-Euskadi Quantum Computational

Read More »

Public disclosures of AI risk surge among S&P 500 companies

More than seven of every 10 public companies on the S&P 500 now flag their use of artificial intelligence as a material risk in their public disclosures, according to a report released Friday by The Conference Board. That figure represents a sharp increase from just 12% in 2023, reflecting the rapid adoption of AI among major enterprises. “This is a powerful reflection of how quickly AI has developed from a niche topic to widely adopted and embedded in the organization,” Andrew Jones, a principal researcher at the Conference Board Governance & Sustainability Center, told Cybersecurity Dive via email. AI has moved beyond the experimentation phase at major enterprises and is embedded across core business systems, including product design, logistics, credit modeling, and customer interfaces, Jones said. The report shows that corporate boards and C-suite leaders are addressing a range of risk factors in connection with AI deployment. Reputational risk is the most widely disclosed issue, at 38%, according to the report. This reflects the potential impact of losing trust in a brand in the case of a service breakdown, mishandling of consumer privacy, or a customer-facing tool that fails to deliver. Cybersecurity risk is cited by 20% of firms. AI increases the attack surface, and companies are also at risk from third-party applications. Legal and regulatory risks are also a major issue, as state and federal governments have rapidly attempted to set up security guardrails to protect the public while providing enough support for companies to continue innovating. While AI deployment is rapidly evolving in the enterprise, corporate leaders are still struggling to fully develop the governance structures to manage its use. The PwC “2025 Annual Corporate Director’s Survey” shows only 35% of corporate boards have formally integrated AI into their oversight responsibilities, an indication that companies are still

Read More »

Oxy CEO Sees Tight Oil Price Range Through 2026

Occidental Petroleum Corp. Chief Executive Officer Vicki Hollub sees oil pricing between $58 and $62 a barrel through 2026, she said Tuesday at the Energy Intelligence Forum in London. Beyond that, prices should rise, Hollub said during a session that focused on global crude benchmark Brent. Hollub said she is “very bullish on oil prices, not this year or next, but I’m bullish on oil prices.” Separately, the CEO said that as part of its long-term plan, the Houston-based firm can more than double its share price in about five years, assuming multiples stay the same, mostly by converting more debt to equity. The company “doesn’t need to do any more acquisitions,” she added. US oil supply is likely to peak between 2027 and 2030, Hollub told the audience.

Read More »

USA EIA Raises USA Oil Production Forecasts

The U.S. Energy Information Administration (EIA) raised its U.S. crude oil production forecast for 2025 and 2026 in its latest short term energy outlook (STEO), which was released on October 7. According to this STEO, the EIA now sees U.S. crude oil production, including lease condensate, averaging 13.53 million barrels per day in 2025 and 13.51 million barrels per day in 2026. In its previous STEO, which was released in September, the EIA projected that U.S. crude oil production, including lease condensate, would average 13.44 million barrels per day this year and 13.30 million barrels per day next year. The EIA’s October STEO sees U.S. crude oil output coming in at 13.66 million barrels per day in the fourth quarter of 2025, 13.62 million barrels per day in the first quarter of next year, 13.53 million barrels per day in the second quarter, 13.40 million barrels per day in the third quarter, and 13.48 million barrels per day in the fourth quarter. In its September STEO, the EIA projected that U.S. crude oil production would average 13.51 million barrels per day in the fourth quarter of this year, 13.45 million barrels per day in the first quarter of next year, 13.39 million barrels per day in the second quarter, 13.20 million barrels per day in the third quarter, and 13.17 million barrels per day in the fourth quarter. The EIA’s latest STEO projected that the Lower 48 states, excluding the Gulf of America, will contribute 11.22 million barrels per day of the total projected figure for 2025 and 11.10 million barrels per day of the total projected figure for 2026. The Federal Gulf of America is expected to contribute 1.89 million barrels per day to this year’s total projected figure and 1.96 million barrels per day to next year’s total

Read More »

Cenovus Buys Into MEG in Open Market as Takeover Bid Advances

Cenovus Energy Inc said Tuesday it has acquired 8.5 percent of MEG Energy Corp’s common stock through open trading, even as its takeover offer for the pure-play oil sands producer progresses with Strathcona Resources Ltd dropping a competing bid. The open-market acquisition involved about 21.72 million shares out of around 254.38 million MEG common shares issued and outstanding, Toronto- and New York-listed Cenovus said in a statement on its website. Cenovus started buying into Toronto-listed MEG October 8, according to Tuesday’s statement. That day, Cenovus announced it had signed a new agreement with MEG that amended the price and the cash-and-stock allocation for the takeover. The transactions happened “through the facilities of the Toronto Stock Exchange or other Canadian alternative exchanges or markets”, Cenovus said. “The MEG common shares were acquired by Cenovus in furtherance of its previously announced transaction with MEG”, Cenovus said. “To the extent Cenovus is able, the company intends to vote any acquired shares in favor of the transaction”. Under the amended agreement, each MEG shareholder can opt to receive for each MEG common share CAD 29.5 ($21) in cash or 1.24 Cenovus common shares, subject to a maximum of $3.8 billion in cash and 157.7 million Cenovus common shares. “The pro-rated consideration represents a mix of 50 percent cash and 50 percent Cenovus common shares”, Cenovus said in a press release October 8. “On a fully pro-rated basis, the consideration per MEG common share represents approximately CAD 14.75 in cash and 0.62 of a Cenovus common share. “The fully pro-rated consideration for MEG represents a value of approximately CAD 29.8 per MEG share at Cenovus’ closing share price on October 7, 2025, an increase of approximately CAD 1.32 per share based on current market pricing relative to the terms of the original arrangement agreement. “The consideration under

Read More »

Strategists Forecast Week on Week USA Crude Stock Build

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, Macquarie strategists, including Walt Chancellor, revealed that they are forecasting that U.S. crude inventories will be up by 5.2 million barrels for the week ending October 10. “This follows a 3.7 million barrel build in the prior week, with the crude balance realizing relatively close to our expectations,” the strategists said in the report. “For this week’s balance, from refineries, we model a large reduction in crude runs (-0.6 million barrels per day) following a surprisingly strong print last week,” they added. “Among net imports, we model a large reduction, with exports higher (+0.2 million barrels per day) and imports lower (-0.6 million barrels per day) on a nominal basis,” they continued. The strategists warned in the report that timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj.+transfers), we look for a moderate increase (+0.5 MBD) on a nominal basis this week,” the strategists went on to state in the report. “Rounding out the picture, we model a larger increase (+0.5 million barrels) in SPR [Strategic Petroleum Reserve] stocks this week,” they added. Also in the report, the Macquarie strategists noted that, “among products”, they “look for draws in gasoline (-1.4 million barrels) and distillate (-0.6 million barrels), with a build in jet (+1.1 million barrels)”. “We model implied demand for these three products at ~14.2 million barrels per day for the week ending October 10,” the strategists stated in the report. In its latest weekly petroleum status report at the time of writing, which was released on October 8 and includes data for the week ending October 3, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in

Read More »

Greenflash Acquires Planned 200 MW BESS Project in Texas

Greenflash Infrastructure LP said Tuesday it had acquired a proposed 200-megawatt (MW) battery energy storage system (BESS) project in Fort Bend County, Texas, from Advanced Power. “The fully permitted, interconnection-ready project is expected to receive Notice to Proceed in 2026, with commercial operations targeted for mid-2027”, Houston-based power investor Greenflash said in a press release. Greenflash managing partner Vishal Apte said, “This acquisition adds near-term, execution-ready capacity toward our five-gigawatt ERCOT [Electric Reliability Council of Texas market] buildout”. Advanced Power chief executive Tom Spang said, “ERCOT, like other major power markets in the U.S., has an urgent need for projects that enhance grid reliability”. “As a premier developer of thermal, renewable and now, BESS, technology, Advanced Power is committed to bringing these contemporary power solutions to companies like Greenflash, who recognize the region’s urgent and growing energy and capacity needs”, Spang added. Advanced Power’s Rock Rose project “was selected for its interconnection position, transmission access and capacity to support grid reliability and flexible dispatch”, Greenflash said. “The acquisition supports Greenflash’s strategy to deploy utility-scale battery projects across ERCOT”. Rock Rose is Greenflash’s second battery energy storage project. Earlier this month it said it had completed hybrid tax capital and debt financing for Project Soho, a 400-MW standalone battery storage in Brazoria County, Texas. “The project is the largest standalone BESS currently under construction in TX and is ahead of schedule to energize in Q1 2026, and achieve commercial operations in Q2 2026”, Greenflash said in an online statement October 7. “We designed this financing structure to be a scalable, repeatable template for our five-gigawatt near-term ERCOT pipeline”, said Greenflash co-founder and vice president Joel Chisolm. The financing included a preferred equity investment from funds managed by New York City-based Wafra Inc. “Acadia Infrastructure Capital LP, a North American power infrastructure investment

Read More »

Q&A: IBM’s Mikel Díez on hybridizing quantum and classical computing

And, one clarification. Back in 2019, when we launched our first quantum computer, with between 5 and 7 qubits, what we could attempt to do with that capacity could be perfectly simulated on an ordinary laptop. After the advances of recent years, being able to simulate problems requiring more than 60 or 70 qubits with classical technology is not possible even on the largest classical computer in the world. That’s why what we do on our current computers, with 156 qubits, is run real quantum circuits. They’re not simulated: they run real circuits to help with artificial intelligence problems, optimization, simulation of materials, emergence of models… all that kind of thing. The Basque Government’s BasQ program includes three types of initiatives or projects. The first are related to the evolution of quantum technology itself: how to continue improving error correction, how to identify components of quantum computers, and how to optimize both these and the performance of these devices. From a more scientific perspective, we are working on how to represent the behavior of materials so that we can improve the resistance of polymers, for example. This is useful in aeronautics to improve aircraft suspension. We are also working on time crystals, which, from a scientific perspective, seek to improve precision, sensor control, and metrology. Finally, a third line relates to the application of this technology in industry; for example, we are exploring how to improve the investment portfolio for the banking sector, how to optimize the energy grid, and how to explore logistics problems. What were the major challenges in launching the machine you’re inaugurating today? Why did you choose the Basque Country to implement your second Quantum System Two? Before implementing a facility of this type in a geographic area, we assess whether it makes sense based on
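To make the 60-to-70-qubit claim concrete, the sketch below runs the standard state-vector arithmetic: a brute-force classical simulator must hold 2^n complex amplitudes for n qubits. These are generic back-of-envelope figures under our own assumption of double-precision amplitudes (16 bytes each), not IBM's numbers.

```python
# Back-of-envelope check of why ~60-70 qubits outrun classical simulation.
# A full state-vector simulator stores 2**n complex amplitudes; at double
# precision each amplitude takes 16 bytes. Illustrative arithmetic only.

def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed to hold the full quantum state of n qubits."""
    return (2 ** n_qubits) * 16  # complex128 amplitudes

for n in (30, 60, 70):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB just to store the state")

# 30 qubits: ~16 GiB (an ordinary laptop copes); 60 qubits: ~17 billion GiB;
# 70 qubits: ~17.6 trillion GiB -- far beyond any classical machine.
```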

Read More »

Preparing for 800 VDC Data Centers: ABB, Eaton Support NVIDIA’s AI Infrastructure Evolution

Vendors and operators are already preparing for AI campuses measured in gigawatts. ABB’s announcement underscores the scale of this transition—not incremental retrofits, but entirely new development models for multi-GW AI infrastructure. How ABB Is Supporting the Move to 800-V DC Data Centers ABB says its joint work with NVIDIA will focus on advanced power solutions to enable 800-V DC architectures supporting 1-MW racks. Expect DC-rated breakers, protection relays, busways, and power shelves engineered for higher DC voltages, along with interfaces for liquid-cooled rack busbars. In parallel with the NVIDIA partnership, ABB has introduced an AI-ready refresh of its MNS® low-voltage switchgear, integrating SACE Emax 3 breakers with enhanced sensing and analytics to reduce footprint while improving selectivity and uptime. These components form the foundational building blocks of the higher-density electrical rooms and prefabricated skids that will define next-generation data centers. ABB’s MegaFlex UPS line already targets hyperscale and colocation environments with megawatt-class modules (UL 415/480-V variants), delivering high double-conversion efficiency and seamless integration with ABB’s Ability™ Data Center Automation platform—unifying BMS, EPMS, and DCIM functions. As racks transition to 800-V DC and liquid-cooled buses, continuous thermal-electrical co-optimization becomes essential. In this new paradigm, telemetry and controls will matter as much as copper and coolant. NVIDIA’s technical brief positions 800-V DC as the remedy for today’s inefficiencies—reducing space, cable mass, and conversion losses that accompany rising rack densities of 200 to 600 kW and beyond. The company’s 800-V rollout is targeted for 2027, with ecosystem partners spanning the entire electrical stack. Early signals from the OCP Global Summit 2025 confirm that vendors are moving rapidly to align their products and architectures with this vision. The Demands of Next-Generation GPUs NVIDIA’s Vera Rubin NVL144 rack design previews what the next phase of AI infrastructure will require: 45 °C liquid cooling, liquid-cooled busbars,
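As a rough illustration of why a higher bus voltage cuts cable mass and conversion losses (generic Ohm's-law arithmetic, not figures from NVIDIA's brief or ABB), the sketch below treats each bus as a simple DC feed and uses 54 V and 415 V only as stand-ins for today's in-rack and facility distribution levels: for fixed power P = V × I, current falls linearly with voltage and resistive I²R loss falls with its square.

```python
# Rough illustration under a simple DC assumption (ignoring AC phase factors
# and conversion stages): same power at a higher bus voltage means
# proportionally less current, and I^2*R cable loss drops with the square
# of that reduction. Not vendor data.

RACK_POWER_W = 1_000_000  # the 1 MW rack class cited for 800-V DC designs

for volts in (54, 415, 800):
    amps = RACK_POWER_W / volts
    print(f"{volts:>4} V bus -> {amps:>8,.0f} A for a 1 MW rack")

# Relative I^2*R loss in the same conductor, normalized to the 800 V case:
base = (RACK_POWER_W / 800) ** 2
for volts in (54, 415):
    ratio = (RACK_POWER_W / volts) ** 2 / base
    print(f"{volts} V loses ~{ratio:.0f}x more in the same cable")
```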

Read More »

Nvidia’s DGX Spark desktop supercomputer is on sale now, but hard to find

Industrial demand

Nvidia’s DGX chips are in high demand in industry, though, and it’s more likely that Micro Center’s one-Spark limit is to prevent businesses scooping them up by the rack-load to run AI applications in their data centers. The DGX Spark contains an Nvidia GB10 Grace Blackwell chip, 128GB of unified system memory, a ConnectX-7 smart NIC for connecting two Sparks in parallel, and up to 4TB of storage in a package just 150mm (about 6 inches) square. It consumes 240W of electrical power and delivers 1 petaflop of performance at FP4 precision — that’s one million billion floating point operations with four-bit precision per second. In comparison, Nvidia said, its original DGX-1 supercomputer based on its Pascal chip architecture and launched in 2016 delivered 170 teraflops (170,000 billion operations per second) at FP16 precision, but cost $129,000 and consumed 3,200W. It also weighed 60kg, compared to the Spark’s 1.2kg or 2.65 pounds. Nvidia won’t be the only company selling compact systems based on the DGX Spark design: It said that partner systems will be available from Acer, Asus, Dell Technologies, Gigabyte, HP, Lenovo, and MSI. This article originally appeared on Computerworld.
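One quick way to read those numbers is as performance per watt. The sketch below reuses only the figures quoted above; because the Spark's figure is FP4 and the DGX-1's is FP16, the resulting ratio overstates the like-for-like generational gain and is illustrative arithmetic only.

```python
# Performance-per-watt arithmetic using only the figures quoted in the
# article. Caveat: the Spark number is FP4 and the DGX-1 number is FP16,
# so this is not an apples-to-apples comparison of useful work.

spark_flops, spark_watts = 1e15, 240     # 1 PFLOP (FP4), 240 W
dgx1_flops, dgx1_watts = 170e12, 3200    # 170 TFLOPS (FP16), 3,200 W

spark_eff = spark_flops / spark_watts    # ~4.2e12 FLOPS per watt
dgx1_eff = dgx1_flops / dgx1_watts       # ~5.3e10 FLOPS per watt

print(f"DGX Spark:    {spark_eff:.2e} FLOPS/W (FP4)")
print(f"DGX-1 (2016): {dgx1_eff:.2e} FLOPS/W (FP16)")
print(f"Nominal ratio: {spark_eff / dgx1_eff:.0f}x")  # ~78x on paper
```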

Read More »

Florida’s Data Center Moment: Power, Policy, and Potential

Florida is rapidly positioning itself as one of the next major frontiers for data center development. With extended tax incentives, proactive utilities, and a strategic geographic advantage, the state is aligning power, policy, and economic development in ways that echo the early playbook of Northern Virginia. In the latest episode of The Data Center Frontier Show, Buddy Rizer, Executive Director of Loudoun County Economic Development, and Lila Jaber, Founder of the Florida’s Women in Energy Leadership Forum and former Chair of the Florida Public Service Commission, join DCF to explore the opportunities and lessons shaping Florida’s emergence as a data center powerhouse. Energy and Infrastructure: A Strong Starting Position Unlike regions grappling with grid strain, Florida begins its data center growth story with energy abundance. While Loudoun County, Virginia—home to the world’s largest concentration of data centers—faced a 600 MW power deficit last year and could reach 12 GW of demand by 2030, Florida maintains excess generation capacity and robust renewable energy integration. Utilities like Florida Power & Light (FPL) and Duke Energy are already preparing for hyperscale and AI-driven loads, filing new large-load tariff structures to balance growth with ratepayer protection. Over the past decade, Florida utilities have also invested billions to harden their grids against hurricanes and extreme weather, resulting in some of the most resilient energy infrastructure in the country. Florida’s 10-year generation planning requirement, which ensures a diverse portfolio including nuclear, solar, and battery storage, further positions the state to meet growing digital infrastructure needs through hybrid on-site generation and demand-response capabilities. Economic and Workforce Advantages The state’s renewed sales tax exemptions for data centers through 2037—and the raised 100 MW IT load threshold—signal a strong bid to attract hyperscale operators and large-scale AI campuses. Florida also offers a competitive electricity rate structure comparable to Virginia’s

Read More »

Inside Blackstone’s Electrification Push: From Shermco to the Power Backbone of AI Data Centers

According to the National Electrical Manufacturers Association (NEMA), U.S. energy demand is projected to grow 50% by 2050. Electrical manufacturers have invested more than $10 billion since 2021 in new technologies to expand grid and manufacturing capacity, also reducing reliance on materials from China by 32% since 2018. Power access, sustainable infrastructure, and land acquisition have become critical factors shaping where and how data center facilities are built. As we previously reported in Data Center Frontier, investors realized this years ago, viewing these facilities both as technology assets and a unique convergence of real estate, utility infrastructure, and mission-critical systems that can also generate revenue. One of those investors is global asset manager Blackstone, which through its Energy Transition Partners private equity arm, recently acquired Shermco Industries for $1.6 billion. Announced August 21, the deal is part of Blackstone’s strategy to invest in companies that support the growing demand for electrification and a more reliable power grid. The goal is to strengthen data center infrastructure reliability and expand critical electrical services. Founded in 1974, Texas-based Shermco is one of the largest electrical testing organizations accredited by the InterNational Electrical Testing Association (NETA). The company operates in a niche yet important space: providing lifecycle electrical services, including maintenance, testing, commissioning, repair, and design, in support of data centers, utilities, and industrial clients. It has more than 40 service centers in the U.S. and Canada. In addition to helping Blackstone support its electrification and power grid reliability goals, the Shermco purchase is also part of Blackstone’s strategy to increase scale—growing revenue without a substantial increase in resources—thus expanding its footprint and capabilities within the essential energy services sector. As data centers expand globally, become more energy intensive, and are pressured to incorporate renewables and modernize grids, Blackstone’s leaders plan to leverage Shermco’s

Read More »

Cooling, Compute, and Convergence: How Strategic Alliances Are Informing the AI Data Center Playbook

Schneider Electric and Compass Datacenters: Prefabrication Meets the AI Frontier “We’re removing bottlenecks and setting a new benchmark for AI-ready data centers.” — Aamir Paul, Schneider Electric In another sign of how collaboration is accelerating the next wave of AI infrastructure, Schneider Electric and Compass Datacenters have joined forces to redefine the data center “white space” build-out: the heart of where power, cooling, and compute converge. On September 9, the two companies unveiled the Prefabricated Modular EcoStruxure™ Pod, a factory-built, fully integrated white space module designed to compress construction timelines, reduce CapEx, and simplify installation while meeting the specific demands of AI-ready infrastructure. The traditional fit-out process for the IT floor (i.e. integrating power distribution, cooling systems, busways, cabling, and network components) has long been one of the slowest and most error-prone stages of data center construction. Schneider and Compass’ new approach tackles that head-on, shifting the entire workflow from fragmented on-site assembly to standardized off-site manufacturing. “The traditional design and approach to building out power, cooling, and IT networking equipment has relied on multiple parties installing varied pieces of equipment,” the companies noted. “That process has been slow, inefficient, and prone to errors. Today’s growing demand for AI-ready infrastructure makes traditional build-outs ripe for improvement.” Inside the EcoStruxure Pod: White Space as a Product The EcoStruxure Pod packages every core element of a high-performance white space environment (power, cooling, and IT integration) into a single prefabricated, factory-tested unit. Built for flexibility, it supports hot aisle containment, InRow cooling, and Rear Door Heat Exchanger (RDHx) configurations, alongside high-power busways, complex network cabling, and a technical water loop for hybrid or full liquid-cooled deployments. By manufacturing these pods off-site, Schneider Electric can deliver a complete, ready-to-install white space module that arrives move-in ready. Once delivered to a Compass Datacenters campus, the

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
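The "LLM as a judge" pattern mentioned here is straightforward to sketch: several cheaper models propose answers and a single model picks among them. The snippet below is a minimal illustration under assumptions of our own; call_model is a hypothetical placeholder for whatever chat-completion API an organization actually uses, and the model names are invented.

```python
# Minimal sketch of the "LLM as a judge" pattern described above.
# call_model() is a hypothetical placeholder for any chat-completion API;
# swap in your provider's client. Model names are invented. Not production code.

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send a prompt to the named model and return its reply."""
    raise NotImplementedError("Wire this up to your inference provider.")

def generate_candidates(task: str, models: list[str]) -> list[str]:
    # Fan the task out to several (cheaper) models to get candidate answers.
    return [call_model(m, task) for m in models]

def pick_best(task: str, candidates: list[str], judge_model: str) -> str:
    # Ask a single judge model to choose among the candidates.
    numbered = "\n".join(f"{i}: {c}" for i, c in enumerate(candidates))
    verdict = call_model(
        judge_model,
        f"Task: {task}\nCandidate answers:\n{numbered}\n"
        "Reply with only the number of the best answer.",
    )
    return candidates[int(verdict.strip())]  # assumes the judge follows the format

# Usage sketch:
# answers = generate_candidates("Summarize this clause...", ["cheap-model-a", "cheap-model-b", "cheap-model-c"])
# best = pick_best("Summarize this clause...", answers, "stronger-judge-model")
```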

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »