Stay Ahead, Stay ONMINE

Anthropic’s chief scientist on 5 ways agents will be even better in 2025

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry.  “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week. In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary. In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you. Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana. Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025. (Kaplan’s answers have been lightly edited for length and clarity.) 1/ Agents will get better at using tools “I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use.  “So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful.  “We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.” 2/ Agents will understand context   “Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have. “I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected. “Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.” 3/ Agents will make coding assistants better “We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities. “I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI. “My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.” 4/ Agents will need to be made safe “We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection.  [Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.] “Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do. “And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing. “So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry. 

“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week.

In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary.

In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you.

Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana.

Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025.

(Kaplan’s answers have been lightly edited for length and clarity.)

1/ Agents will get better at using tools

“I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use. 

“So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful. 

“We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.”

2/ Agents will understand context  

“Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have.

Jared Kaplan

“I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected.

“Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.”

3/ Agents will make coding assistants better

“We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities.

“I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI.

“My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.”

4/ Agents will need to be made safe

“We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection. 

[Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.]

“Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do.

“And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing.

“So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Delays in TSMC’s Arizona plant spark supply chain worries

Delays at TSMC’s Arizona plant could compel its customers to rely on Taiwan-based facilities, leaving them vulnerable to geopolitical risks tied to Taiwan’s dominance in semiconductor production. “This situation could also delay the rollout of next-generation products in the US market, affecting timelines for AI, gaming, and high-performance computing innovations,”

Read More »

ADNOC, AIQ Complete Trial Phase of Agentic AI Solution

In a release sent to Rigzone recently by the ADNOC team, the company revealed that it and AIQ had completed the trial phase of an agentic AI solution dubbed ENERGYai, which ADNOC described as “the world’s first of its kind agentic artificial intelligence solution tailored for the energy sector”. “The 90 day proof of concept trial demonstrated that ENERGYai’s agentic AI – AI ‘agents’ that are trained in specific tasks across the energy value chain – can deliver significant improvements in the pace and accuracy of upstream exploration through rapid, precise, and detailed seismic survey analysis, alongside relevant, actionable insights to support production optimization at ADNOC’s existing wells,” ADNOC stated in the release. The company noted that the results of the trial “delivered promising real-world results, including a 70 percent improvement in accuracy in major seismic interpretation aspects and significant improvements in advanced reservoir monitoring and anomaly detection”. ADNOC also said the trial “showed promising results in enhancing data quality, as it vastly improved the reliability and usability of operational data inputs by detecting errors, standardizing formats and enriching datasets”. The company revealed in the release that the first operational, scalable version of ENERGYai is expected to be completed in the first half of this year. This version will include five fully operational AI agents covering tasks within subsurface operations and will be test-deployed across a number of upstream assets, with plans to scale its application to thousands of additional wells, ADNOC stated in the release. “The successful completion of this proof of concept for ENERGYai has shown extremely promising results and has confirmed the potential of the solution to be a powerhouse for value creation and sustainable energy production,” Musabbeh Al Kaabi, ADNOC Upstream CEO, said in the release. “Building on this initial achievement, ENERGYai will leverage petabytes of

Read More »

Advanced reactors, interstate cooperation part of New York’s nuclear future: Gov. Hochul

Dive Brief: New York will develop a “master plan” for advanced nuclear development, participate in a multi-state initiative on nuclear cost-sharing and development risk-reduction, and support Constellation Energy’s efforts to study advanced nuclear reactor siting at its Nine Mile Point nuclear plant, Gov. Kathy Hochul, D, said Jan. 14. The Master Plan for Responsible Advanced Nuclear Development in New York will build on the New York State Energy Research and Development Authority’s final Blueprint for Advanced Nuclear Technologies and is expected to be published by the end of 2026, Hochul said. Hochul’s announcements come amid an ongoing state request for information seeking input from entities pursuing or considering advanced nuclear development in upstate New York. Dive Insight: Hochul discussed New York’s nuclear ambitions as part of a $1 billion slate of climate and clean energy investments detailed during her annual State of the State address on Jan. 14. Characterizing the investment as “a monumental step towards a greener, more affordable future for New York State,” Hochul described the nuclear initiatives as well as plans to run state agencies on 100% renewable energy by 2030, decarbonize New York’s state and city university campuses and advance the state’s cap-and-invest emissions reduction program. Constellation Energy plans to apply for a U.S. Department of Energy grant “to support the company’s efforts to seek an early site permit from the Nuclear Regulatory Commission for one or more advanced nuclear reactors at the Nine Mile Point Clean Energy Center in Oswego, New York,” the publicly traded power producer said Jan. 15.  An early site permit approves a specific site for future nuclear reactor development. Each permit is valid for 10 to 20 years and can be extended for another 10 to 20 years, according to the NRC. DOE is expected to name grant awardees in early

Read More »

Vecino to Build Gas Storage Caverns at Piedras Pintas Salt Dome

Vecino Energy Partners LLC has signed an agreement with DRW Energy Trading LLC to build hydrocarbon storage services at the Piedras Pintas salt dome in Duval County, Texas. Phase 1 is planned to have two natural gas storage caverns, associated infrastructure and a new 40-mile pipeline connecting the project to the gas hub in Agua Dulce. The storage caverns are planned to start service 2028 and 2029. “Vecino controls enough surface and mineral interests to develop both a gas and liquids storage business rivaling the premier facilities on the Gulf Coast with room for over 50 billion cubic feet of high-deliverability natural gas capacity, over 100 million barrels of liquids storage or a combination thereof”, said a joint statement Tuesday. “The project will offer connectivity to Agua Dulce and be the closest salt dome storage to Corpus Christi, Brownsville and Mexico”. The statement said, “DRW will serve as the anchor customer for the project, underwriting the gas caverns in Phase I”. “DRW, with its deep market expertise, will also play a key role in ensuring the project’s success as a major shipper and investor in the region”, the statement added. “This project underscores DRW’s commitment to advancing energy infrastructure along the Texas-Mexico border and its strategic focus on providing innovative, market-driven solutions to the energy industry”. Nelson Ferries, Vecino executive vice president for origination, said, “Natural gas volatility continues to be a real challenge for producers, marketers and industrial users, and this facility is specifically designed to navigate those short-term market dislocations”. “Furthermore, we believe that natural gas and liquid storage at this location will help unlock further industrial development near Corpus Christi and Brownsville”, Ferries added. Jonathan Harms, DRW energy managing director for origination, said, “We believe this project presents an opportunity to help strengthen the energy infrastructure in

Read More »

Scottish and UK government must work together, GB Energy inquiry told

A committee hearing to discuss the future of the government’s flagship energy company GB Energy in Aberdeen on Wednesday emphasised the need for UK and Scottish governments to work together in the transition to net zero. At the Scottish Affairs Committee hearing, Emma Pinchbeck, chief executive of the Climate Change Committee (CCC), stressed the need for collaboration on climate goals and admitted she is “frustrated” communities are so far failing to feel the benefit of energy transition. “There’s an understanding in Westminster that we need to work with Scotland to deliver some of the key things in the government’s policy agenda but also in climate targets more broadly,” she told the hearing. Pinchbeck said the Scottish government will need to increasingly collaborate with regulators and strategic institutions in the UK, such as the National Energy System Operator (NESO). She warned that the planning process for energy infrastructure is “outdated” across both Scotland and the UK. Owen Bellamy, who is head of energy supply decarbonisation and resilience at the climate committee, said that the Scottish electricity system, where more electricity is generated than consumed, is not “completely separate” from the rest of the UK. They raised the need for different energy sectors to work together, for example on offshore site leasing and consenting, which is important for both the offshore wind and oil and gas sectors. “In our view as the committee for net zero overall, the economy in terms of GDP and growth is better under a net zero trajectory than an economy that is still dependent on fossil fuels,” Pinchbeck said. “That’s for a couple of reasons. One, the energy transition is gathering pace globally so electrification technologies, in particular, and renewables, are looking set to be a very dominant fuel in the future energy system.” However, Richard Hardy,

Read More »

Federal policy rollbacks won’t stop electricity growth

Rawley Loken is a Senior Managing Consultant, Amber Mahone is a Managing Partner and Tory Clark is a Parter at Energy and Environmental Economics, also known as E3. The electric sector is at a tipping point, with new demands from data centers, electric vehicles, heat pumps and manufacturing driving demand growth at a pace not seen in decades, highlighting the need for new grid infrastructure investment and integrated resource planning strategies.  With renewed Republican control of the presidency and Congress, questions abound over which of these trends might be slowed or accelerated. Donald Trump campaigned against the Inflation Reduction Act and Biden-era federal regulations that promote electrification, efficiency and emissions reductions, while asserting goals to increase domestic manufacturing through tariff-based trade policies. Our analysis of the cumulative impact of technology adoption trends and expected policies on the electric sector, using E3’s U.S. Pathways model, suggests that the recent growth trends are likely to continue and even accelerate under the next administration. We project that U.S. electricity retail sales will grow rapidly and could increase at a rate of 1.6% to 2.2% per year for the next decade under a range of policy and data center growth outcomes reflected in the E3 Low and E3 High scenarios (Figure 1). For comparison, U.S. retail sales of electricity have been nearly flat since 2007, growing at only 0.2% per year. Figure 1: Total U.S. annual electricity retail sales, historical and projected by scenario Permission granted by E3 While electricity demand is expected to increase across the U.S., the rate of growth will vary widely by state (Figure 2). States with the highest projected electric load growth rates include those with the largest growth in data centers (e.g. Virginia) and those with new grid connected load from the oil and gas sector (e.g. New

Read More »

Liquid Wind to Build eFuel Plant in Finland

Liquid Wind AB has signed an agreement with Turun Seudun Energiantuotanto Oy (TSE) to build an eMethanol facility that can produce up to 100,000 metric tons a year in Naantali, Finland. The facility is planned to be adjacent to TSE’s Naantali 4 power plant, which will supply biogenic carbon dioxide and steam for producing eMethanol. “In addition, the process and waste heat of Liquid Wind’s facility will be used for district heat, reducing the share of incineration-based district heat production by TSE”, Gothenburg, Sweden-based eFuel firm Liquid Wind said in an online statement. The partners plan to make a final investment decision next year. The facility is targeted to be operational 2029. Liquid Wind, as the main project developer, has started the permitting process, the company said. Liquid Wind and TSE, a power and heating utility serving Turku, will “explore the possibilities of ensuring a sufficient supply of renewable electricity to produce eMethanol”, it said. “eMethanol is a very versatile commodity that can replace fossil fuels in hard-to-abate sectors such as shipping and aviation while reducing CO2 [carbon dioxide] emissions”, Liquid Wind said. Claes Fredriksson, founder and chief executive of Liquid Wind, said, “Locally, in the city of Naantali, we will focus on CCU [carbon capture and utilization] and the reuse of CO2 and waste heat”. “Globally, we aim to support the transition by allowing our off-takers to shift from fossil fuels to low-carbon eFuel produced at this facility”, Fredriksson said. TSE managing director Pertti Sundberg commented, “This is an important project for us to achieve the climate goals of our owners and to secure renewable district heating for the Turku region in the future”. The European Union requires member states to migrate their district heating and cooling systems to 100 percent renewable energy, waste heat or a combination of the two

Read More »

US GPU export limits could bring cold war to AI, data center markets

Eighteen countries, including the UK, Canada, Sweden, France, Germany, Japan, and South Korea, are exempted from the AI export caps. The Biden administration had previously banned the export of some powerful AI chips to China, Russia, and other adversaries in rules from 2022 and 2023. But other countries friendly to the US, including Mexico, Israel, India, and Saudi Arabia, would be subject to the quotas. The export limits would take effect 120 days from the Jan. 13 order, and it’s unclear whether the incoming Trump administration will amend or rewrite the rule, although Trump has targeted China as a primary economic competitor of the US. The cost of AI In addition to cutting off most of the world from large AI chip purchases, the rule will force countries such as China and Russia to pump up their own AI capabilities, ultimately reducing US AI leadership, claims Aible’s Sengupta.

Read More »

Sustainability, grid demands, AI workloads will challenge data center growth in 2025

Cloud training for AI models Uptime believes that most AI models will be trained in the cloud rather than on dedicated enterprise infrastructure, as cloud services provide a more cost-effective way to fine-tune foundation models for specific use cases. The incremental training required to fine-tune a foundation model can be done cost-effectively on cloud platforms without the need for a large, expensive on-premises cluster. Enterprises can leverage on-demand cloud resources to customize the foundation model as needed, without investing the capital and operational costs of dedicated hardware. “Because fine-tuning requires only a relatively small amount of training, for many it just wouldn’t make sense to buy a huge, expensive dedicated AI cluster for this purpose. The foundation model, which has already been trained by someone else, has taken the burden of most of the training away from us,” said Dr. Owen Rogers, research director for cloud computing at Uptime. “Instead, we could just use on-demand cloud services to tweak the foundation model for our needs, only paying for the resources we need for as long as we need them.” Data center collaboration with utilities Uptime expects new and expanded data center developers will be asked to provide or store power to support grids. That means data centers will need to actively collaborate with utilities to manage grid demand and stability, potentially shedding load or using local power sources during peak times. Uptime forecasts that data center operators “running non-latency-sensitive workloads, such as specific AI training tasks, could be financially incentivized or mandated to reduce power use when required.” “The context for all of this is that the [power] grid, even if there were no data centers, would have a problem meeting demand over time. They’re having to invest at a rate that is historically off the charts. It’s not just

Read More »

UK Government’s Bold AI Plan: A Game-Changer for Data Centers and Economic Growth?

The UK government has presently announced its comprehensive “AI Opportunities Action Plan,” positioning artificial intelligence as a cornerstone for economic growth and public service transformation over the next decade. The bold initiative, spearheaded by Prime Minister Keir Starmer, aims to make Britain a global leader in AI development and adoption, with significant implications for the data center industry.   Britain’s ambitious AI roadmap taps into the growing synergy between artificial intelligence and data infrastructure. With dedicated AI Growth Zones and a focus on sustainable energy, the UK is setting the stage for an AI-driven economy that aligns with the next generation of data center demands. The data center industry should watch these developments closely, as they signal opportunities for long-term growth in a rapidly evolving market.   AI Infrastructure Prioritization Meets Major Private Sector Investments    The UK government plan introduces “AI Growth Zones,” areas designed to streamline planning approvals for data centers and enhance access to energy infrastructure.  These zones will focus on de-industrialized regions, providing a dual benefit of revitalizing local economies and accelerating the rollout of AI infrastructure. The first such zone will be established in Culham, Oxfordshire, leveraging local expertise in sustainable energy research, including fusion technologies.   Leading tech firms, including Vantage Data Centers, Nscale, and Kyndryl, have committed £14 billion to AI infrastructure development under the plan, creating 13,250 jobs across the UK, according to a press release.  Vantage Data Centers alone plans to invest over £12 billion to establish one of Europe’s largest campuses in Wales and additional facilities nationwide, generating 11,500 jobs.   Plan Harnesses AI for Both Public, Private Sectors  A significant component of the plan is a proposed 20x increase in public compute capacity by 2030, starting with the development of a new supercomputer to support AI innovation. Alongside this supercharging of

Read More »

Prologis and Skybox Advance Warehouse Conversion Strategy with Illinois Data Center Sale

Prologis, among the global leaders in industrial real estate, has taken another major step into the data center market with the sale of a newly developed turnkey data center in Illinois. With the deal for the sale announced last December, partnering with Skybox Datacenters, Prologis had initially converted one of its existing warehouses into a 32 megawatt (MW) facility, demonstrating as far back as 2021 the growing appeal of adaptive reuse for digital infrastructure. As reported by Data Center Dynamics’ Dan Swinhoe: “Skybox said the facility was located in the Elk Grove village area of the city. Images shared by Skybox and Prologis suggest it was Chicago 1, the data center the two companies completed in early 2022 […] DCD reached out for more information. Prologis confirmed Chicago 1 has been sold; the powered shell has been completed, with the turnkey development is in process. The facility spans 190,000 sq ft on a ten-acre site.” The converted facility’s buyer, HMC Capital, sees this acquisition as a marquee asset for its newly launched DigiCo Infrastructure REIT, which targets high-quality data center investments across the United States and Australia. The deal highlights the rapid evolution of Prologis’ data center strategy and the increasing convergence of industrial real estate and digital infrastructure. Prologis’ Growing Presence in Data Centers Prologis is no stranger to data center development, having been featured in prior DCF coverage for its strategic moves into the rapidly burgeoning sector. The Illinois project reflects Prologis’ focus on unlocking higher-value uses for its vast portfolio of warehouses.  According to Dan Letter, President of Prologis, “Warehouse conversions in key markets offer a compelling growth opportunity while delivering outsized returns to our investors and meeting customer demand for digital infrastructure.” To support this strategy, Prologis has aggressively scaled its power procurement capabilities, securing 1.6

Read More »

President Biden’s Executive Order on AI Data Center Construction: Summary and Commentary

Issued this week, President Biden’s “Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure” represents a transformative policy moment for the data center industry if implemented, underscoring the convergence of two equally transformative forces: the AI revolution and the clean energy transition. For the data center industry, the policy marks a clear shift toward a strategic, mission-critical role in national security and economic resilience. The Executive Order’s vision also aligns with definitively emerging trends in the contemporary data center industry, particularly the pivot toward sustainability and energy efficiency. The policy’s emphasis on clean energy infrastructure—whether through nuclear, geothermal, or long-duration storage—addresses the industry’s growing focus on renewable power. However, executing this vision will require massive investments in grid modernization and streamlined permitting processes, which have historically been bottlenecks for large-scale infrastructure projects. The directive to align new AI electricity demands with clean energy sources puts a spotlight on the challenges posed by AI’s computational intensity. Hyperscale operators and colocation providers will need to redouble their rethinking of power procurement strategies, with a renewed focus on distributed energy resources and partnerships with utility providers. Additionally, the Executive Order’s call for high labor standards and community engagement reflects growing federal acknowledgment of data centers’ societal footprint. While the industry has made strides in community outreach, such measures ensure data center developments are not just sustainable but also equitable, creating jobs and fostering goodwill in the communities where they operate. For what it explicitly defines as “frontier AI data centers,” the Executive Order also seeks to provide a regulatory framework to streamline development, while ensuring robust cybersecurity and supply chain integrity. Importantly though, balancing the urgency of AI infrastructure development with the complex demands of energy transition and national security will require unprecedented levels of public-private collaboration. The Executive Order apparently isn’t just

Read More »

Edged Data Centers Builds for the Future On Heels of Innovative Nuclear Power Partnership

MERLIN Properties and Edged Energy to Build Gigawatt-Scale AI Data Center Campuses in Spain To wit, in a furtherance of its groundbreaking partnership in Europe, MERLIN Properties and Edged Energy are collaborating with the regional government of Extremadura, Spain, to establish two state-of-the-art data center campuses. These facilities, designed to support the burgeoning demand for generative AI and advanced computing, promise to set new standards for sustainability and efficiency in the data center industry. A Vision for Sustainability and Growth in Extremadura The data centers, located in Navalmoral de la Mata (Cáceres Province) and Valdecaballeros (Badajoz Province), will each deliver up to 1 GW of IT capacity. Featuring industry-leading innovations, the campuses will boast an average PUE of 1.15, ensuring ultra-efficient operations. Edged says the project represents a significant leap forward in green data center development, aligning with Extremadura’s commitment to leveraging innovation and technology for economic and environmental progress. “Our mission is to create data centers for positive impact, and we are proud to contribute to the Iberian Peninsula’s growing digital economy,” said Jakob Carnemark, CEO of Edged Energy. “The region offers unprecedented fiber connectivity with massive submarine connections worldwide and boasts reliable, abundant, and low-cost renewable energy.” Harnessing Renewable Energy and Cutting-Edge Cooling Technology The Extremadura facilities will operate entirely on electricity from renewable sources, capitalizing on the region’s vast sustainable energy capacity. Extremadura currently produces six times the electricity it consumes, making it an ideal location for gigawatt-scale data centers. The project’s waterless cooling system, ThermalWorks, will enable the facilities to operate without consuming water, a critical innovation for such regions with limited water resources. The system will support ultra-high rack densities of up to 200kW per rack to accommodate the advanced computing demands of AI workloads. Strategic Location and Connectivity The Iberian Peninsula is rapidly becoming

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »