
Early days for AI: Only 25% of enterprises have deployed, few reap rewards

2025 is anticipated to be the year AI gets real, bringing specific, tangible benefits to the enterprise.

However, according to a new State of AI Development Report from AI development platform Vellum, we’re not quite there yet: Just 25% of enterprises have deployed AI into production, and a quarter of those have yet to see measurable impact.

This seems to indicate that many enterprises have not yet identified viable use cases for AI, keeping them (at least for now) in a pre-build holding pattern. 

“This reinforces that it’s still pretty early days, despite all the hype and discussion that’s been happening,” Akash Sharma, Vellum CEO, told VentureBeat. “There’s a lot of noise in the industry, new models and model providers coming out, new RAG techniques; we just wanted to get a lay of the land on how companies are actually deploying AI to production.”

Enterprises must identify specific use cases to see success

Vellum interviewed more than 1,250 AI developers and builders to get a true sense of what’s happening in the AI trenches. 

According to the report, the majority of companies not yet in production are at earlier stages of their AI journeys — building out and evaluating strategies and proofs of concept (PoCs) (53%), beta testing (14%) and, at the earliest stage, talking to users and gathering requirements (7.9%).

By far, enterprises are most focused on building document parsing and analysis tools and customer service chatbots, according to Vellum. But they are also interested in applications incorporating natural-language analytics, content generation, recommendation systems, code generation and automation, and research automation.

Developers report competitive advantage (31.6%), cost and time savings (27.1%) and higher user adoption rates (12.6%) as the biggest impacts they’ve seen so far. Interestingly, though, 24.2% have yet to see any meaningful impact from their investments.

Sharma emphasized the importance of prioritizing use cases from the very start. “We’ve anecdotally heard from people that they just want to use AI for the sake of using AI,” he said. “There’s an experimental budget associated with that.” 

While this makes Wall Street and investors happy, it doesn’t mean AI is actually contributing anything, he pointed out. “Something generally everyone should be thinking about is, ‘How do we find the right use cases?’ Usually, once companies are able to identify those use cases, get them into production and see a clear ROI, they get more momentum, they get past the hype. That results in more internal expertise, more investment.”

OpenAI still at the top, but a mixture of models will be the future

When it comes to models used, OpenAI maintains the lead (no surprise there), notably with GPT-4o and GPT-4o mini. But Sharma pointed out that 2024 offered more optionality, either directly from model creators or through platform solutions like Azure and AWS Bedrock. And providers hosting open-source models such as Llama 3.2 70B — among them Groq, Fireworks AI and Together AI — are gaining traction, too.

“Open-source models are getting better,” said Sharma. “Closed-source competitors to OpenAI are catching up in terms of quality.”

Ultimately, though, enterprises aren’t going to stick with just one model — they will increasingly lean on multi-model systems, he forecasted.

“People will choose the best model for each task at hand,” said Sharma. “While building an agent, you might have multiple prompts, and for each individual prompt the developer will want to get the best quality, lowest cost and lowest latency, and that may or may not come from OpenAI.”
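To make that routing idea concrete, here is a minimal sketch of per-prompt model selection. The task names, model identifiers and cost figures are hypothetical placeholders (not anything from the Vellum report), and `call_model` is a stub standing in for whatever provider SDK a team actually uses:

```python
# Illustrative per-prompt routing: each step of an agent uses the model
# that best balances quality, cost and latency for that specific task.
# Model names and costs below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str                  # provider model identifier
    cost_per_1k_tokens: float  # USD, illustrative only
    quality_tier: int          # 1 = highest quality

ROUTES = {
    "classify_intent":  ModelChoice("small-fast-model", 0.00015, 3),
    "draft_response":   ModelChoice("flagship-model",   0.0025,  1),
    "summarize_thread": ModelChoice("open-weights-70b", 0.0009,  2),
}

def call_model(model_name: str, prompt: str) -> str:
    # Stub standing in for a real provider SDK call.
    return f"[{model_name}] response to: {prompt[:40]}"

def run_step(task: str, prompt: str) -> str:
    """Route a single agent step to the model assigned to that task."""
    model = ROUTES[task]
    return call_model(model.name, prompt)

print(run_step("classify_intent", "Where is my order?"))
```

In practice, a routing table like this is maintained either in application code or inside an orchestration platform, and updated as teams benchmark new models.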

Similarly, the future of AI is undoubtedly multimodal, with Vellum seeing a surge in adoption of tools that can handle a variety of tasks. Text is the undisputed top use case, followed by file creation (PDFs or Word), images, audio and video.

Also, retrieval-augmented generation (RAG) is a go-to when it comes to information retrieval, and more than half of developers are using vector databases to simplify search. Top open-source and proprietary options include Pinecone, MongoDB, Qdrant, Elasticsearch, pgvector, Weaviate and Chroma.
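As a rough illustration of the RAG pattern described here, the sketch below uses a toy in-memory index and a stand-in embedding function; a production system would swap in a learned embedding model and one of the vector databases named above:

```python
# Minimal retrieval-augmented generation loop with an in-memory "vector store".
import math

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: hashed bag-of-characters.
    vec = [0.0] * 64
    for ch in text.lower():
        vec[hash(ch) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Refunds are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved context is then prepended to the model prompt.
context = retrieve("How fast are refunds?")
prompt = f"Answer using this context: {context}\n\nQuestion: How fast are refunds?"
```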

Everyone’s getting involved (not just engineering)

Interestingly, AI is moving beyond just IT and becoming democratized across enterprises (akin to the old ‘it takes a village’). Vellum found that while engineering teams were the most involved in AI projects (82.3%), they are being joined by leadership and executives (60.8%), subject matter experts (57.5%), product teams (55.4%) and design departments (38.2%).

This is largely due to the ease of use of AI (as well as the general excitement around it), Sharma noted. 

“This is the first time we’re seeing software being developed in a very, very cross-functional way, especially because prompts can be written in natural language,” he said. “Traditional software usually tends to be more deterministic. This is non-deterministic, which brings more people into the development fold.”

Still, enterprises continue to face big challenges — notably around AI hallucinations and prompts; model speed and performance; data access and security; and getting buy-in from important stakeholders. 

At the same time, while more non-technical users are getting involved, there is still a lack of pure technical expertise in-house, Sharma pointed out. “The way to connect all the different moving parts is still a skill that not that many developers have today,” he said. “So that’s a common challenge.”

However, many existing challenges can be overcome with tooling — platforms and services that help developers build and evaluate complex AI systems, Sharma pointed out. Developers can build this tooling internally or adopt third-party platforms or frameworks; however, Vellum found that nearly 18% of developers are defining prompts and orchestration logic without any tooling at all.

Sharma pointed out that “lack of technical expertise becomes easier when you have proper tooling that can guide you through the development journey.” In addition to Vellum, frameworks and platforms used by survey participants include LangChain, LlamaIndex, Langfuse, CrewAI and Voiceflow.

Evaluations and ongoing monitoring are critical

Another way to overcome common issues (including hallucinations) is to perform evaluations, or use specific metrics to test the correctness of a given response. “But despite that, [developers] are not doing evals as consistently as they should be,” said Sharma. 

Particularly when it comes to advanced agentic systems, enterprises need solid evaluation processes, he said. AI agents have a high degree of non-determinism, Sharma pointed out, as they call external systems and perform autonomous actions.

“People are trying to build fairly advanced systems, agentic systems, and that requires a large number of test cases and some sort of automated testing framework to make sure it performs reliably in production,” said Sharma. 

While some developers are taking advantage of automated evaluation tools, A/B testing and open-source evaluation frameworks, Vellum found that more than three-quarters are still doing manual testing and reviews. 

“Manual testing just takes time, right? And the sample size in manual testing is usually much lower than what automated testing can do,” said Sharma. “There might be a challenge in just the awareness of techniques, how to do automated, at-scale evaluations.”
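To illustrate what automated, at-scale evaluation looks like at its simplest, here is a minimal harness. The `generate` stub and the test cases are hypothetical, standing in for the AI system under test and a hand-labeled dataset, and exact-match scoring is just one of many possible metrics:

```python
# Minimal automated evaluation harness: run a test suite against a model
# and score correctness, instead of eyeballing outputs one by one.

def generate(prompt: str) -> str:
    # Stub standing in for the AI system under test; always answers "Paris",
    # so the harness below will catch the wrong answer on the second case.
    return "Paris"

TEST_CASES = [
    {"prompt": "Capital of France?", "expected": "Paris"},
    {"prompt": "Capital of Japan?", "expected": "Tokyo"},
]

def exact_match(output: str, expected: str) -> bool:
    """One simple correctness metric; real evals often add fuzzier scoring."""
    return output.strip().lower() == expected.strip().lower()

def run_evals() -> float:
    passed = sum(
        exact_match(generate(case["prompt"]), case["expected"])
        for case in TEST_CASES
    )
    score = passed / len(TEST_CASES)
    print(f"{passed}/{len(TEST_CASES)} passed ({score:.0%})")
    return score

if __name__ == "__main__":
    run_evals()
```

The advantage over manual review is purely mechanical: the same loop runs unchanged over two test cases or two thousand, on every model or prompt revision.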

Ultimately, he emphasized the importance of embracing a mix of systems that work symbiotically — from cloud to application programming interfaces (APIs). “Consider treating AI as just a tool in the toolkit and not the magical solution for everything,” he said.
