
Hugging Face submits open-source blueprint, challenging Big Tech in White House AI policy fight

In a Washington policy landscape increasingly dominated by calls for minimal AI regulation, Hugging Face is making a distinctly different case to the Trump administration: open-source and collaborative AI development may be America’s strongest competitive advantage.

The AI platform company, which hosts more than 1.5 million public models across diverse domains, has submitted its recommendations for the White House AI Action Plan, arguing that recent breakthroughs in open-source models demonstrate they can match or exceed the capabilities of closed commercial systems at a fraction of the cost.

In its official submission, Hugging Face highlights recent achievements like OlympicCoder, which outperforms Claude 3.7 on complex coding tasks while using just 7 billion parameters, and AI2’s fully open OLMo 2 models that match OpenAI’s o1-mini performance levels.

The submission comes as part of a broader effort by the Trump administration to gather input for its upcoming AI Action Plan, mandated by Executive Order 14179, officially titled “Removing Barriers to American Leadership in Artificial Intelligence,” which was issued in January. The Order, which replaced the Biden administration’s more regulation-focused approach, emphasizes U.S. competitiveness and reducing regulatory barriers to development.

Hugging Face’s submission stands in stark contrast to those from commercial AI leaders like OpenAI, which has lobbied heavily for light-touch regulation and “the freedom to innovate in the national interest,” while warning that America’s lead over China in AI capabilities is narrowing. OpenAI’s proposal emphasizes a “voluntary partnership between the federal government and the private sector” rather than what it calls “overly burdensome state laws.”

How open source could power America’s AI advantage: Hugging Face’s triple-threat strategy

Hugging Face’s recommendations center on three interconnected pillars that emphasize democratizing AI technology. The company argues that open approaches enhance rather than hinder America’s competitive position.

“The most advanced AI systems to date all stand on a strong foundation of open research and open source software — which shows the critical value of continued support for openness in sustaining further progress,” the company wrote in its submission.

Its first pillar calls for strengthening open and open-source AI ecosystems through investments in research infrastructure like the National AI Research Resource (NAIRR) and ensuring broad access to trusted datasets. This approach contrasts with OpenAI’s emphasis on copyright exemptions that would allow proprietary models to train on copyrighted material without explicit permission.

“Investment in systems that can freely be re-used and adapted has also been shown to have a strong economic impact multiplying effect, driving a significant percentage of countries’ GDP,” Hugging Face noted, arguing that open approaches boost rather than hinder economic growth.

Smaller, faster, better: Why efficient AI models could democratize the technology revolution

The company’s second pillar focuses on addressing resource constraints faced by AI adopters, particularly smaller organizations that can’t afford the computational demands of large-scale models. By supporting more efficient, specialized models that can run on limited resources, Hugging Face argues the U.S. can enable broader participation in the AI ecosystem.

“Smaller models that may even be used on edge devices, techniques to reduce computational requirements at inference, and efforts to facilitate mid-scale training for organizations with modest to moderate computational resources all support the development of models that meet the specific needs of their use context,” the submission explains.
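To make the efficiency argument concrete, here is a minimal sketch, not taken from the submission itself, of the pattern Hugging Face describes: a compact open-weight model served with 4-bit quantized inference through the company’s own transformers library. The repo id is a placeholder, and the quantization settings are illustrative assumptions rather than a recommended configuration.

```python
# Minimal sketch: quantized inference with a small open-weight model via the
# Hugging Face transformers API. The model id below is a placeholder for any
# compact open model, not a specific recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "org/small-open-model-7b"  # hypothetical repo id

# 4-bit quantization cuts memory use so the model fits on a single modest GPU.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPU/CPU memory
)

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Quantizing weights to 4 bits trades a small amount of accuracy for a large reduction in memory, which is what lets a model in the 7-billion-parameter class run on a single workstation GPU rather than a data-center cluster.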

On security—a major focus of the administration’s policy discussions—Hugging Face makes the counterintuitive case that open and transparent AI systems may be more secure in critical applications. The company suggests that “fully transparent models providing access to their training data and procedures can support the most extensive safety certifications,” while “open-weight models that can be run in air-gapped environments can be a critical component in managing information risks.”
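The air-gapped point maps to a concrete workflow: fetch a model’s files on a connected machine, carry them across the boundary, and load them with networking disabled. The sketch below is illustrative rather than drawn from the submission; it assumes the huggingface_hub and transformers libraries, and the repo id is a placeholder.

```python
# Minimal sketch of the offline pattern behind air-gapped deployment:
# download a model snapshot on a connected machine, transfer the directory,
# then load it on the isolated host without any network access.
import os
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Step 1 (connected machine): download all model files to a local directory.
local_dir = snapshot_download(repo_id="org/open-weight-model", local_dir="./model-snapshot")

# Step 2 (air-gapped machine): load strictly from the copied files, fully offline.
os.environ["HF_HUB_OFFLINE"] = "1"  # tell the hub client to operate offline
tokenizer = AutoTokenizer.from_pretrained(local_dir, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_dir, local_files_only=True)
```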

Big tech vs. little tech: The growing policy battle that could shape AI’s future

Hugging Face’s approach highlights growing policy divisions in the AI industry. While companies like OpenAI and Google emphasize speeding up regulatory processes and reducing government oversight, venture capital firm Andreessen Horowitz (a16z) has advocated for a middle ground, arguing for federal leadership to prevent a patchwork of state regulations while focusing regulation on specific harms rather than model development itself.

“Little Tech has an important role to play in strengthening America’s ability to compete in AI in the future, just as it has been a driving force of American technological innovation historically,” a16z wrote in its submission, using language that aligns somewhat with Hugging Face’s democratization arguments.

Google’s submission, meanwhile, focused on infrastructure investments, particularly addressing “surging energy needs” for AI deployment—a practical concern shared across industry positions.

Between innovation and access: The race to influence America’s AI future

As the administration weighs competing visions for American AI leadership, the fundamental tension between commercial advancement and democratic access remains unresolved. OpenAI’s vision of AI development prioritizes speed and competitive advantage through a centralized approach, while Hugging Face presents evidence that distributed, open development can deliver comparable results while spreading benefits more broadly.

The economic and security arguments will likely prove decisive. If administration officials accept Hugging Face’s assertion that “a robust AI strategy must leverage open and collaborative development to best drive performance, adoption, and security,” open-source development could find a meaningful place in the national strategy. But if concerns about China’s AI capabilities dominate, OpenAI’s calls for minimal oversight might prevail.

What’s clear is that the AI Action Plan will set the tone for years of American technological development. As Hugging Face’s submission concludes, both open and proprietary systems have complementary roles to play — suggesting that the wisest policy might be one that harnesses the unique strengths of each approach rather than choosing between them. The question isn’t whether America will lead in AI, but whether that leadership will bring prosperity to the few or innovation for the many.
