HPE, Nvidia broaden AI infrastructure lineup

“Accelerated by 2 NVIDIA H100 NVL, [HPE Private Cloud AI Developer System] includes an integrated control node, end-to-end AI software that includes NVIDIA AI Enterprise and HPE AI Essentials, and 32TB of integrated storage providing everything a developer needs to prove and scale AI workloads,” Corrado wrote.

In addition, HPE Private Cloud AI includes support for new Nvidia GPUs and blueprints that deliver proven and functioning AI workloads like data extraction with a single click, Corrado wrote.

HPE data fabric software

HPE has also extended support for its Data Fabric technology across the Private Cloud offering. The Data Fabric aims to create a unified, consistent data layer that spans diverse locations, including on-premises data centers, public clouds, and edge environments, to provide a single, logical view of data regardless of where it resides, HPE said.

“The new release of Data Fabric Software Fabric is the data backbone of the HPE Private Cloud AI data Lakehouse and provides an Iceberg interface for PC-AI users to data hosted throughout their enterprise. This unified data layer allows data scientists to connect to external stores and query that data as Iceberg-compliant data without moving the data,” wrote HPE’s Ashwin Shetty in a blog post. “Apache Iceberg is the emerging format for AI and analytical workloads. With this new release Data Fabric becomes an Iceberg endpoint for AI engineering. This makes it simple for AI engineering data scientists to easily point to the data lakehouse data source and run a query directly against it. Data Fabric takes care of metadata management, secure access, joining files or objects across any source on-premises or in the cloud in the global namespace.”
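HPE has not published client code for this interface, so the following is only a rough sketch of what querying an Iceberg-compliant endpoint typically looks like with the open-source PyIceberg client. The catalog URI, credential, and table name are hypothetical placeholders for illustration, not HPE Data Fabric specifics.

```python
# Minimal sketch: querying an Iceberg-compliant catalog without copying data.
# The endpoint URI, token, and table name below are hypothetical placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",  # arbitrary local name for this catalog connection
    **{
        "uri": "https://datafabric.example.com/iceberg",  # hypothetical endpoint
        "token": "REPLACE_ME",                            # hypothetical credential
    },
)

# Load table metadata only; the underlying data files stay where they are stored.
table = catalog.load_table("sales.transactions")  # hypothetical namespace.table

# Push down a filter and a column projection, then materialize a small result set.
df = (
    table.scan(
        row_filter="region = 'EMEA'",
        selected_fields=("order_id", "amount", "region"),
    )
    .to_pandas()
)
print(df.head())
```

The point of the pattern is that only metadata and the filtered result move over the wire; the files themselves remain in place in the lakehouse.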

In addition, HPE Private Cloud AI now supports pre-validated Nvidia blueprints to help customers implement support for AI workloads. 

AI infrastructure optimization 

Aiming to help customers manage their AI infrastructure, HPE enhanced its OpsRamp management package, which monitors servers, networks, storage, databases, and applications. The company added GPU optimization support to OpsRamp, meaning the platform can now manage AI-native software stacks and deliver full-stack observability into the performance of training and inference workloads running on large Nvidia-accelerated computing clusters, HPE stated.
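OpsRamp's collectors are proprietary, so the sketch below only illustrates the kind of per-GPU telemetry that such full-stack observability builds on, using NVIDIA's NVML Python bindings (the pynvml package). It is an assumption for illustration, not OpsRamp's implementation.

```python
# Rough sketch of per-GPU telemetry of the sort an observability agent might poll.
# Uses NVIDIA's NVML Python bindings (pynvml); illustrative only, not OpsRamp code.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # GPU / memory utilization (%)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)           # bytes used and total
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(
            f"gpu{i} {name}: util={util.gpu}% "
            f"mem={mem.used / mem.total:.0%} temp={temp}C"
        )
finally:
    pynvml.nvmlShutdown()
```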


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


What is Nvidia Dynamo, and why does it matter to enterprises?

It uses disaggregated serving to separate the processing and generation phases of large language models (LLMs) on different GPUs, which allows each phase to be optimized independently for its specific needs and ensures maximum GPU resource utilization, the chipmaker explained. The efficiency gain is made possible as Dynamo has

Read More »
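The Dynamo excerpt above describes disaggregated serving: running the prompt-processing (prefill) and token-generation (decode) phases on separate GPU pools so each can be sized and tuned independently. The toy sketch below only illustrates that routing idea; the class and method names are invented for illustration and are not Dynamo's API.

```python
# Toy illustration of disaggregated LLM serving: prefill and decode live in
# separate worker pools. All names here are invented; this is not NVIDIA Dynamo's API.
from dataclasses import dataclass

@dataclass
class KVCacheHandle:
    request_id: str
    prefill_worker: str   # where the KV cache currently lives

class PrefillWorker:
    def __init__(self, name: str):
        self.name = name

    def prefill(self, request_id: str, prompt: str) -> KVCacheHandle:
        # A real system runs the full prompt through the model once here and
        # keeps the resulting KV cache; we just return a handle to it.
        print(f"[{self.name}] prefill {request_id}: {len(prompt)} chars")
        return KVCacheHandle(request_id, self.name)

class DecodeWorker:
    def __init__(self, name: str):
        self.name = name

    def decode(self, handle: KVCacheHandle) -> str:
        # A real system pulls (or receives) the KV cache from the prefill worker
        # and then generates output tokens one at a time.
        print(f"[{self.name}] decode {handle.request_id} using cache from {handle.prefill_worker}")
        return "generated text (placeholder)"

# Independently sized pools: prefill is compute-bound, decode is memory-bound.
prefill_pool = [PrefillWorker("prefill-0"), PrefillWorker("prefill-1")]
decode_pool = [DecodeWorker("decode-0"), DecodeWorker("decode-1"), DecodeWorker("decode-2")]

def serve(request_id: str, prompt: str) -> str:
    handle = prefill_pool[hash(request_id) % len(prefill_pool)].prefill(request_id, prompt)
    return decode_pool[hash(request_id) % len(decode_pool)].decode(handle)

print(serve("req-1", "Explain disaggregated serving in one sentence."))
```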

Cisco, Nvidia team to deliver secure AI factory infrastructure

Hypershield uses AI to dynamically refine security policies based on application identity and behavior. It automates policy creation, optimization, and enforcement across workloads. In addition, Hypershield promises to let organizations autonomously segment their networks when threats are a problem, gain exploit protection without having to patch or revamp firewalls, and

Read More »

Offshore service leaders seek leverage after a lost decade

Global services firms have been caught in a storm. Engineering, procurement and construction (EPC) focused companies have been squeezed by tough contract terms, battered by inflation and haunted by legacy commitments. After a lost decade following the 2014 oil price crash, these service providers are seeking ways out of the pit – aiming to reclaim bargaining power and prop up finances. Could the tide be turning in their favour? Rystad Energy senior supply chain analyst Chinmayi Teggi pointed to the 2014 oil price crash as setting service companies up for difficulties. “There was not a lot of negotiation, pricing power essentially always was with operators,” she said. Signs indicate a turnaround. Service companies are having more challenging conversations with operators, including new areas like carbon capture and storage (CCS) and offshore wind. New opportunities are providing new leverage to these established players. Old model The power imbalance often led EPC contractors to agree to lump-sum contracts. These shifted the risk from operators to service providers. “So if anything happens, if anything goes wrong, then it’s all on the EPCs,” Teggi said. “They’ll never be able to get good margins carrying that sort of risk. Those are the kinds of contracts that these companies have been working through.” These contracting challenges worsened other difficulties these companies may face. For instance, Petrofac faced challenges at its Thai Oil Clean Fuels project, won – with Samsung and Saipem – on a lump-sum basis in 2018. A corruption scandal involving Unaoil has also cast a shadow over Petrofac’s plans. In December, the EPC company launched a restructuring process and expects to see results in April. Wood Group is another EPC firm facing challenges. It acquired Amec Foster Wheeler in 2017, but had to settle various issues, including legacy contracts and corruption investigations. The Aberdeen-headquartered

Read More »

Vitol to Acquire Stakes in Eni Assets in Africa for $1.65B

Eni SpA has agreed to bring in Vitol as a partner in several producing and undeveloped oil and gas assets in Cote d’Ivoire and the Republic of the Congo. The $1.65 billion transaction will see Vitol obtain a 30 percent stake in the Ivory Coast’s Baleine field and 25 percent in Congo LNG. Eni owns a 77.25 percent interest in Baleine and 65 percent in Congo LNG. Elsewhere in West Africa, Vitol and Eni are already co-venturers in Ghana’s OCTP and Block 4 projects. “This transaction is in line with Eni’s strategy aimed at optimizing upstream activities, through a rebalancing of the portfolio that provides for the early valorization of exploration discoveries through a reduction of participations in them (the so-called dual exploration model)”, Italy’s state-backed Eni said in an online statement Wednesday. “Vitol has had an upstream presence in West Africa region for many years. In addition, it has a portfolio of infrastructure and downstream related investments.” The parties target the completion of the transaction “as soon as practicable”, the statement said. Late last year Eni put onstream the second phase of Baleine, raising the field’s capacity to 60,000 barrels of oil per day (bopd) and 70 million cubic feet of associated gas per day (MMcfd). “Phase 2 will see the Floating Production, Storage and Offloading Unit Petrojarl Kong deployed alongside the Floating Storage and Offloading Unit Yamoussoukro for the export of oil, while 100 percent of the processed gas will supply the local energy demand through the connection with the pipeline built during the project’s Phase 1”, Eni said in a press release December 28. “This achievement further consolidates Côte d’Ivoire’s role as a producing country on the global energy scenario, strengthening access to energy on a national scale”. Eni said then phase 3 was under study and expected to

Read More »

‘Purchaser caused’ certificates are key to driving renewable energy growth

Jim Boyle is the CEO and founder of Sustainability Roundtable, a Boston-based strategic corporate sustainability advisory firm. With the new administration turning away from clean energy, the industry must hold itself to the highest standards to achieve the needed level of impact. The credibility and impact of the energy attribute certificate (EAC) market depend on consistently and accurately centering purchaser causation — the idea that certificate purchases directly contribute to new generating sources — in corporate clean energy procurement. In my firm’s work with Fortune 500 companies, global growth companies and leading U.S. cities, we have heard directors of sustainability voice three imperatives: Corporate sustainability practitioners must understand the difference between unbundled, contributing and purchaser-caused EACs. EAC purveyors must avoid any possible deception in their marketing. Regulators and standard-setters should recognize and promote a pre-financing commitment to the bundled cost of the renewable project development and the EACs the project produces as the highest standard of renewable energy procurement. Only by centering purchaser causation can corporate renewable energy procurement contribute a crucial portion of the new renewable energy needed to meet COP 28’s UAE Consensus goal of tripling global renewable energy capacity by 2030. Label EACs accurately and by impact To maximize corporate energy procurement’s decarbonization impact, the industry must distinguish EACs by the degree to which they cause new renewable energy capacity. Clarifying this foundational confusion begins with the basics. EACs are tradable certificates representing one megawatt-hour of electricity generated from a renewable energy resource. The World Resources Institute’s Greenhouse Gas (GHG) Protocol has enabled companies to mitigate Scope 2 emissions with them for years. But, as the below breakdown shows, they vary widely in how directly they support new renewable energy. Unbundled EACs regularly come from projects developed years before procurement. They contribute zero to the development

Read More »

DTE Energy seeks proposals for 450 MW of energy storage

Dive Brief: DTE Energy is seeking proposals by June 27 for new standalone energy storage projects totaling about 450 MW, the Michigan utility said Wednesday.  Eligible energy storage projects must be interconnected to the Midcontinent Independent System Operator or distribution-level transmission system, located in Michigan, and reach commercial operation by the end of 2028, DTE said. The request for proposals supports DTE’s efforts to deploy nearly 3 GW of energy storage by 2042, as outlined in its most recent integrated resource plan, including 240 MW to be deployed by 2027 and 520 MW more from 2028 to 2032. Dive Insight: DTE expects to execute contracts for projects in its latest energy storage procurement by the first quarter of 2026, it said Wednesday. In May, DTE issued a similar request for proposals for 120 MW of standalone energy storage resources. That RFP also required projects to be located in Michigan and interconnected to the MISO or distribution grids. DTE said the procurements are driven by its growing wind and solar fleet and by Michigan’s carbon-free power law, which includes renewable portfolio targets of 50% by 2030 and 60% by 2035. The law also requires state utilities to submit the necessary applications to the Michigan Public Service Commission to meet their share of the law’s 2,500-MW energy storage target by the end of 2029. “With the growth of DTE’s renewable energy generation fleet, energy storage facilities are imperative to Michigan’s clean energy transformation,” DTE Energy Vice President for Clean Energy and Acquisitions Chuck Conlen said in a statement. DTE already operates two Michigan battery installations colocated with solar power plants and the 2,292-MW Ludington pumped-hydropower plant it co-owns with Consumers Energy, it said Wednesday. In addition, its 14-MW/56-MWh Slocum BESS installation, which DTE describes as a pilot project, recently began commercial operations,

Read More »

ORLEN Commits More LNG for Ukraine

Naftogaz Group has secured an additional supply of about 100 million cubic meters (3.53 billion cubic feet) of natural gas from ORLEN SA. “The gas will be transported to Ukraine in April and will be used to create strategic gas reserves, which are crucial for Ukraine’s energy security and for ensuring the stable passage of the next heating season”, Ukrainian state-owned oil and gas company Naftogaz said in an online statement. The new contract raises committed gas deliveries for Ukraine under a recent liquefied natural gas (LNG) cooperation agreement between the companies to around 200 million cubic meters. For the second contract, Poland’s majority state-owned ORLEN has procured an LNG cargo from the United States and will regasify this for transfer on the Polish-Ukrainian border, ORLEN said separately. “Stable gas supplies remain our top priority”, Naftogaz acting chair Roman Chumak said. “Cooperation with ORLEN expands Ukraine’s LNG import capacity and enhances energy security. “We are diversifying supply sources to ensure a reliable and accessible gas supply, especially amid ongoing Russian attacks on our infrastructure”. On March 7, Naftogaz said its gas production infrastructure had been attacked “for the seventeenth time, causing damage to critical gas production sites”. “We are working to mitigate the aftermath of the strikes and assess the damage”, Chumak said then. “Naftogaz Group is taking all necessary steps to restore operations at the facilities damaged in the attack. We are doing, and will continue to do, everything possible to ensure the country’s gas supply despite ongoing threats”. ORLEN chief executive and president Ireneusz Fąfara said in comments about the new contract, “We continue to develop our trading expertise and leverage our experience in the U.S. market, enabling us to provide attractive commercial terms to our partners”. “At the same time, we are proud to contribute to Ukraine’s

Read More »

Vopak to Invest Additional $1B in Gas, Industrial Infrastructure

Royal Vopak said it has committed an additional $1.09 billion (EUR 1 billion) by 2030 to grow its gas and industrial footprint, “given the positive outlook and market demand for infrastructure”. The majority of the Rotterdam, Netherlands-based company’s investments have been in gas infrastructure, as well as growth markets like India and China. The additional investments will be underpinned by customer commitments, it said. The company’s total investments in gas and industrial infrastructure by 2030 are expected to be around $2.18 billion (EUR 2 billion), it noted. Vopak said it aims to invest in attractive and accretive growth projects in gas, industrial and energy transition infrastructure that support a portfolio operating cash return of above 13 percent. Further, the company said its ambition to invest $1.09 billion (EUR 1 billion) in energy transition infrastructure by 2030 remains unchanged. It plans to focus on infrastructure solutions for low-carbon fuels and sustainable feedstocks, ammonia as a hydrogen carrier, liquid carbon dioxide (CO2), and battery energy storage. Part of the company’s ambition to invest in energy transition infrastructure is to repurpose a portion of the existing oil capacity in the hub locations for low carbon fuels and feedstocks, it said. FID on Thailand Tank Infrastructure Project Meanwhile, Vopak said it has reached a positive final investment decision to construct 160,000 cubic-meter tank infrastructure in Map Ta Phut, Rayong, to support the import of U.S. ethane into Thailand. Vopak’s joint venture entity Thai Tank Terminal signed a 15-year contract with PTT Global Chemical Public Company Limited (GC) for the storage and handling of ethane in Thailand, it said in a separate news release. The tank infrastructure project is backed by a long-term contract and is expected to be completed in 2029, according to the release. Ethane will serve as a long-term feedstock supply for petrochemical crackers,

Read More »

Critical vulnerability in AMI MegaRAC BMC allows server takeover

“In disruptive or destructive attacks, attackers can leverage the often heterogeneous environments in data centers to potentially send malicious commands to every other BMC on the same management segment, forcing all devices to continually reboot in a way that victim operators cannot stop,” the Eclypsium researchers said. “In extreme scenarios, the net impact could be indefinite, unrecoverable downtime until and unless devices are re-provisioned.” BMC vulnerabilities and misconfigurations, including hardcoded credentials, have been of interest for attackers for over a decade. In 2022, security researchers found a malicious implant dubbed iLOBleed that was likely developed by an APT group and was being deployed through vulnerabilities in HPE iLO (HPE’s Integrated Lights-Out) BMC. In 2018, a ransomware group called JungleSec used default credentials for IPMI interfaces to compromise Linux servers. And back in 2016, Intel’s Active Management Technology (AMT) Serial-over-LAN (SOL) feature which is part of Intel’s Management Engine (Intel ME), was exploited by an APT group as a covert communication channel to transfer files. OEM, server manufacturers in control of patching AMI released an advisory and patches to its OEM partners, but affected users must wait for their server manufacturers to integrate them and release firmware updates. In addition to this vulnerability, AMI also patched a flaw tracked as CVE-2024-54084 that may lead to arbitrary code execution in its AptioV UEFI implementation. HPE and Lenovo have already released updates for their products that integrate AMI’s patch for CVE-2024-54085.
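Because patch availability depends on each server vendor, one practical follow-up for operators is inventorying BMC firmware versions so they can be checked against the vendor's patched release. The sketch below uses the standard Redfish REST API via the requests library; the BMC address and credentials are hypothetical placeholders, and manager paths vary by vendor, so the code enumerates the Managers collection rather than assuming an ID.

```python
# Sketch: read each BMC manager's model and firmware version over Redfish so it
# can be compared against the vendor's patched release. Host and credentials are
# hypothetical placeholders; this is illustrative, not a vendor-specific tool.
import requests

BMC = "https://bmc.example.internal"   # hypothetical BMC address
AUTH = ("admin", "REPLACE_ME")         # hypothetical credentials

# verify=False is common for BMCs with self-signed certificates; prefer a CA bundle in practice.
resp = requests.get(f"{BMC}/redfish/v1/Managers", auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()

# Walk the Managers collection and print identifying details for each controller.
for member in resp.json().get("Members", []):
    mgr = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False, timeout=10).json()
    print(mgr.get("Model"), mgr.get("FirmwareVersion"))
```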

Read More »


Schneider Electric Adds Data Center and Microgrid Testing Labs to Andover, MA Global R&D Center

Schneider Electric, a global leader in energy management and automation, has established its Global Innovation Hubs as key centers for technological advancement, collaboration, and sustainable development. These hub facilities serve as ecosystems where cutting-edge solutions in energy efficiency, industrial automation, and digital transformation are designed, tested, and deployed to address the world’s most pressing energy and sustainability challenges. Energy Management and Industrial Automation Focus Strategically located around the world, Schneider Electric’s Global Innovation Hubs are positioned to drive regional and global innovation in energy management and industrial automation. The hubs focus on developing smart, connected, and sustainable solutions across various sectors, including data centers, smart buildings, industrial automation, and renewable energy. Key aspects of the Schneider Global Innovation Hubs include: Collaboration and Co-Innovation: Partnering with startups, industry leaders, and research institutions to accelerate innovation. Fostering an open ecosystem where ideas can be rapidly developed and tested. Digital Transformation and Automation: Leveraging IoT, AI, and cloud technologies to enhance energy efficiency. Implementing digital twin technology for real-time monitoring and predictive maintenance. Sustainability and Energy Efficiency: Developing solutions that contribute to decarbonization and net-zero emissions. Creating energy-efficient systems for buildings, industries, and critical infrastructure. Customer-focused Innovation: Offering live demonstrations, simulation environments, and test labs for customers. Customizing solutions to meet specific industry challenges and regulatory requirements. Schneider’s Andover R&D Lab Highlights While there are 11 hubs worldwide to give the global customer base more convenient locations where they can evaluate Schneider products, the new lab facilities have also been added to one of the company’s five global R&D locations. The selected location is co-located with Schneider’s US research labs in Andover, Massachusetts. With the addition of these two new labs, there are now 41 labs located in Andover. Over the last year, Schneider Electric has invested approximately $2.4 billion in R&D. The

Read More »

Executive Roundtable: Probing Data Center Power Infrastructure and Energy Resilience in 2025

Ryan Baumann, Rehlko: Industry leaders are taking bold steps to secure long-term energy availability by embracing innovative backup power solutions, forming strategic partnerships, and exploring alternative energy sources. To overcome the challenges ahead, collaboration is key—operators, utilities, OEMs, and technology providers must come together, share insights, and create customized solutions that keep energy both reliable and sustainable as the landscape evolves. One of the most significant strategies is the growing use of alternative energy sources like hydrogen, natural gas, and even nuclear to ensure a steady supply of power. These options provide a more flexible, reliable backup to grid power, especially in markets with fluctuating energy demands or limited infrastructure. Emergency generator systems, when equipped with proper emissions treatment, can also support the grid through peak shaving or interruptible rate programs with utilities. Hydrogen fuel cells, in particular, are becoming a game-changer for backup power. Offering zero-emission, scalable, and efficient solutions, hydrogen is helping data centers move toward their carbon-neutral goals while addressing energy reliability. When integrated into a microgrid, hydrogen fuel cells create a cohesive energy network that can isolate from the main grid during power outages, ensuring continuous energy security for critical infrastructure like data centers. Additionally, natural gas Central Utility Plants (CUPs) are emerging as a key bridging power source, helping large data centers in grid-constrained regions maintain operations until permanent utility power is available. Smart energy solutions, including customized paralleling systems, allow emergency assets to be grid-intertied, enabling utilities and communities to share power burdens during peak periods. By embracing these innovative solutions and fostering collaboration, the industry not only ensures reliable power for today’s data centers but also paves the way for a more sustainable and resilient energy future. Next:  Cooling Imperatives for Managing High-Density AI Workloads 

Read More »

From Billions to Trillions: Data Centers’ New Scale of Investment

With Apple’s announcement to spend $500 billion over the next four years briefly overshadowing the $500 billion joint venture announcement of the Stargate project with the federal government, you can almost be forgiven for losing track of the billions of dollars in data center and tech spending announced by other industry players. Apple’s Four-Year, $500 Billion Spend Resonates with Tech The company’s data center infrastructure will see some collateral improvement to support future AI efforts, as a percentage of the funding will be dedicated to enhancing their existing data center infrastructure, though as yet there has been no public discussion of new data center facilities. Apple has committed to spending over $500 billion in the U.S. during the next four years.  This investment aims to bolster various sectors, including AI infrastructure, data centers, and research and development (R&D) in silicon engineering. The initiative also encompasses expanding facilities and teams across multiple states, such as Texas, California, Arizona, Nevada, Iowa, Oregon, North Carolina, and Washington. The spend will be a combination of investments in new infrastructure components along with the expansion of existing facilities. What has been publicly discussed includes the following: New AI Server Manufacturing Facility in Houston, Texas A significant portion of this investment is allocated to constructing a 250,000-square-foot manufacturing facility in Houston, Texas. Scheduled to open in 2026, this facility will produce servers designed to power Apple Intelligence, the company’s AI system. These servers, previously manufactured abroad, will now be assembled domestically, enhancing energy efficiency and security for Apple’s data centers. The project is expected to create thousands of jobs in the region. Expansion of Data Center Capacity Apple plans to increase its data center capacity in several states, including North Carolina, Iowa, Oregon, Arizona, and Nevada. This expansion aims to support the growing demands of AI

Read More »

Why Geothermal Energy Could Be a Behind-the-Meter Game Changer for Data Center Power Demand

By colocating data centers with geothermal plants, operators could tap into a clean, baseload power source that aligns with their sustainability goals. Operators could reduce transmission losses and enhance energy efficiency. Meanwhile, the paper points out that one of the most promising aspects of geothermal energy is its scalability. The Rhodium Group estimates that the U.S. has the technical potential to generate up to 5,000 GW of geothermal power—far exceeding the current and projected needs of the data center industry. With the right investments and policy support, Rhodium contends that geothermal could become a cornerstone of the industry’s energy strategy. The researchers project that 55-64% of the anticipated growth in hyperscale data center capacity could be met with behind-the-meter geothermal power, representing 15-17 GW of new capacity. In 13 of the 15 largest data center markets, geothermal could meet 100% of projected demand growth using advanced cooling technologies. Even in less favorable markets, geothermal could still meet at least 15% of power needs. Challenges and Opportunities for Geothermal-Driven Data Center Siting Strategies The Rhodium Group report explores two potential siting strategies for data centers: one that follows historical patterns of clustering near population centers and fiber-optic networks, and another that prioritizes proximity to high-quality geothermal resources. In the latter scenario, geothermal energy could easily meet all projected data center load growth by the early 2030s. Geothermal heat pumps also offer an additional benefit by providing efficient cooling for data centers, further reducing their overall electric load. This dual application of geothermal energy—for both power generation and cooling—could significantly enhance the sustainability and resilience of data center operations. However, despite its potential, geothermal energy faces several challenges that must be addressed to achieve widespread adoption. High drilling costs and technical risks associated with EGS development have historically deterred investment. (The report

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet as a non-tech company it has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
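The excerpt above mentions using an LLM as a judge to grade agent output. Below is a minimal sketch of that pattern against an OpenAI-compatible chat API; the judge model name, rubric, and JSON score format are assumptions for illustration, not any specific vendor's evaluation method.

```python
# Minimal LLM-as-judge sketch: ask one model to score another model's answer
# against a rubric and return structured JSON. Model name, rubric, and score
# format are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Answer: {answer}
Score the answer from 1 (poor) to 5 (excellent) for correctness and completeness.
Reply with JSON only: {{"score": <int>, "reason": "<one sentence>"}}"""

def judge(question: str, answer: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of judge model
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

print(judge("What year did HPE acquire Cray?", "HPE completed its acquisition of Cray in 2019."))
```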

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »