
Eureka’s robotic vacuum can detect liquid stains and cut itself free of tangles

Eureka, a household cleaning pioneer with over a century of history, has unveiled its Eureka J15 Max Ultra robotic vacuum.

While such robotic vacuums are plentiful now, this one has cool features like being able to detect and adapt to transparent liquids. It’s another example of a 115-year-old company showing up at CES 2025 with new technology.

It also introduces an anti-tangle mechanism that cuts the robot free when hair and carpet fibers wrap around its brushes. It has an extendable side brush and mop for reaching the tightest corners, enhanced obstacle-crossing capabilities to glide smoothly over thresholds, and a powerful 22,000 Pa suction system for deep cleaning. My family had a Eureka vacuum when I was growing up. But it wasn’t autonomous.

Eureka’s ScrubExtend feature.

The Eureka J15 Max Ultra has a self-cleaning base station and the FlexiRazor technology, which effortlessly cuts through tangles to minimize maintenance.

Pet owners have also praised the series’ pet-friendly design, which avoids pet waste and allows remote video interaction with pets (with user permission), ensuring a stress-free cleaning experience for both pets and owners.

IntelliView AI 2.0 – transparent liquid stain detection

Eureka’s IntelliView AI 2.0

Eureka’s proprietary IntelliView AI technology, first introduced with the earlier J15 Pro Ultra, provides a more intelligent way to manage wet messes. When it encounters one, it commands the robot to automatically rotate its body, prioritize mop cleaning, and lift the roller brush to prevent liquid from entering the dustbin. However, transparent liquids could still be missed because ambient light interferes with the robot’s vision sensors.

The Eureka J15 Max Ultra overcomes this limitation with IntelliView AI 2.0, an advanced system that pairs an infrared vision system with an FHD vision sensor. The combination lets the vacuum generate two complementary views in real time: high-definition images of objects, and images of their surface structure that are largely unaffected by ambient light or lighting variations.

These images are processed by powerful AI algorithms trained to identify subtle differences in surface reflections and texture, enabling the robot to detect liquids clearly, even transparent ones.
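Eureka has not published how IntelliView AI 2.0 is implemented, but the general idea of fusing a lighting-robust infrared view with a color view can be sketched in a few lines. The snippet below is a hypothetical, hand-rolled heuristic, not Eureka’s trained model: it scores an image patch as “liquid-like” when the infrared view shows an unusually smooth surface and the color view shows bright, desaturated specular highlights. The `liquid_score` helper and all thresholds are illustrative assumptions.

```python
# Hypothetical sketch of dual-view liquid detection; Eureka's actual
# IntelliView AI 2.0 algorithms are not public.
import numpy as np


def liquid_score(ir_patch: np.ndarray, rgb_patch: np.ndarray) -> float:
    """Return a 0..1 score for how 'liquid-like' a floor patch looks.

    ir_patch:  2-D array of infrared intensities (surface-structure cue).
    rgb_patch: H x W x 3 array of color intensities, 0-255 (reflection cue).
    """
    # Cue 1: a liquid film flattens fine surface texture in the IR view,
    # so low local gradient magnitude suggests a smooth, wet surface.
    ir = ir_patch.astype(float)
    texture = np.abs(np.diff(ir, axis=0)).mean() + np.abs(np.diff(ir, axis=1)).mean()
    smoothness = 1.0 / (1.0 + texture)  # near 1.0 for flat, film-like patches

    # Cue 2: liquids produce bright but desaturated specular highlights
    # in the color view; count the fraction of such pixels.
    rgb = rgb_patch.astype(float) / 255.0
    brightness = rgb.max(axis=2)
    saturation = brightness - rgb.min(axis=2)
    specular = float(((brightness > 0.85) & (saturation < 0.15)).mean())

    # Weighted blend of the two cues; weights are arbitrary for illustration.
    return float(np.clip(0.6 * smoothness + 0.4 * specular, 0.0, 1.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A textured, randomly lit "dry" patch vs. a smooth, glossy "wet" patch.
    dry = liquid_score(rng.integers(0, 255, (32, 32)), rng.integers(0, 255, (32, 32, 3)))
    wet = liquid_score(np.full((32, 32), 120), np.full((32, 32, 3), 240))
    print(f"dry-looking patch: {dry:.2f}, wet-looking patch: {wet:.2f}")
```

In the shipping product, that classification is done by trained AI models rather than fixed thresholds, and the result triggers the behaviors described above: rotating the body, leading with the mop, and lifting the roller brush.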

Enhanced side brushes – extendable and anti-tangle tech

Eureka robotic vacuum’s DragonClaw Side Brush.

To tackle even the tightest corners and smallest nooks, the Eureka J15 Max Ultra features an advanced dual extension system. This system combines the widely acclaimed ScrubExtend mop extension technology from the J15 Pro Ultra with the newly introduced SweepExtend.

Together, these innovations enable the mop and side brush to automatically extend when detecting corners and edges, ensuring thorough cleaning coverage, even in the most hard-to-reach spaces.

Eureka takes anti-tangle innovation a step further with the introduction of the DragonClaw Side Brush. Unlike traditional side brushes, whose bristle arms are arranged in an equilateral triangle and prone to tangling, the DragonClaw uses a V-shaped design that leverages centrifugal force during rotation to actively shed wrapped hair, improving anti-tangle performance and easing maintenance.

Enhanced obstacle-crossing and suction power

Eureka’s robotic vacuum can cross obstacles.

With enhanced power and agility, the Eureka J15 Max Ultra delivers 22,000 Pa of suction power, a 35% increase over its predecessor, for deep and thorough cleaning. Equipped with ObstaCross Technology, the robot climbs standard thresholds up to 1.18 inches and complex double-layer thresholds up to 1.57 inches, allowing it to transition smoothly across diverse floor types and obstacles.

Availability and pricing

The Eureka J15 Max Ultra is expected to be available in June 2025, priced at $1,300. The initial sales wave will kick off in the United States, Germany, France, Italy, and Spain.

Eureka also introduced the J15 Ultra, an entry-level model in the J15 Series, featuring 19,000 Pa suction power, FlexiRazor Technology, ScrubExtend, and the All-in-One Base Station. The J15 Ultra is set to launch in March 2025 at a price of $800. For more information, visit Eureka’s official website.

Founded in 1909 in Detroit, Michigan, Eureka offers a full line of vacuum cleaners, including uprights, canisters, sticks, handhelds, cordless, and robot vacuum cleaners.
