
Over 300 jobs at risk at SSE as renewables arm bears the brunt


There are 300 jobs at risk in the UK and Ireland as SSE and its renewables business launch a consultation on redundancies.

Unite the Union claims that more than 150 of the jobs at risk come from the “extremely profitable” SSE Renewables, which operates the Viking onshore wind farm.

Among the jobs reportedly at risk are critical support staff for control rooms and those working in maintenance.

Simon Coop, national officer for the union, said: “Staffing levels have been a major issue for Unite before these redundancies were announced, and this will make the situation much worse as our members working in the renewables space are already overstretched and being asked to work more and more hours.

“Their voices must be heard, and we will ensure that this happens.

“Unite is calling on SSE to reconsider its decision.”

Additionally, the union claims that workers have “complained about already being overworked due to there not being enough staff and have been unable to take proper breaks or time off”.

SSE’s renewables output increased by around 17% year-on-year in 2024. Generation output from SSE Renewables increased 26% in the first nine months to the end of December, compared to the same period in the previous year.

The business expects the first stage of its 3.6GW Dogger Bank wind development to come online this year.

The project is set to be the world’s largest fixed-bottom offshore wind farm.

SSE holds a 40% stake in the project, alongside Equinor (OSL: EQNR) with 40% and Eni (IT: ENI) with 20%.

However, its Berwick Bank wind farm is still awaiting approval by the Scottish Government and missed out on the opportunity to take part in last year’s government funding round as a result.

Once operational, Berwick Bank will also be one of the largest projects of its type on the planet.

Sharon Graham, general secretary for the union, said: “SSE’s renewables operation is already extremely profitable and set to become even more so as the demand for renewables increases.

“The threat of job losses is a cynical attempt to further boost the company’s profits and is not in the interests of workers or consumers.

“Unite will not stand by and watch these workers lose their jobs while shareholders and bosses profit. They have the full support of the union throughout this consultation process.”

The firm is set to release its full-year results for the 12 months to the end of March 2025 on 21 May this year.

Adjusted operating profit for SSE Renewables increased by 287% to £335.6m from £86.8m in the first half of the year, the firm reported in November. 
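As a quick sanity check of the reported growth figure, the percentage follows directly from the two half-year numbers above (a minimal sketch; the inputs are only the figures quoted in this article):

```python
# Verify the stated ~287% rise in SSE Renewables' adjusted operating profit.
prior_half_gbp_m = 86.8     # first half of the prior year, GBP millions
current_half_gbp_m = 335.6  # reported first half, GBP millions

increase_pct = (current_half_gbp_m - prior_half_gbp_m) / prior_half_gbp_m * 100
print(f"Increase: {increase_pct:.1f}%")  # -> Increase: 286.6%, i.e. ~287%
```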

During the first half of last year, SSE Renewables delivered its onshore Viking wind farm in Shetland, while its Seagreen offshore wind farm achieved commercial operations, adding to the business’s profits.

An SSE spokesperson responded to Unite’s comments: “After a period of sustained growth, we’re undertaking an efficiency review to ensure we continue to operate in the most efficient and effective way possible into the future.

“We have informed colleagues that this will unfortunately lead to reduced headcount in some parts of our business.

“We understand this process will be difficult for our teams, and we’ll be consulting trade unions and keeping colleagues informed throughout.”


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco admins urged to patch IOS, IOS XE devices

“It requires an authenticated user, so at least it’s not an unauthenticated RCE (remote code execution),” said Shipley. The vulnerability has a high CVSS score of 7.7, “but [it’s] not the worst we’ve seen of late.” Ed Dubrovsky, chief operating officer of US-based incident response firm Cypfer, also noted that a
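For context on where that score sits, CVSS v3.x maps base scores to qualitative severity bands; here is a minimal sketch of the standard published thresholds (nothing below is specific to the Cisco advisory itself):

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.7))  # -> High (serious, but short of Critical)
```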

Read More »

BP Retracts View Oil Demand Could Peak 2025

BP Plc said that oil demand is going to keep growing for the rest of this decade, rowing back on its prior projection that the high point could come as soon as this year. Rising consumption in emerging markets, sluggish energy efficiency gains, geopolitical tensions and the persisting use of petrochemicals all point to peak demand in 2030 at the earliest, the oil giant said in its annual Energy Outlook. Consumption is now projected to reach 103.4 million barrels a day in five years, up from 102.2 million this year. President Donald Trump’s return to the White House has accelerated a global shift away from ambitious energy transition goals, with oil majors again focusing on their core fossil fuel businesses. BP in particular has been under pressure from activist investor Elliott Investment Management to prioritize oil and gas, and its new analysis lends weight to a pivot in that direction. BP said that because energy efficiency gains have been “lackluster,” demand has increased in ways that will be met by fossil fuels. If prolonged, the situation could add 6 million barrels of oil a day to demand growth through 2035, BP Chief Economist Spencer Dale and his team said in the report. The company’s core expectation is that demand will start to return toward current levels around 2035. BP isn’t alone in walking back its view on the prospects for oil demand. The International Energy Agency is preparing a report this year that will show oil and gas demand will continue to rise beyond this decade, contrary to its previous assumption, Bloomberg Opinion’s Javier Blas said earlier this month. Fossil fuel use will rise out to 2050, he said, citing a draft report from the IEA. BP anticipates a long oil demand tail following 2035 under the world’s current path,

Read More »

ExxonMobil Fires Up New Lubricant and Fuel Units at Singapore Complex

Exxon Mobil Corp has put into operation what it said is “a first-of-its-kind technology in Singapore to increase production of higher-value products, including a range of lubricant base stocks and fuel”. The technology has been deployed at new facilities on Jurong Island that are integrated with ExxonMobil’s existing refining and petrochemical complex, which supplies Asia-Pacific, the energy giant said. “The unique combination of technologies converts fuel oil and other bottom-of-the-barrel crude products into higher-value lube base stocks and distillates, improving the competitiveness and profitability of the manufacturing site and helping to meet customer demand”, ExxonMobil said in a statement on its Singapore website. “The new facilities expand our Group II base stocks production capacity by 20,000 barrels per day, including up to 6,000 barrels per day of the new-to-industry EHC 340 MAX™. The base stocks are for commercial vehicles and industrial sectors, and used in engine oils, gear oils, marine oils and greases. EHC 340 MAX™ improves lubricant performance in these applications”. ExxonMobil has a refining capacity of 592,000 barrels per day (bpd) in the Southeast Asian city-state. The complex also produces up to 1.9 million metric tons per annum (MMtpa) of ethylene, 1.9 MMtpa of polyethylene and one MMtpa of polypropylene, according to company figures. Singapore contributed $15.72 billion in revenue to ExxonMobil last year, according to the company’s annual report. Besides refining and petrochemical production, ExxonMobil is pursuing lower-carbon solutions in Singapore. Last year it launched a SGD 60-million ($46.42 million) partnership with the Agency for Science, Technology and Research (A*STAR) for research on technologies to be used to produce lower-emission products. The ExxonMobil-NTU-A*STAR Corporate Lab will help advance global research efforts in the conversion of biomass into lower-emission fuels for use in aviation, maritime transport and the chemical sector. The lab will also study “carbon capture and utilization

Read More »

Netmore Acquires Arson Metering

Netmore Group AB has acquired Arson Metering, a Spanish innovator in remote meter reading and the smart management of water and gas supply networks, strengthening its position in Europe. Arson Metering specializes in remote reading technologies, operating in over 200 municipalities across Spain, Italy, France and Greece, Netmore noted. It manages over 500,000 water and gas meters, with a backlog of about 350,000, Netmore said. “Acquiring Arson Metering is another transformative step for Netmore as we expand our ability to provide end-to-end solutions for utility automation and modernization across the globe”, Ove Anebygd, CEO of Netmore, said. “Together, we’ll help municipalities and utilities tackle pressing challenges like water scarcity, leakage, and resource constraints, delivering measurable value to customers and communities”, he said. “For our customers, this means enhanced connectivity to manage water and gas networks, while opening the door to global markets for Arson Metering’s products and services, now backed by Netmore’s extensive infrastructure and expertise”, Amador Martínez, CEO at Arson Metering, said. “As a Netmore company, Arson Metering will continue to operate from our headquarters in Bilbao, maintaining a commitment to service that has always characterized it”. Arson Metering’s portfolio includes several platforms for utility management. The Metering Control Centre is a dedicated hub for monitoring and analyzing meters to proactively detect anomalies and diagnose network issues for both customers and installation partners, Netmore noted. Its universal remote meter reading platform, AquaCity Platform, integrates major smart meter brands for real-time monitoring and data analysis in urban water management. Complementing this is the GasCity Platform, a smart gas management solution providing automated valve control, anomaly detection, and energy efficiency to ensure safety, optimize billing, and deliver data for gas distributors, Netmore said.

Read More »

Petrofac Secures Two Contract Extensions for North Sea Assets

Petrofac said it has secured a two-year, $50 million contract renewal from Ithaca Energy, extending its work for the oil and gas company in the North Sea. Under the integrated services contract, Petrofac will continue to provide operations, maintenance, engineering, construction, and onshore and offshore technical expertise, the company said in a news release. The scope extends across Ithaca Energy’s North Sea operated asset base, which includes Alba, Captain, Erskine and FPF-1, according to the release. John Pearson, chief operating officer of Petrofac’s asset solutions and energy transition projects, said: “The continuation of our longstanding relationship with Ithaca Energy is testament to the safe and reliable delivery of operations services by our team, who have been embedded on these assets for well over a decade. The North Sea is one of Asset Solutions’ core markets and this award underlines the commitment from both Petrofac and Ithaca Energy to the region. We remain focused on supporting Ithaca Energy to maximise safe, efficient and responsible production from its assets”. Also in the North Sea, Petrofac secured a contract extension from Shell UK-operated venture ONEgas West. Under the contract, Petrofac will provide services across ONEgas West’s Southern North Sea portfolio, supporting the Clipper South complex, Leman Alpha assets, Bacton Terminal, and ONEgas Barge campaigns. Financial terms of the contract extension were not disclosed. Pearson said, “Having supported these assets since 2020, Petrofac is embedded within the delivery team and is uniquely placed to support production enhancement and field life extension. The North Sea remains one of Asset Solutions’ core markets and this award demonstrates confidence held in our team and the value they drive. We look forward to continuing this relationship, delivering safe and reliable operations”.

Agreement Reached on Restructuring

Meanwhile, Petrofac said it has reached an agreement in principle with Samsung E&A

Read More »

Oil Swings as NATO and Russian Tensions Escalate

Oil fluctuated in choppy trading as tensions between Russia and NATO intensified, with European leaders warning the Kremlin that the Western military alliance is ready to respond with force to violations of its airspace. West Texas Intermediate swung between gains and losses to settle near $65, after earlier falling as much as 1.4%, as European diplomats concluded that a recent Russian incursion into Estonia was a deliberate tactic ordered by Russian commanders. Those comments echo earlier remarks by US President Donald Trump, who on Wednesday said that NATO countries should shoot down Russian aircraft that cross into their airspace. Russian flows have been in the spotlight over the past few weeks amid global efforts to pressure Moscow to make peace in Ukraine by targeting its energy assets. Crude received a bump earlier Thursday after Trump told Turkey’s Recep Tayyip Erdogan to “stop buying any oil from Russia,” just a day after urging Europe to stop purchasing energy from the OPEC+ member, leading oil investors to cover bearish positions. Elsewhere, US Defense Secretary Pete Hegseth summoned top military commanders to an unusual meeting early next week, fueling concerns over wider unrest that could imperil global crude flows. Still, the confluence of inputs hasn’t pushed crude out of the narrow range it has held since early August, as traders balance a bearish outlook against escalating global tensions. Market watchers led by the International Energy Agency are forecasting excess supply later in the year due to increased output from the Organization of the Petroleum Exporting Countries and its partners, as well as from outside the group. The commodity was down earlier as traders assessed future flows from Iraq’s Kurdistan to global supply chains. A landmark agreement to resume exports could return 500,000 barrels a day to the market, Foreign Minister Fuad Hussein

Read More »

Trump, Orban Talk Energy

Donald Trump and Viktor Orban held a phone conversation a day after the US president said he would press the Hungarian premier to stop purchasing Russian oil. The two leaders discussed energy security, in addition to Russia’s war on Ukraine, the global economy and tariffs, Hungarian Foreign Minister Peter Szijjarto said on the sidelines of the United Nations General Assembly in New York late Wednesday. Their call came as pressure mounted on Hungary to at least reduce purchases as Western allies seek to dent Russia’s oil revenues, a major source of financing for its continuing invasion of Hungary’s eastern neighbor. On Tuesday, at a briefing with Ukrainian President Volodymyr Zelenskiy at the UN, Trump floated calling Orban to ask him to cut Hungary’s Russian oil procurements. Szijjarto publicly gave no indication that Hungary was ready to do that. Hungary, a European Union and NATO member, can’t scrap its Russian oil purchases due to “geographic and physical” reasons, and Russia has been a “reliable partner,” Szijjarto told reporters after meeting his Russian counterpart Sergei Lavrov, with whom he’s maintained close contact even after Russia’s 2022 invasion. Slovakia, another landlocked EU nation neighboring Hungary, holds a similar position. Slovak Prime Minister Robert Fico said on Thursday he would send a government emissary to the US to explain why it’s also not ready to phase out Russian energy. Facing pressure from Trump, the European Commission, the EU’s executive arm, is reviewing trade measures targeting imports of Russian oil via the Druzhba pipeline that feeds Hungary and Slovakia, Bloomberg reported on Sept. 20. EU foreign policy chief Kaja Kallas also told Bloomberg on Wednesday that the bloc should wean itself off Russian energy more quickly. Some member states which continued to buy from Moscow were “good friends of Trump,” she said, asking the US president to talk

Read More »

‘Nomads at the Summit’ Podcasts – Recorded Live at DCF Trends Summit 2025

Welcome to Nomads at the Summit, a new podcast series from Data Center Frontier in partnership with the Nomad Futurist Foundation. Recorded live at the 2025 Data Center Frontier Trends Summit (Aug. 26-28), here we sit down with industry leaders, innovators, and change-makers shaping the future of digital infrastructure. Join hosts Nabeel Mahmood and Phillip Koblence of Nomad Futurist, alongside DCF editorial leadership including Editor at Large Melissa Farney and Senior Editor David Chernicoff, for these candid conversations that highlight the ideas, talent, and technologies driving the next chapter of the data center industry. Whether you attended the DCF Trends Summit in person or are just now tuning in from afar, Nomads at the Summit gives you a behind-the-scenes look at the people and innovations defining what’s next in digital infrastructure.

EPISODE LIST

Waste Heat to Water – The Path Towards Water Positive Data Centers

In this DCF Trends-Nomads at the Summit Podcast episode, Matt Grandbois, Vice President at AirJoule, introduces a game-changing approach to one of the data center industry’s most pressing challenges: water sustainability. As power-hungry, high-density environments collide with growing water scarcity

Read More »

Equinix unveils distributed AI infrastructure targeting inferencing, cloud connectivity

Data center provider Equinix has launched its Distributed AI infrastructure, which includes a new AI-ready backbone to support high-performance distributed AI deployments spanning multiple data center facilities, a global AI Proving Ground to test new solutions, and Fabric Intelligence to better support next-generation enterprise workloads. Equinix designed Distributed AI from the ground up to support the scale, speed, and complexity of modern intelligent systems, such as autonomous, agentic AI capable of reasoning, acting, and learning independently. AI is inherently distributed, drawing on multiple data sources in different locations. To effectively train a model, data must be drawn from multiple locations and processed where it lies, not moved around. This requires a new kind of infrastructure that is globally distributed, deeply interconnected, and fully programmable. Distributed AI links more than 270 data centers in over 60 markets, effectively covering almost all of Equinix’s facilities, according to the vendor.

Read More »

Cisco expands its quantum networking portfolio with new software prototypes

The software stack supports three other prototype applications to help enable quantum networking and the data center. The first is what Pandey describes as a network-aware distributed quantum compiler that lets quantum algorithms run across multiple networked processors. “The compiler is the piece of technology you need to enable practical, pragmatic, distributed quantum computing. It takes a quantum workload, a quantum circuit, and it partitions it so that it runs in a distributed environment, in a connected set of qubits or quantum compute nodes,” Pandey said. Significantly, it’s multivendor; the quantum compute nodes can be from the same vendor or from other vendors, such as IBM: “It could be as messy a brownfield, heterogeneous environment as you want. It doesn’t matter to the compiler, which will take an algorithm, partition it across any heterogeneous, brownfield environment,” Pandey said.  “What makes it unique, and an industry-first, is that it accounts for quantum interconnect requirements between processors and supports distributed quantum error correction. Existing compilers target circuits for only single computers,” Pandey stated. “Ours compiles circuits for network-connected computers potentially made of heterogeneous quantum compute technologies and can distribute that partitioned circuit across an entire data center of processors, all connected through a quantum network.” The distributed quantum error correction is a key feature of the software. Error correction ensures the accuracy and reliability of quantum computations and is a challenge for any distributed or standalone network.  The Cisco software in this case understands the error correction intricacies of each of the quantum computing modalities in the network, and “we can ensure that those are carried over from node to node, giving us a distributed or a holistic view of the entire distributed environment and result,” Pandey said.  In addition, “we are developing our own algorithms [to determine] the best way, using our network, to do a
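Cisco has not published the compiler’s internals, but the core objective described here, partitioning a circuit so that as few gates as possible span the quantum interconnect, can be illustrated with a toy sketch. Everything below (the example circuit, the two-node topology, the greedy heuristic, the capacity limit) is an illustrative assumption, not Cisco’s algorithm:

```python
# Toy sketch only -- Cisco's compiler internals are not public.
# A circuit is modeled as two-qubit gates; the goal is to place qubits
# on two compute nodes so that few gates cross the quantum interconnect.
circuit = [(0, 2), (2, 4), (0, 4), (1, 3), (3, 5), (1, 5), (4, 5)]
n_qubits, capacity = 6, 4  # hypothetical nodes, mild imbalance allowed

def cross_gates(assign):
    """Count gates whose two qubits sit on different nodes."""
    return sum(assign[a] != assign[b] for a, b in circuit)

# Naive starting split (qubits 0-2 vs 3-5), then greedy single-qubit
# moves that reduce interconnect traffic without overfilling a node.
assign = {q: 0 if q < n_qubits // 2 else 1 for q in range(n_qubits)}
best, improved = cross_gates(assign), True
while improved:
    improved = False
    for q in range(n_qubits):
        assign[q] ^= 1  # tentatively move qubit q to the other node
        fits = list(assign.values()).count(assign[q]) <= capacity
        if fits and cross_gates(assign) < best:
            best, improved = cross_gates(assign), True
        else:
            assign[q] ^= 1  # revert the move
print(assign, "interconnect gates:", best)  # 4 cross gates shrink to 1
```

A real distributed compiler must also schedule entanglement distribution and, as Pandey describes, carry error-correction requirements across nodes; the sketch captures only the placement objective.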

Read More »

NVIDIA and OpenAI Forge $100B Alliance to Power the Next AI Revolution

The new strategic partnership between OpenAI and NVIDIA, formalized via a letter of intent in September 2025, is designed to both power and finance the next generation of OpenAI’s compute infrastructure, with initial deployments expected in the second half of 2026. According to the joint press release, both parties position this as “the biggest AI infrastructure deployment in history,” explicitly aimed at training and running OpenAI’s next-generation models. At a high level:

- The target scale is 10 gigawatts (GW) or more of deployed compute capacity, realized via NVIDIA systems (comprising millions of GPUs).
- The first phase (1 GW) is slated for the second half of 2026, built on the forthcoming Vera Rubin platform.
- NVIDIA will progressively invest up to $100 billion into OpenAI, contingent on deployment of capacity in stages.
- An initial $10 billion investment from NVIDIA is tied to the execution of a definitive purchase agreement for the first gigawatt of systems.
- The equity stake NVIDIA will acquire is described as non-voting/non-controlling, meaning it gives financial skin in the game without governance control.

From a strategic standpoint, tying investment to capacity deployment helps OpenAI lock in capital and hardware over a long horizon, mitigating supply-chain and financing risk. With compute frequently cited as a binding constraint on advancing models, this kind of staged, anchored commitment gives OpenAI a more predictable growth path (at least in theory; the precise economic terms and risk-sharing remain to be fully disclosed). Press statements emphasize that millions of GPUs will ultimately be involved, and that co-optimization of NVIDIA’s hardware with OpenAI’s software/stack will be a key feature of the collaboration. Importantly, this deal also fits into OpenAI’s broader strategy of diversifying infrastructure partnerships beyond any single cloud provider. Microsoft remains a central backer and collaborator, but this NVIDIA tie-up further
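Taking the headline numbers at face value, the implied pacing is easy to check; a minimal sketch assuming the $100 billion is spread evenly across the 10 GW (the actual tranche schedule has not been disclosed):

```python
# Rough arithmetic implied by the announced terms (illustrative only).
total_investment_bn = 100  # NVIDIA's progressive commitment, $bn
total_capacity_gw = 10     # announced target scale
first_tranche_bn = 10      # tied to the definitive deal for the first GW

per_gw_bn = total_investment_bn / total_capacity_gw
print(f"${per_gw_bn:.0f}bn per GW")  # -> $10bn per GW, consistent with
# the initial $10bn investment attached to the first 1 GW phase.
```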

Read More »

Balancing AI’s opportunities and challenges to serve enterprises

AI has taken the technology industry by storm, with enterprises deploying emerging applications to create business value. Amid this shift, operators are leveraging network automation, optical innovation and more to support enterprise AI use cases. Still, the technology ecosystem must balance AI’s opportunities with its challenges. While AI can improve operations, it can also leave companies more vulnerable to cyberattacks. As organizations deploy more AI tools and employees increasingly use them, the overall attack surface expands and opens more security gaps. This article explores how internet carriers are building their networks to support enterprises, while also discussing how operators are establishing trust with customers.

Table stakes: reliability, diversity and reach

AI’s requirements are similar to content distribution, cloud networking and previous industry shifts, but place even greater pressure on carrier-delivered enterprise network services. In these services, network diversity is integral, allowing carriers to eliminate single points of failure in the event of an outage, then quickly reroute traffic through the next best available path. This improved reliability is vital for enabling real-time enterprise AI operations amid increased instances of network disruption due to geopolitical sabotage or accidental damage. As more hyperscalers build sprawling AI data center campuses, network reach will also prove even more crucial. By continuously expanding their network footprints, carriers can help enterprises access these sites no matter where they’re located, with operators’ high-capacity connectivity infrastructure facilitating the transfer of massive data volumes between these campuses. Similar to how content distribution networks rely on a robust network underlay, backbone connectivity provides the high-capacity, long-haul transport underpinning the delivery of AI inferencing responses. While the backbone itself does not cache or deliver these responses, its densely interconnected networks ensure that this AI traffic reaches regional and access networks, which then distribute responses to end users.

Lightspeed: optical innovation

With
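As a minimal sketch of the reroute-on-failure behavior described above: model a backbone as a small graph with diverse paths, fail a link, and recompute the next best path. The topology and node names are invented for illustration, not any particular carrier’s network:

```python
from collections import deque

# Hypothetical backbone with diverse paths between London and New York.
links = {("LDN", "AMS"), ("AMS", "FRA"), ("LDN", "PAR"),
         ("PAR", "FRA"), ("FRA", "NYC"), ("LDN", "NYC")}

def neighbors(node, failed):
    for a, b in links - failed:
        if a == node:
            yield b
        elif b == node:
            yield a

def shortest_path(src, dst, failed=frozenset()):
    """Breadth-first search: fewest-hop path avoiding failed links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], failed):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving path: a single point of failure

print(shortest_path("LDN", "NYC"))                    # direct link
print(shortest_path("LDN", "NYC", {("LDN", "NYC")}))  # rerouted via FRA
```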

Read More »

Microsoft’s new cooling tech targets AI’s thermal bottleneck as hyperscalers hit power ceilings

Rising thermal pressure on AI hardware

AI workloads and high-performance computing have placed unprecedented strain on data center infrastructure. Thermal dissipation has emerged as one of the toughest bottlenecks, with traditional methods such as airflow and cold plates increasingly unable to keep pace with new generations of silicon. “Modern accelerators are throwing out thermal loads that air systems simply cannot contain, and even advanced water loops are straining. The immediate issues are not only the soaring TDP of GPUs, but also grid delays, water scarcity, and the inability of legacy air-cooled halls to absorb racks running at 80 or 100 kilowatts,” said Sanchit Vir Gogia, CEO and chief analyst at Greyhound Research. “Cold plates and immersion tanks have extended the runway, but only marginally. They still suffer from the resistance of thermal interfaces that smother heat at the die. The friction lies in the last metre of the thermal path, between junction and package, and that is where performance is being squandered.”

Cooling costs: the next data center budget crisis

Cooling isn’t just a technical challenge but also an economic one. Data centers spend heavily to manage the immense heat generated by servers, networking gear, and GPUs, making cooling a significant expense. “As per 2025 AI infra buildouts TCO analysis, over 45%-47% of data center power budget typically goes into cooling, which could further expand to 65%-70% without advancement in cooling method efficiency,” said Danish Faruqui, CEO at Fab Economics. “In 2024, Nvidia Hopper H100 had 700 watts of power requirements per GPU, which scaled in 2025 to double with Blackwell B200 and Blackwell Ultra B300 to 1000 W and 1400 watts per GPU. Going forward in 2026, it will again more than double by Rubin and Rubin Ultra GPU to 1800W
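A back-of-envelope pass over the figures quoted above; the 72-GPU rack size is an assumption chosen only to show how per-GPU TDP growth lands in the 80-100 kW rack range mentioned earlier, and 46% is simply the midpoint of the quoted 45%-47% cooling share:

```python
# Illustrative arithmetic only, using the TDP figures quoted above.
gpu_tdp_w = {
    "Hopper H100 (2024)": 700,
    "Blackwell B200 (2025)": 1000,
    "Blackwell Ultra B300 (2025)": 1400,
    "Rubin (2026, projected)": 1800,
}
gpus_per_rack = 72  # hypothetical rack size, for scale only

for gpu, tdp in gpu_tdp_w.items():
    rack_kw = tdp * gpus_per_rack / 1000
    print(f"{gpu}: {tdp} W/GPU -> ~{rack_kw:.0f} kW per rack")

# If cooling takes 46% of total facility power, each kW of IT load
# implies roughly 0.46 / (1 - 0.46) ~= 0.85 kW of cooling overhead.
print(f"cooling overhead per IT kW: {0.46 / (1 - 0.46):.2f} kW")
```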

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
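The growth implied by those Bloomberg Intelligence numbers is straightforward to check (a minimal sketch using only the figures quoted above):

```python
# Combined hyperscaler capex estimated by Bloomberg Intelligence.
capex_2023_bn, capex_2025_bn = 110, 200
growth_pct = (capex_2025_bn - capex_2023_bn) / capex_2023_bn * 100
print(f"{growth_pct:.0f}% growth from 2023 to 2025")  # -> 82% growth

# Microsoft: BI's calendar-2025 AI capex estimate vs. the fiscal-year figure.
print(f"ratio: {80 / 62.4:.2f}x")  # $80bn fiscal is ~1.28x the $62.4bn estimate
```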

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
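To make the LLM-as-judge idea concrete, here is a minimal, self-contained sketch: several cheaper “judge” models each score a candidate answer and a majority vote decides. The judges are random stubs standing in for real model calls; no particular provider’s API is implied:

```python
import random
from collections import Counter

random.seed(7)

def make_stub_judge(pass_rate: float):
    """Return a fake judge: approves an answer with probability pass_rate.
    In practice this would be a call to a cheap LLM grading the output."""
    return lambda answer: random.random() < pass_rate

judges = [make_stub_judge(0.8), make_stub_judge(0.7), make_stub_judge(0.9)]

def majority_verdict(answer: str) -> bool:
    """Accept the answer only if most judges approve it."""
    votes = Counter(judge(answer) for judge in judges)
    return votes[True] > votes[False]

print(majority_verdict("draft agent response"))  # True or False by vote
```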

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the US National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »