Saipem’s Castorone Passes Bosphorus Strait on Way to Sakarya Phase 2 Job

Pipelay giant Castorone is headed toward the Black Sea, where it will work on the second phase of the Sakarya project for the Turkish Petroleum – Offshore Technology Center (TP-OTC). The vessel, owned by Saipem SpA, crossed the Bosphorus Strait in Turkey, the company said in a media release.

In the early hours of December 26, the vessel crossed the Dardanelles Strait, covering a distance of 36 nautical miles in approximately six hours. In the afternoon, it continued its journey through the Sea of Marmara. Finally, it reached and crossed the Bosphorus Strait on the morning of December 28, covering a distance of 18 nautical miles in about three hours, Saipem said.
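
For context, the reported distances and durations work out to an average transit speed of about six knots through each strait, half the vessel's 12-knot maximum noted below. A minimal Python sketch of that arithmetic (the helper name is ours; the figures come from Saipem's release):

```python
def average_speed_knots(distance_nm: float, hours: float) -> float:
    """Average speed in knots, i.e. nautical miles per hour."""
    return distance_nm / hours

# Distances and durations as reported in Saipem's release
print(average_speed_knots(36, 6))  # Dardanelles Strait: 6.0 knots
print(average_speed_knots(18, 3))  # Bosphorus Strait: 6.0 knots
```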

Saipem, as part of a consortium, has secured a contract from TP-OTC for the second phase of the Sakarya front-end engineering design (FEED) and engineering, procurement, construction, and installation (EPCI) project, the company said. This phase covers a roughly 98-mile (158-kilometer), 16-inch pipeline, laid at depths reaching 7,218 feet (2,200 meters) in the Turkish Black Sea, along with an approximately 13-mile (21-kilometer), 16-inch intrafield pipeline at the same depth. Saipem previously completed the first phase of the Sakarya pipeline project in the latter half of 2022, laying deepwater pipelines under a 2021 contract with TP-OTC, it said.
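
The paired imperial and metric figures above follow from standard conversion factors; a quick Python check (the constant names are illustrative, and the pipeline figures come from Saipem's release):

```python
KM_PER_MILE = 1.609344
METERS_PER_FOOT = 0.3048

# Pipeline figures quoted in Saipem's release
print(f"{98 * KM_PER_MILE:.1f} km")       # ~157.7 km, quoted as 158 km
print(f"{13 * KM_PER_MILE:.1f} km")       # ~20.9 km, quoted as 21 km
print(f"{7218 * METERS_PER_FOOT:.0f} m")  # ~2200 m, quoted as 2,200 m
```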

Sakarya is the largest natural gas field discovered in Turkey, located approximately 93 miles (150 kilometers) off the coast of Ereğli, Saipem noted in its release.

“Through its involvement in this strategic project, Saipem contributes to strengthening Turkey’s energy independence,” Saipem added.

For phase 2 of the Sakarya project, Castorone will be tasked with offshore pipeline installation operations, the company said in the release.

Built in 2012, the vessel is approximately 1,083 feet (330 meters) long and 131 feet (40 meters) wide. It is one of the largest and most technologically advanced pipelay vessels in the world, according to Saipem, which highlighted that it is capable of reaching a maximum speed of 12 knots and accommodates over 700 personnel on board.

Saipem describes its Castorone pipelay vessel as highly versatile, capable of “S-lay” pipeline installation in both shallow and deep waters down to 9,843 feet (3,000 meters). Equipped with a Class 3 dynamic positioning system and eight thrusters, the vessel maintains precise control even in challenging weather, Saipem said.

Advanced welding equipment and two remotely operated vehicles (ROVs), developed by Saipem’s Sonsub Center of Excellence, support construction, maintenance, and ultra-deepwater monitoring activities, according to the company. The Castorone has laid approximately 2,175 miles (3,500 kilometers) of pipeline to date, including a record-setting depth of 7,218 feet (2,200 meters) during the first phase of the Sakarya project, Saipem said.

To contact the author, email [email protected]


