
Bowman Bags Geodetic Survey Job from Energy Infrastructure Major


Bowman Consulting Group Ltd. has secured a multi-state geodetic survey and monitoring contract for an American energy infrastructure major. The company said in a media release the contract includes quarterly surveys to assess infrastructure stability, evaluate geohazard risks, and ensure compliance with regulatory requirements.

The multi-disciplinary consulting firm said this work involves monitoring and maintaining over 2,000 existing geodetic survey points and installing approximately 34 new points along pipeline rights-of-way, providing comprehensive support for the long-term integrity of these essential systems.

The contract also covers conducting precise geodetic measurements, installing durable survey markers, and performing repeat monitoring using advanced measurement techniques and technologies.

These techniques allow Bowman to identify even minor ground shifts, enabling early detection of potential threats to pipeline integrity. The resulting data is then compiled and incorporated into the client’s geodatabase, improving decision-making and informing proactive maintenance and risk mitigation efforts, the company said.
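Repeat monitoring of this kind comes down to comparing coordinates of the same survey point across measurement epochs. A minimal sketch of that idea, using hypothetical point coordinates and an illustrative 20 mm alert threshold (not Bowman's actual workflow or any regulatory figure):

```python
import math

def displacement_mm(p0, p1):
    """3D displacement between two survey epochs of one point, in millimeters.

    p0 and p1 are (easting, northing, elevation) tuples in meters."""
    return 1000 * math.dist(p0, p1)

# Hypothetical quarterly readings for one monitoring point (meters).
epoch_q1 = (512034.210, 4178220.515, 231.442)
epoch_q2 = (512034.219, 4178220.507, 231.431)

THRESHOLD_MM = 20.0  # illustrative alert level only

shift = displacement_mm(epoch_q1, epoch_q2)
print(f"shift = {shift:.1f} mm, alert = {shift > THRESHOLD_MM}")
```

In practice the measurement techniques, error modeling, and thresholds would be far more involved; the point is simply that repeat surveys of stable markers let even millimeter-scale shifts surface as flagged records in a geodatabase.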

“Our extensive national geospatial services footprint allows us to quickly deploy skilled technical resources wherever and whenever our clients need them”, Gary Bowman, chairman and CEO of Bowman, said. “By strategically investing in geospatial capabilities and technology, we can proactively address the unique challenges of the oil and gas industry and help safeguard vital infrastructure with the highest level of precision and care”.

Bowman will oversee project coordination, collaborating with clients, local personnel, and land agents to ensure safety, obtain necessary permits, and conduct pre-installation surveys, including utility locate requests and electronic sweeps to identify buried infrastructure, it said.

Bowman earlier secured an on-call preliminary engineering contract from the Nebraska Department of Transportation. This two-year, up to $1.5 million agreement tasks Bowman with providing PE services for State Recreation Roads and Local Public Agencies federal-aid projects, encompassing roadway and bridge design, land surveys, stormwater management, right-of-way planning, environmental compliance, and public engagement, the company said.

These services will support various projects, including bridge replacements, improvements to state recreation roads, safety enhancements, alternative transportation initiatives, and emergency relief efforts, Bowman said.

To contact the author, email [email protected]







Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Extreme plots enterprise marketplace for AI agents, tools, apps

Extreme Networks this week previewed an AI marketplace where it plans to offer a curated catalog of AI tools, agents and applications. Called Extreme Exchange, it’s designed to give enterprise customers a way to discover, deploy, and create AI agents, microapps, and workflows in minutes rather than developing such components

Read More »

Top quantum breakthroughs of 2025

The Helios quantum computing platform is available to customers through Quantinuum’s cloud service and on-premises offering. HSBC is using IBM’s Heron quantum computer to improve its bond trading predictions by 34% compared to classical computing. Caltech physicists create 6,100-qubit array.

Read More »

Ukraine Drones Hit Russian Black Sea Oil Terminal

(Update) November 14, 2025, 9:45 AM GMT+1: Article updated with additional details. Ukrainian drones attacked Russia’s giant Black Sea port of Novorossiysk overnight, prompting a state of emergency, as Moscow launched a massive air strike on Kyiv that killed four and damaged several residential buildings. Falling drone debris caused a fire at the Russian depot located at Transneft PJSC’s Sheskharis oil terminal, the regional emergency service said on Telegram early Friday. The blaze was put out after more than 50 units of firefighting equipment were deployed at the site, authorities said, without providing details of the damage. Novorossiysk Mayor Andrey Kravchenko announced the state of emergency on Telegram. Transneft didn’t immediately respond to a request for comment on the situation at the facility. Global benchmark Brent spiked as much as 3 percent in a rapid move toward $65 a barrel, before paring gains. A container terminal located in the port of Novorossiysk was damaged by falling debris but continued to operate normally, Delo Group, which runs that facility, said in a statement on Telegram. Russia’s largest grain terminal, also operated by Delo Group, was impacted by drone debris but continues to function, the Interfax news service reported, citing the terminal’s chief executive officer. Drones also hit an unidentified civilian ship in the port of Novorossiysk, regional emergency services said, without specifying the type of vessel. The city’s mayor reported damage to at least three residential buildings in separate statements on Telegram. In Ukraine, four people were killed after Russia launched about 430 drones and 18 missiles – including ballistic ones – in the strike, President Volodymyr Zelenskiy said on the X platform Friday. Dozens of apartment buildings were damaged in the capital Kyiv, he said. At least 26 people were injured, including two children, and several residential buildings were damaged,

Read More »

Repsol Mulls Merger for $19B Upstream Unit

Repsol SA is considering a reverse merger of its upstream unit with potential partners including US energy producer APA Corp., people with knowledge of the matter said, as it seeks ways to list the business in New York. The Spanish oil and gas company has held exploratory discussions with APA, formerly known as Apache Corp., about the possibility of a deal, according to the people. It has also held initial talks with other potential merger partners for the business, they said. Any deal could help Repsol bulk up the portfolio of its upstream business and provide it with a faster route to becoming publicly traded. APA shares surged as much as 7.3 percent in New York. The stock has gained about 16 percent over the past 12 months, giving the company a market value of roughly $9 billion. Repsol shares gained as much as 2.2 percent. Repsol agreed in 2022 to sell a 25 percent stake in the upstream division to private equity firm EIG Global Energy Partners LLC in a deal valuing the business at $19 billion including debt. The transaction was aimed at helping the unit further expand in the US, while also raising funds for Repsol to invest in low-carbon activities. Executives have said they’re preparing the upstream unit for a potential “liquidity event,” such as a public listing, in 2026. Repsol Chief Executive Officer Josu Jon Imaz told analysts last month that the company is considering options including an IPO of the business, a reverse merger with a US-listed group or the introduction of a new private investor. Deliberations are ongoing and there’s no certainty they will lead to a transaction, the people said, asking not to be identified because the information is private. Repsol continues to study a variety of options for the business and it may still opt for an

Read More »

Trump Lifts More Arctic Drilling Curbs

The Trump administration rescinded restrictions on oil drilling in Alaska’s mammoth state petroleum reserve, reversing a move by former President Joe Biden that put an estimated 8.7 billion barrels of recoverable oil off limits. The policy reversal finalized Thursday applies to the 23 million-acre National Petroleum Reserve-Alaska. In 2024, Biden designated 13 million acres of the reserve as “special areas,” limiting future oil and gas leasing, while maintaining leasing prohibitions on 10.6 million acres of the NPR-A. The move complicated future oil drilling and production in the reserve, where ConocoPhillips is pushing to explore for more oil near its Willow project. Other active companies have included Santos Ltd., Repsol SA and Armstrong Oil & Gas Inc. The US Interior Department had already reopened the nearby Arctic National Wildlife Refuge to oil and gas leasing, following a directive Donald Trump issued after his inauguration. Increasing US production of fossil fuels has been at the center of Trump’s energy agenda, starting with an early executive order compelling a host of policy changes meant to expand Alaska’s oil, natural gas and mineral development. “This action restores common-sense management and ensures responsible development benefits both Alaska and the nation,” Interior Secretary Doug Burgum said in a statement, adding that the latest move would “strengthen American Energy Dominance and reduce reliance on foreign oil.” Alaska has forecast that crude production from the reserve will climb to 139,600 barrels per day in fiscal 2033, up from 15,800 barrels per day in fiscal 2023. The Interior Department announced last month it was opening the entire coastal plain of Alaska’s Arctic National Wildlife Refuge, some 1.56 million acres, to oil and gas leasing and planned to hold a lease sale this winter in the state petroleum reserve.

Read More »

TotalEnergies Wins 15-Year Google Contract to Supply Renewable Power

TotalEnergies SE has signed a deal to supply Google a total of 1.5 terawatt hours (TWh) of certified green electricity for 15 years to support the tech giant’s data center operations in Ohio. The power will come from the Montpelier solar project in Ohio, which is “nearing completion” and will be connected to the PJM grid system, a joint statement said. “The deal reflects Google’s strategy of enabling new, carbon-free energy to the grid systems where they operate”, the statement said. “It also aligns with TotalEnergies’ strategy to deliver tailored energy solutions for data centers, which accounted for almost three percent of the world’s energy demand in 2024”. “TotalEnergies is deploying a 10-GW portfolio in the United States, with onshore solar, wind and battery storage projects, one GW of which is located in the PJM market in the northeast of the country, and four GW on the ERCOT market in Texas”, the statement added. Stephane Michel, TotalEnergies’ president for gas, renewables and power, said, “This agreement illustrates TotalEnergies’ ability to meet the growing energy demands of major tech companies by leveraging its integrated portfolio of renewable and flexible assets. It also contributes to achieving our target of 12 percent profitability in the power sector”. This is the second data-center green power supply agreement announced by TotalEnergies this month. On November 4 it said it had bagged a 10-year contract to supply Data4 data centers in Spain with a total of 610 gigawatt hours (GWh) of renewable electricity starting in 2026. The power will come from Spanish wind and solar farms with a combined capacity of 30 MW. The plants “are about to start production”, a joint statement said. “As European leader in the data center industry, Data4 is now established in six countries, and announced its plan to invest nearly EUR 2 billion [$2.32 billion] by 2030 to

Read More »
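As a back-of-the-envelope check on the Data4 figures quoted above (610 GWh over 10 years from 30 MW of wind and solar), the implied average capacity factor works out to roughly 23 percent, plausible for a Spanish wind-plus-solar mix:

```python
# Figures from the TotalEnergies/Data4 contract described above.
energy_gwh = 610.0   # total contracted renewable electricity
years = 10.0         # contract duration
capacity_mw = 30.0   # combined wind + solar capacity

annual_mwh = energy_gwh * 1000 / years   # 61,000 MWh per year on average
max_mwh = capacity_mw * 8760             # 262,800 MWh per year at 100% output
capacity_factor = annual_mwh / max_mwh

print(f"implied capacity factor ~ {capacity_factor:.1%}")  # ~ 23.2%
```

This is only an averaged sanity check; actual annual deliveries under such contracts are typically shaped by the plants' real output profiles.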

Meren Bumps Up Production Guidance

Meren Energy Inc on Thursday raised its projected entitlement output for 2025 from 32,000-37,000 barrels of oil equivalent per day (boepd) to 34,500-37,500 boepd. The Vancouver, Canada-based company, which explores and develops oil and gas in Africa, also revised up its forecast for working-interest production from 28,000-33,000 boepd to 30,000-33,000 boepd. Meren, which currently derives its production offshore Nigeria, defines entitlement production as “calculated using the economic interest methodology and includes cost recovery oil, royalty oil and profit oil”. Working-interest production, according to Meren, is derived by multiplying project volumes by the company’s effective working interest in each license. In the third quarter, Meren, which this year rebranded from Africa Oil Corp, produced 35,600 boepd, down from 41,200 boepd in Q3 2024. Meren derives its production from Akpo and Egina, both operated by TotalEnergies SE, and Chevron Corp-operated Agbami. Production enhancement and exploration activities are progressing in the fields. “Following the break to the Akpo/Egina (PPL 2/3) drilling campaign in Q3 2025, efforts are underway to recommence the campaign”, Meren said. “As previously communicated, this break will allow for the interpretation of 4D seismic data to enhance the maturation of future infill well opportunities. Accordingly, the aim is to secure a deepwater drilling rig within the gap and start with the drilling of the Akpo Far East near-field prospect, followed by the drilling of further development wells on Akpo and Egina fields. “Akpo Far East is an infrastructure-led exploration opportunity that, in case of commercial exploration success, presents an attractive short cycle, high-return investment opportunity that would utilize the existing Akpo facilities. Akpo Far East prospect has an unrisked, best estimate, gross field prospective resource volume of 143.6 MMboe.
The targeted hydrocarbons are predicted to be light, high gas-oil-ratio oil equivalent to those found in the Akpo field. If successful,

Read More »
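Meren's two production measures above differ mainly in the multiplier applied to gross field volumes: working interest uses the plain equity stake, while entitlement layers in cost-recovery, royalty, and profit oil. A toy illustration with hypothetical field volumes and stakes (not Meren's actual interests or contract terms):

```python
# Hypothetical gross production (boepd) and working interests per license.
fields = {
    "Field A": {"gross_boepd": 120_000, "working_interest": 0.16},
    "Field B": {"gross_boepd": 90_000, "working_interest": 0.12},
}

# Working-interest production: gross volumes times the company's stake.
wi_production = sum(
    f["gross_boepd"] * f["working_interest"] for f in fields.values()
)

# Entitlement production adds cost-recovery, royalty, and profit oil effects
# under the production-sharing terms; collapsed here into one illustrative
# uplift factor rather than a real economic-interest model.
uplift = 1.1  # purely illustrative
entitlement = wi_production * uplift

print(f"working interest: {wi_production:,.0f} boepd")
print(f"entitlement (illustrative): {entitlement:,.0f} boepd")
```

In a real production-sharing contract the entitlement multiplier varies with oil price and recovered costs, which is why companies report the two numbers separately.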

Jade Secures Preliminary Funding Deal for Mongolian CBM-to-LNG Project

Zhengzhou Langrun Intelligent Equipment Co Ltd has signed a non-binding letter of intent to provide up to $46 million (AUD 70 million) in financing for a coal bed methane (CBM)-to-liquefied natural gas (LNG) project by Jade Gas Holdings Ltd in Mongolia. The agreement is for the Red Lake gas field, part of the Australian company’s flagship project with the Mongolian government’s Erdenes Methane LLC to develop the Tavantolgoi XXXIII unconventional oil basin (TTCBM Project). Red Lake has 246 billion cubic feet of 2C gross unrisked contingent resources, according to Jade. The Chinese CBM-focused gas equipment manufacturer would fund drilling and production for the next 18 wells in the field, Jade said in a stock filing. Jade has already drilled seven Red Lake wells, according to the company. The “non-dilutive financing” would also cover surface facilities for gathering, processing and liquefying gas produced from the field into LNG. The deal also includes “a low upfront capital outlay option, to be funded by future Jade revenue”, Jade said. The parties agreed to consider expanding the terms to accommodate all 175 gas production wells in Red Lake’s first-phase development. Phase 1 involves 20 production wells, including two that came online in June, according to Jade. “Langrun’s expertise in the gas industry in China and in particular in CBM offers a great fit for Jade as the company seeks options to fast-track development of the Red Lake gas field and to optimize gas production for faster access to customer markets and ultimately early revenue”, Jade said. “Subject to agreement of definitive documentation, and government and regulator cooperation and other approvals, the Red Lake gas field could potentially be developed to cover purification, pipeline and other transport, compression (for potential production of CNG), liquefaction (for production of LNG), refueling station construction, enabling gas sales for vehicle,

Read More »

Arista, Palo Alto bolster AI data center security

“Based on this inspection, the NGFW creates a comprehensive, application-aware security policy. It then instructs the Arista fabric to enforce that policy at wire speed for all subsequent, similar flows,” Kotamraju wrote. “This ‘inspect-once, enforce-many’ model delivers granular zero trust security without the performance bottlenecks of hairpinning all traffic through a firewall or forcing a costly, disruptive network redesign.” The second capability is a dynamic quarantine feature that enables the Palo Alto NGFWs to identify evasive threats using Cloud-Delivered Security Services (CDSS). “These services, such as Advanced WildFire for zero-day malware and Advanced Threat Prevention for unknown exploits, leverage global threat intelligence to detect and block attacks that traditional security misses,” Kotamraju wrote. The Arista fabric can intelligently offload trusted, high-bandwidth “elephant flows” from the firewall after inspection, freeing it to focus on high-risk traffic. When a threat is detected, the NGFW signals Arista CloudVision, which programs the network switches to automatically quarantine the compromised workload at hardware line-rate, according to Kotamraju: “This immediate response halts the lateral spread of a threat without creating a performance bottleneck or requiring manual intervention.” The third feature is unified policy orchestration, where Palo Alto Networks’ management plane centralizes zone-based and microperimeter policies, and CloudVision MSS responds with the offload and enforcement of Arista switches. “This treats the entire geo-distributed network as a single logical switch, allowing workloads to be migrated freely across cloud networks and security domains,” Srikanta and Barbieri wrote. Lastly, the Arista Validated Design (AVD) data models enable network-as-a-code, integrating with CI/CD pipelines. 
AVDs can also be generated by Arista’s AVA (Autonomous Virtual Assist) AI agents that incorporate best practices, testing, guardrails, and generated configurations. “Our integration directly resolves this conflict by creating a clean architectural separation that decouples the network fabric from security policy. This allows the NetOps team (managing the Arista

Read More »

AMD outlines ambitious plan for AI-driven data centers

“There are very beefy workloads that you must have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anyone. We’ve seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.” One of the key focus areas for AMD’s Epyc strategy has been its ecosystem build-out. It has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions in the market on top of those platforms. One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that’s telco, it’s the network, and the goal there is to accelerate those workloads and either driving more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara. And it’s paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that’s growing quarterly. “We track that very, very closely,” he said. The other question is whether AMD is winning new customer acquisitions – customers adopting Epyc for the first time. “We’ve doubled that year on year.” AMD didn’t just brag, it laid out a road map for the next two years, and 2026 is going to be a very busy year. That will be the year that new CPUs, both client and server, built on the Zen 6 architecture begin to appear. On the server side, that means the Venice generation of Epyc server processors. Zen 6 processors will be built on 2 nanometer design generated by (you guessed

Read More »

Building the Regional Edge: DartPoints CEO Scott Willis on High-Density AI Workloads in Non-Tier-One Markets

When DartPoints CEO Scott Willis took the stage on “the Distributed Edge” panel at the 2025 Data Center Frontier Trends Summit, his message resonated across a room full of developers, operators, and hyperscale strategists: the future of AI infrastructure will be built far beyond the nation’s tier-one metros. On the latest episode of the Data Center Frontier Show, Willis expands on that thesis, mapping out how DartPoints has positioned itself for a moment when digital infrastructure inevitably becomes more distributed, and why that moment has now arrived. DartPoints’ strategy centers on what Willis calls the “regional edge”—markets in the Midwest, Southeast, and South Central regions that sit outside traditional cloud hubs but are increasingly essential to the evolving AI economy. These are not tower-edge micro-nodes, nor hyperscale mega-campuses. Instead, they are regional data centers designed to serve enterprises with colocation, cloud, hybrid cloud, multi-tenant cloud, DRaaS, and backup workloads, while increasingly accommodating the AI-driven use cases shaping the next phase of digital infrastructure. As inference expands and latency-sensitive applications proliferate, Willis sees the industry’s momentum bending toward the very markets DartPoints has spent years cultivating. Interconnection as Foundation for Regional AI Growth A key part of the company’s differentiation is its interconnection strategy. Every DartPoints facility is built to operate as a deeply interconnected environment, drawing in all available carriers within a market and stitching sites together through a regional fiber fabric. Willis describes fiber as the “nervous system” of the modern data center, and for DartPoints that means creating an interconnection model robust enough to support a mix of enterprise cloud, multi-site disaster recovery, and emerging AI inference workloads. 
The company is already hosting latency-sensitive deployments in select facilities—particularly inference AI and specialized healthcare applications—and Willis expects such deployments to expand significantly as regional AI architectures become more widely

Read More »

Key takeaways from Cisco Partner Summit

Brian Ortbals, senior vice president at World Wide Technology, which is one of Cisco’s biggest and most important partners, stated: “Cisco engaged partners early in the process and took our feedback along the way. We believe now is the right time for these changes as it will enable us to capitalize on the changes in the market.” The reality is, the more successful its more-than-half-a-million partners are, the more successful Cisco will be.

Platform approach is coming together

When Jeetu Patel took the reins as chief product officer, one of his goals was to make the Cisco portfolio a “force multiplier.” Patel has stated repeatedly that, historically, Cisco acted more as a technology holding company with good products in networking, security, collaboration, data center and other areas. In this case, product breadth was not an advantage, as everything must be sold as “best of breed,” which is a tough ask of the salesforce and partner community. Since then, there have been many examples of the coming together of the portfolio to create products that leverage the breadth of the platform. The latest is the Unified Edge appliance, an all-in-one solution that brings together compute, networking, storage and security. Cisco has been aggressive with AI products in the data center, and Cisco Unified Edge complements that work with a device designed to bring AI to edge locations. This is ideally suited for retail, manufacturing, healthcare, factories and other industries where it’s more cost-effective and performant to run AI where the data lives.

Read More »

AI networking demand fueled Cisco’s upbeat Q1 financials

Customers are very focused on modernizing their network infrastructure in the enterprise in preparation for inferencing and AI workloads, Robbins said. “These things are always multi-year efforts,” and this is only the beginning, Robbins said.

The AI opportunity

“As we look at the AI opportunity, we see customer use cases growing across training, inferencing, and connectivity, with secure networking increasingly critical as workloads move from the data center to end users, devices, and agents at the edge,” Robbins said. “Agents are transforming network traffic from predictable bursts to persistent high-intensity loads, with agentic AI queries generating up to 25 times more network traffic than chatbots.” “Instead of pulling data to and from the data center, AI workloads require models and infrastructure to be closer to where data is created and decisions are made, particularly in industries such as retail, healthcare, and manufacturing.” Robbins pointed to last week’s introduction of Cisco Unified Edge, a converged platform that integrates networking, compute and storage to help enterprise customers more efficiently handle data from AI and other workloads at the edge. “Unified Edge enables real-time inferencing for agentic and physical AI workloads, so enterprises can confidently deploy and manage AI at scale,” Robbins said. On the hyperscaler front, “we see a lot of solid pipeline throughout the rest of the year. The use cases, we see it expanding,” Robbins said. “Obviously, we’ve been selling networking infrastructure under the training models. We’ve been selling scale-out. We launched the P200-based router that will begin to address some of the scale-across opportunities.” Cisco has also seen great success with its pluggable optics, Robbins said. “All of the hyperscalers now are officially customers of our pluggable optics, so we feel like that’s a great opportunity. They not only plug into our products, but they can be used with other companies’

Read More »

When the Cloud Leaves Earth: Google and NVIDIA Test Space Data Centers for the Orbital AI Era

On November 4, 2025, Google unveiled Project Suncatcher, a moonshot research initiative exploring the feasibility of AI data centers in space. The concept envisions constellations of solar-powered satellites in Low Earth Orbit (LEO), each equipped with Tensor Processing Units (TPUs) and interconnected via free-space optical laser links. Google’s stated objective is to launch prototype satellites by early 2027 to test the idea and evaluate scaling paths if the technology proves viable. Rather than a commitment to move production AI workloads off-planet, Suncatcher represents a time-bound research program designed to validate whether solar-powered, laser-linked LEO constellations can augment terrestrial AI factories, particularly for power-intensive, latency-tolerant tasks. The 2025–2027 window effectively serves as a go/no-go phase to assess key technical hurdles including thermal management, radiation resilience, launch economics, and optical-link reliability. If these milestones are met, Suncatcher could signal the emergence of a new cloud tier: one that scales AI with solar energy rather than substations.

Inside Google’s Suncatcher Vision

Google has released a detailed technical paper titled “Towards a Future Space-Based, Highly Scalable AI Infrastructure Design.” The accompanying Google Research blog describes Project Suncatcher as “a moonshot exploring a new frontier” – an early-stage effort to test whether AI compute clusters in orbit can become a viable complement to terrestrial data centers. The paper outlines several foundational design concepts:

Orbit and Power

Project Suncatcher targets Low Earth Orbit (LEO), where solar irradiance is significantly higher and can remain continuous in specific orbital paths. Google emphasizes that space-based solar generation will serve as the primary power source for the TPU-equipped satellites.

Compute and Interconnect

Each satellite would host Tensor Processing Unit (TPU) accelerators, forming a constellation connected through free-space optical inter-satellite links (ISLs). Together, these would function as a disaggregated orbital AI cluster, capable of executing large-scale batch and training workloads.

Downlink

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »