Inside Monday’s AI pivot: Building digital workforces through modular AI

The Monday.com work platform has grown steadily over the past decade in pursuit of its founding goal: empowering teams at organizations large and small to be more efficient and productive.

According to co-founder Roy Mann, AI has been a part of the company for much of its history. The initial use cases supported its own performance marketing. (Who among us has not seen a Monday advertisement somewhere over the last 10 years?) A large part of that effort has benefited from AI and machine learning (ML).

With the advent and popularity of generative AI in the last three years, particularly since the debut of ChatGPT, Monday — much like every other enterprise on the planet — began to consider and integrate the technology.

The initial deployment of gen AI at Monday didn’t generate the return on investment users wanted, however. That realization prompted a rethink and a pivot, as the company looked to give its users AI-powered tools that actually improve enterprise workflows. The pivot has now manifested in the company’s “AI blocks” technology and the preview of its agentic AI technology, which it calls “digital workforce.”

Monday’s AI journey is, for the most part, about realizing the company’s founding vision.

“We wanted to do two things, one is give people the power we had as developers,” Mann told VentureBeat in an exclusive interview. “So they can build whatever they want, and they feel the power that we feel, and the other end is to build something they really love.”

Any type of vendor, particularly an enterprise software vendor, is always trying to improve and help its users. Monday’s AI adoption fits squarely into that pattern.

The company’s public AI strategy has evolved through several distinct phases:

  1. AI assistant: Initial platform-wide integration;
  2. AI blocks: Modular AI capabilities for workflow customization;
  3. Digital workforce: Agentic AI.

As with many other vendors, Monday’s first public foray into gen AI was an assistant. The basic idea of any AI assistant is to provide a natural language interface for queries. Mann explained that the Monday AI assistant was initially part of the company’s formula builder, giving non-technical users the confidence and ability to build things they couldn’t before. While the service is useful, there is still much more that organizations need and want to do.

Or Fridman, AI product group lead at Monday, explained that the main lesson learned from deploying the AI assistant is that customers want AI to be integrated into their workflows. That’s what led the company to develop AI blocks.

Building the foundation for enterprise workflows with AI blocks

Monday realized the limitations of the AI assistant approach and what users really wanted. 

Simply put, AI functionality needs to be in the right context for users — directly in a column, component or service automation. 

AI blocks are pre-built AI functions that Monday has made accessible and integrated directly into its workflow and automation tools. For example, in project management, the AI can provide risk mapping and predictability analysis, helping users better manage their projects. This allows them to focus on higher-level tasks and decision-making, while the AI handles the more repetitive or data-intensive work.

This approach has particular significance for the platform’s user base, 70% of which consists of non-technical companies. The modular nature allows businesses to implement AI capabilities without requiring deep technical expertise or major workflow disruptions.
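As a sketch of the modular idea, pre-built AI functions can be registered behind a common interface and invoked by name from a workflow step. The block names and toy behaviors below are assumptions for illustration; Monday’s actual AI blocks and their APIs are not public:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIBlock:
    """A named, reusable AI function a workflow column or automation can call."""
    name: str
    run: Callable[[str], dict]

def sentiment_block(text: str) -> dict:
    """Toy stand-in for an LLM-backed sentiment block."""
    positive = {"great", "good", "love", "excellent"}
    score = sum(word in positive for word in text.lower().split())
    return {"block": "sentiment", "label": "positive" if score else "neutral"}

def summary_block(text: str) -> dict:
    """Toy stand-in: 'summarize' by keeping the first sentence."""
    return {"block": "summary", "summary": text.split(".")[0] + "."}

REGISTRY = {
    block.name: block
    for block in (AIBlock("sentiment", sentiment_block), AIBlock("summary", summary_block))
}

def run_block(name: str, text: str) -> dict:
    """A workflow step references a block by name, not by implementation."""
    return REGISTRY[name].run(text)
```

The point of the pattern is that adding a new capability means registering one more block, with no change to the workflow engine that calls it.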

Monday is taking a model-agnostic approach to integrating AI

An early approach taken by many vendors on their AI journeys was to use a single vendor’s large language model (LLM), then build a wrapper around it or fine-tune it for a specific use case.

Mann explained that Monday is taking a deliberately model-agnostic approach. In his view, models are increasingly becoming a commodity. The company builds products and solutions on top of available models, rather than creating its own proprietary models.

Looking a bit deeper, Assaf Elovic, Monday’s AI director, noted that the company uses a variety of AI models. That includes OpenAI models such as GPT-4o via Azure, and others through Amazon Bedrock, ensuring flexibility and strong performance. Elovic noted that the company’s usage follows the same data residency standards as all Monday features. That includes multi-region support and encryption, to ensure the privacy and security of customer data.
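A minimal sketch of that kind of model-agnostic routing has product code depend on a small interface rather than on any one provider’s SDK. The class names and canned responses below are assumptions for illustration; real adapters would wrap the Azure OpenAI and Amazon Bedrock clients:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface product code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIModel:
    """Illustrative adapter; a real one would call the Azure OpenAI SDK."""
    def complete(self, prompt: str) -> str:
        return f"[azure/gpt-4o] {prompt}"

class BedrockModel:
    """Illustrative adapter; a real one would call the Bedrock runtime."""
    def complete(self, prompt: str) -> str:
        return f"[bedrock] {prompt}"

MODELS: dict[str, ChatModel] = {
    "gpt-4o": AzureOpenAIModel(),
    "bedrock-default": BedrockModel(),
}

def complete(prompt: str, model: str = "gpt-4o") -> str:
    """Swapping providers is a registry change, not a product-code change."""
    return MODELS[model].complete(prompt)
```

If models really are becoming a commodity, this indirection is what lets a vendor chase price and performance across providers without rewrites.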

Agentic AI and the path to the digital workforce

The latest step in Monday’s AI journey is in the same direction as the rest of the industry — the adoption of agentic AI.

The promise of agentic AI is more autonomous operations that can enable an entire workflow. Some organizations build agentic AI on top of frameworks such as LangChain or Crew AI. But that’s not the specific direction that Monday is taking with its digital workforce platform.

Elovic explained that Monday’s agentic flow is deeply connected to its own AI blocks infrastructure. The same tools that power its agents are built on AI blocks like sentiment analysis, information extraction and summarization. 

Mann noted that digital workforce isn’t so much about using a specific agentic AI tool or framework, but about creating better automation and flow across the integrated components on the Monday platform. Digital workforce agents are tightly integrated into the platform and workflows. This allows the agents to have contextual awareness of the user’s data, processes and existing setups within Monday.
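One way to picture agents built on block primitives: the agent’s tools are the same extraction and summarization functions the workflow layer uses, and the agent merely routes a task to them. The tool names and the keyword-based routing here are illustrative assumptions, not Monday’s design; a production agent would let an LLM choose the tool:

```python
import re

def extract_emails(text: str) -> list[str]:
    """Information-extraction block: pull email addresses from free text."""
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

def summarize(text: str) -> str:
    """Summarization block (toy): keep only the first line."""
    return text.splitlines()[0]

# The agent and the workflow engine share one tool registry.
TOOLS = {"extract_emails": extract_emails, "summarize": summarize}

def run_agent(task: str, text: str):
    """Route the task to a tool; here a keyword stands in for an LLM planner."""
    tool = "extract_emails" if "email" in task.lower() else "summarize"
    return TOOLS[tool](text)
```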

The first digital workforce agent is set to become available in March. Mann said it will be called the Monday “expert” and is designed to build solutions for specific users. Users describe their problems and needs to the agent, and the AI provides them with relevant workflows, boards and automations to address those challenges.

AI specialization and integration provide differentiation in a commoditized market

There is no shortage of competition across the markets that Monday serves.

As a workflow platform, it crosses multiple industry verticals including customer relationship management (CRM) and project management. There are big players across these industries including Salesforce and Atlassian, which have both deeply invested in AI.

Mann said the deep integration of AI blocks across various Monday tools differentiates the company from its rivals. At a more basic level, he said, it’s really all about meeting users where they are and embedding useful AI capabilities in the context of workflow.

Monday’s evolution suggests a model for enterprise software development where AI capabilities are deeply integrated yet highly customizable. This approach addresses a crucial challenge in enterprise AI adoption: The need for solutions that are both powerful and accessible to non-technical users.

The company’s strategy also points to a future where AI implementation focuses on empowerment rather than replacement. 

“If a technology makes large companies more efficient, what does it do for SMBs?” said Mann, highlighting how AI democratization could level the playing field between large and small enterprises.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

F5 to acquire CalypsoAI for advanced AI security capabilities

CalypsoAI’s platform creates what the company calls an Inference Perimeter that protects across models, vendors, and environments. The offers several products including Inference Red Team, Inference Defend, and Inference Observe, which deliver adversarial testing, threat detection and prevention, and enterprise oversight, respectively, among other capabilities. CalypsoAI says its platform proactively

Read More »

HomeLM: A foundation model for ambient AI

Capabilities of a HomeLM What makes a foundation model like HomeLM powerful is its ability to learn generalizable representations of sensor streams, allowing them to be reused, recombined and adapted across diverse tasks. This fundamentally differs from traditional signal processing and machine learning pipelines in RF sensing, which are typically

Read More »

Cisco’s Splunk embeds agentic AI into security and observability products

AI-powered observability enhancements Cisco also announced it has updated Splunk Observability to use Cisco AgenticOps, which deploys AI agents to automate telemetry collection, detect issues, identify root causes, and apply fixes. The agentic AI updates help enterprise customers automate incident detection, root-cause analysis, and routine fixes. “We are making sure

Read More »

Australia Approves Extension for Woodside-Operated NWS Project

The Australian government has granted environmental approval to the Woodside-operated North West Shelf (NWS) project extension. Minister for the Environment and Water Murray Watt said in a statement that the approval is subject to “48 strict conditions” to avoid and mitigate significant impacts on the Murujuga rock art, which forms part of Western Australia’s Dampier Archipelago. “Specifically, I have imposed conditions that will require a reduction in certain gas emissions below their current levels, in some cases by 60 percent by 2030 with ongoing reductions beyond that,” Watt said. The conditions should account for any new science achieved through the Murujuga Rock Art Monitoring Program and require the joint venture for the asset to comply with any air quality objectives and standards that are derived from the program, according to the statement. The project will be required to reduce its emissions every year and reach net zero greenhouse gas emissions by 2050. Woodside and the NWS joint venture said they welcomed the Australian government’s final decision to grant environmental approval for the project. The final government approval “followed an extensive assessment and appeal process and included rigorous conditions to manage the protection of cultural heritage,” Woodside COO Australia Liz Westcott said in a separate statement. “This final approval provides certainty for the ongoing operation of the North West Shelf Project, so it can continue to provide reliable energy supplies as it has for more than 40 years,” Westcott said. “Over this time, the North West Shelf Project has paid more than [AUD 40 billion] in royalties and excise, supported thousands of Australian jobs and contributed well over [AUD 300 million] to communities in the Pilbara through social investment initiatives and infrastructure support”. According to Woodside, the NWS project, one of the largest liquefied natural gas (LNG) projects in the world,

Read More »

India LNG Demand Set to Fall in 2025

India’s annual liquefied natural gas demand is set to contract in 2025 for the first time in years, as buyers hold out for a surge in production that is expected to push down prices.  The world’s fourth-biggest LNG importer bought about 16 million tons of the super-chilled gas in the eight months through August, down 10 percent from a year earlier, according to ship-tracking data compiled by Bloomberg.  Purchases slowed as elevated spot prices made LNG less competitive against alternative fuels, while monsoon rains brought cooler weather and reduced power demand. The pullback offers some relief to a global gas market that’s remained tight since Russia’s 2022 invasion of Ukraine forced Europe to pivot to LNG, boosting competition with Asia. India’s imports are expected to rebound as soon as next year, helped by a looming supply glut that should drag prices lower. Projects coming online from the US to Qatar starting in 2026 are set to add volumes that will outstrip demand growth through the rest of the decade. “We expect the dip in 2025 is a temporary price-driven phenomenon,” said Kaushal Ramesh, vice president for gas & LNG research at Rystad Energy. “The years ahead will see more contracts ramp up and also lower spot prices.” Demand for gas from industries, refineries and the fertilizer sector in the South Asian nation has plunged this year, according to oil ministry data, mainly due to high prices. Asian spot LNG has traded at more than $11 per million British thermal units this year – above the level at which price-sensitive Indian companies typically step in to buy. Still, Rystad sees India’s annual LNG demand exceeding 40 million tons by 2030, compared with about 26 million tons last year. The government has promoted gas for industries and households as a way to reduce the country’s dependence

Read More »

Strategists Expect ‘Meaningful’ USA Crude Draw This Week

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, Macquarie strategists, including Walt Chancellor, revealed that they are expecting a “meaningful [U.S.] crude draw alongside product builds” this week. “We are forecasting U.S. crude inventories down 6.4 million barrels for the week ending September 12,” the strategists noted in the report. “This follows a 3.9 million barrel build in the prior week, with the crude balance again realizing looser than our expectations,” they added. “For this week’s balance, from refineries, we model another slight reduction in crude runs (-0.1 million barrels per day). Among net imports, we model a very large reduction, with exports sharply higher (+1.7 million barrels per day) and imports lower (-0.5 million barrels per day) on a nominal basis,” they continued. The Macquarie strategists warned in the report that timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod. +adj.+transfers), we look for a bounce (+0.6 million barrels per day) on a nominal basis this week,” the strategists said in the report. “Rounding out the picture, we anticipate a similar increase (+0.5 MM BBL) in SPR [Strategic Petroleum Reserve] stocks this week,” they added. The strategists went on to state in the report that, “among products”, they “look for a modest gasoline draw (-0.9 million barrels) with builds in distillate (+1.8 million barrels) and jet (+1.3 million barrels)”. “We model implied demand for these three products at ~14.5 million barrels per day for the week ending September 12,” the strategists noted. In its latest weekly petroleum status report at the time of writing, which was released on September 10 and included data for the week ending September 5, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories,

Read More »

Allseas Orders New Heavy Transport Vessel

Offshore engineering and construction company Allseas Group SA has signed a construction contract for a purpose-built semi-submersible heavy transport vessel with Guangzhou Shipyard International in China. Allseas said in a media release that the vessel is scheduled for delivery in the first quarter of 2028. The vessel, to be named Grand Tour, will have a 40,000-tonne load capacity, specifically built to transport the world’s largest offshore structures across oceans and seamlessly transfer them to Pioneering Spirit for installation. Allseas said Grand Tour is designed to fit precisely into the bow slot of Pioneering Spirit. This integration aims to simplify offshore installation, providing clients with a single solution for transporting and installing large structures fabricated away from the installation site, it said. Allseas said Grand Tour boasts a semi-submersible hull with a 57-meter (187.01 feet) beam to enhance stability and enable shallow-draft access at global yards. The vessel has an advanced ballast system capable of pumping 24,000 cubic meters (847,552 cubic feet) per hour for precise load transfers. According to Allseas, its methanol-ready 24 MW propulsion system is capable of transitioning to e-methanol, while an air lubrication system and podded propulsion will reduce drag and improve fuel efficiency. The 180 x 57-meter cargo deck is designed for direct skidding, roll-on/roll-off, and float-on/float-off operations. “Grand Tour will play a key role in Allseas’ execution of TenneT’s landmark 2 GW offshore wind program, which will deliver 28 GW of clean offshore wind power to European homes and businesses by 2032”, Allseas said. The vessel will carry converter stations from fabrication facilities in Asia and Europe to installation locations in the North Sea off the coasts of the Netherlands and Germany, where Pioneering Spirit will take over for the installation using a single lift, Allseas said. 
“This addition to our fleet is more than an expansion; it’s a

Read More »

Afreximbank, MDGIF Eye $500MM Investment in Nigerian Gas Infrastructure

Nigeria’s Midstream and Downstream Gas Infrastructure Fund (MDGIF) and the African Export-Import Bank (Afreximbank) have signed a memorandum of understanding (MoU) for a four-year debt and equity plan of up to $500 million to expand and modernize Nigeria’s natural gas infrastructure. “Afreximbank will consider providing direct financing and credit risk guarantees to support project finance transactions, working alongside local financial institutions”, Afreximbank and MDGIF said in a joint statement. “MDGIF will consider equity contributions to complement Afreximbank’s senior debt, enabling full capital structuring for eligible projects”, the statement said. The partners also eye “a structured program to enhance MDGIF’s institutional capabilities in project structuring, risk management and innovative financing”. MDGIF, established under the West African country’s Petroleum Industry Act, says its primary purpose is to make equity investments “in infrastructure related to midstream and downstream gas operations aimed at increasing the domestic consumption of natural gas in Nigeria in projects which are financed in part by private investment”. Nigerian Petroleum Resources (Gas) Minister Ekperikpe Ekpo was quoted as saying in the statement, “Through this partnership, we are unlocking the potential to mobilize up to $500 million over the next four years for Nigeria’s gas infrastructure. More importantly, we are creating a pipeline of bankable projects, supported by feasibility studies, project preparation and risk-sharing mechanisms, that will accelerate the pace of investment in pipelines, processing”. 
Kanayo Awani, executive vice president for intra-African trade and export development at Afreximbank, said, “By combining Afreximbank’s deep expertise in trade and project finance with MDGIF’s national investment reach, we are poised to unlock new opportunities for inclusive growth and sustainable development across Nigeria and, potentially, across the West Africa sub-region”. MDGIF executive director Oluwole Adama commented, “[T]his partnership with Afreximbank enables MDGIF to mobilize capital, expand critical midstream and downstream infrastructure, reduce flaring and deliver

Read More »

GreenIT Secures $434MM for Renewable Energy Projects

GreenIT SpA has signed a new project finance agreement for EUR 370 million ($434.2 million) to support its renewable energy projects. GreenIT, a joint venture between Eni SpA’s Plenitude and CDP Equity, plans to invest the funds in the development of a portfolio of greenfield projects onshore Italy. In a media release, Eni said that the construction of the projects is expected to be completed by 2028, in line with GreenIT’s industrial plan, which targets 1 gigawatt (GW) of installed renewable capacity by 2030. “The completion of this strategic transaction strengthens GreenIT’s financial structure, providing new resources to support the investments planned for the next few years by our ambitious industrial plan. The strong confidence shown by the lending institutions reinforces GreenIT’s strategic vision to play a key role in Italy’s energy transition”, Paolo Bellucci, CEO of GreenIT, said. The European Investment Bank has committed $258 million, including $211 million in direct loans and $46.9 million through financial intermediaries. The remainder was sourced from prominent European financial institutions, such as BNP Paribas, Credit Agricole Corporate & Investment Bank, ING Bank NV, and Societe Generale. GreenIT enhances the offerings of Plenitude. Plenitude operates in more than 15 countries worldwide, utilizing a business model that combines electricity generation from renewable sources, with over 4 GW of installed capacity, alongside providing energy and energy solutions to more than 10 million customers across Europe, according to Eni. Plenitude also has an extensive network of 21,500 electric vehicle charging stations, according to Eni. To contact the author, email [email protected] WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed. MORE FROM THIS AUTHOR

Read More »

Arista touts liquid cooling, optical tech to reduce power consumption for AI networking

Both technologies will likely find a role in future AI and optical networks, experts say, as both promise to reduce power consumption and support improved bandwidth density. Both have advantages and disadvantages as well – CPOs are more complex to deploy given the amount of technology included in a CPO package, whereas LPOs promise more simplicity.  Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, Bechtolsheim added. At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale.  “We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.” “We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added. “So what we call the ‘purpose-built AI data center fabric’ around Ethernet

Read More »

Network and cloud implications of agentic AI

The chain analogy is critical here. Realistic uses of AI agents will require core database access; what can possibly make an AI business case that isn’t tied to a company’s critical data? The four critical elements of these applications—the agent, the MCP server, the tools, and the data— are all dragged along with each other, and traffic on the network is the linkage in the chain. How much traffic is generated? Here, enterprises had another surprise. Enterprises told me that their initial view of their AI hosting was an “AI cluster” with a casual data link to their main data center network. With AI agents, they now see smaller AI servers actually installed within their primary data centers, and all the traffic AI creates, within the model and to and from it, now flows on the data center network. Vendors who told enterprises that AI networking would have a profound impact are proving correct. You can run a query or perform a task with an agent and have that task parse an entire database of thousands or millions of records. Someone not aware of what an agent application implies in terms of data usage can easily create as much traffic as a whole week’s normal access-and-update would create. Enough, they say, to impact network capacity and the QoE of other applications. And, enterprises remind us, if that traffic crosses in/out of the cloud, the cloud costs could skyrocket. About a third of the enterprises said that issues with AI agents generated enough traffic to create local congestion on the network or a blip in cloud costs large enough to trigger a financial review. MCP tool use by agents is also a major security and governance headache. Enterprises point out that MCP standards haven’t always required strong authentication, and they also

Read More »

There are 121 AI processor companies. How many will succeed?

The US currently leads in AI hardware and software, but China’s DeepSeek and Huawei continue to push advanced chips, India has announced an indigenous GPU program targeting production by 2029, and policy shifts in Washington are reshaping the playing field. In Q2, the rollback of export restrictions allowed US companies like Nvidia and AMD to strike multibillion-dollar deals in Saudi Arabia.  JPR categorizes vendors into five segments: IoT (ultra-low-power inference in microcontrollers or small SoCs); Edge (on-device or near-device inference in 1–100W range, used outside data centers); Automotive (distinct enough to break out from Edge); data center training; and data center inference. There is some overlap between segments as many vendors play in multiple segments. Of the five categories, inference has the most startups with 90. Peddie says the inference application list is “humongous,” with everything from wearable health monitors to smart vehicle sensor arrays, to personal items in the home, and every imaginable machine in every imaginable manufacturing and production line, plus robotic box movers and surgeons.  Inference also offers the most versatility. “Smart devices” in the past, like washing machines or coffee makers, could do basically one thing and couldn’t adapt to any changes. “Inference-based systems will be able to duck and weave, adjust in real time, and find alternative solutions, quickly,” said Peddie. Peddie said despite his apparent cynicism, this is an exciting time. “There are really novel ideas being tried like analog neuron processors, and in-memory processors,” he said.

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for Data Center Candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist (and coming soon free Data Center Intern listing). Data Center Critical Facility Manager Impact, TX There position is also available in: Cheyenne, WY; Ashburn, VA or Manassas, VA. This opportunity is working directly with a leading mission-critical data center developer / wholesaler / colo provider. This firm provides data center solutions custom-fit to the requirements of their client’s mission-critical operational facilities. They provide reliability of mission-critical facilities for many of the world’s largest organizations (enterprise and hyperscale customers). This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits. Electrical Commissioning Engineer New Albany, OH This traveling position is also available in: Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; Cedar Rapids, IA; Phoenix, AZ; Dallas, TX or Chicago, IL. *** ALSO looking for a LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company that has a national footprint and specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space. They have a mindset to provide reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies. 
This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits.  Data Center Engineering Design ManagerAshburn, VA This opportunity is working directly with a leading mission-critical data center developer /

Read More »

Modernizing Legacy Data Centers for the AI Revolution with Schneider Electric’s Steven Carlini

As artificial intelligence workloads drive unprecedented compute density, the U.S. data center industry faces a formidable challenge: modernizing aging facilities that were never designed to support today’s high-density AI servers. In a recent Data Center Frontier podcast, Steven Carlini, Vice President of Innovation and Data Centers at Schneider Electric, shared his insights on how operators are confronting these transformative pressures. “Many of these data centers were built with the expectation they would go through three, four, five IT refresh cycles,” Carlini explains. “Back then, growth in rack density was moderate. Facilities were designed for 10, 12 kilowatts per rack. Now with systems like Nvidia’s Blackwell, we’re seeing 132 kilowatts per rack, and each rack can weigh 5,000 pounds.” The implications are seismic. Legacy racks, floor layouts, power distribution systems, and cooling infrastructure were simply not engineered for such extreme densities. “With densification, a lot of the power distribution, cooling systems, even the rack systems — the new servers don’t fit in those racks. You need more room behind the racks for power and cooling. Almost everything needs to be changed,” Carlini notes. For operators, the first questions are inevitably about power availability. At 132 kilowatts per rack, even a single cluster can challenge the limits of older infrastructure. Many facilities are conducting rigorous evaluations to decide whether retrofitting is feasible or whether building new sites is the more practical solution. Carlini adds, “You may have transformers spaced every hundred yards, twenty of them. Now, one larger transformer can replace that footprint, and power distribution units feed busways that supply each accelerated compute rack. The scale and complexity are unlike anything we’ve seen before.” Safety considerations also intensify with these densifications. 
“At 132 kilowatts, maintenance is still feasible,” Carlini says, “but as voltages rise, data centers are moving toward environments where

Read More »

Google Backs Advanced Nuclear at TVA’s Clinch River as ORNL Pushes Quantum Frontiers

Inside the Hermes Reactor Design

Kairos Power’s Hermes reactor is based on its KP-FHR architecture — short for fluoride salt–cooled, high-temperature reactor. Unlike conventional water-cooled reactors, Hermes uses a molten salt mixture called FLiBe (lithium fluoride and beryllium fluoride) as a coolant. Because FLiBe operates at atmospheric pressure, the design eliminates the risk of high-pressure ruptures and allows for inherently safer operation.

Fuel for Hermes comes in the form of TRISO particles rather than traditional enriched uranium fuel rods. Each TRISO particle is encapsulated within ceramic layers that function like miniature containment vessels. These particles can withstand temperatures above 1,600 °C — far beyond the reactor’s normal operating range of about 700 °C. In combination with the salt coolant, Hermes achieves outlet temperatures between 650 and 750 °C, enabling efficient power generation and potential industrial applications such as hydrogen production.

Because the salt coolant is chemically stable and requires no pressurization, the reactor can shut down and dissipate heat passively, without external power or operator intervention. This passive safety profile differentiates Hermes from traditional light-water reactors and reflects the Generation IV industry focus on safer, modular designs.

From Hermes-1 to Hermes-2: Iterative Nuclear Development

The first step in Kairos’ roadmap is Hermes-1, a 35 MW thermal demonstration reactor now under construction at TVA’s Clinch River site under a 2023 NRC license. Hermes-1 is not designed to generate electricity but will validate reactor physics, fuel handling, licensing strategies, and construction techniques. Building on that experience, Hermes-2 will be a 50 MW electric reactor connected to TVA’s grid, with operations targeted for 2030. Under the agreement, TVA will purchase electricity from Hermes-2 and supply it to Google’s data centers in Tennessee and Alabama.

Kairos describes its development philosophy as “iterative,” scaling incrementally rather than attempting to deploy large fleets of units at once. By
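The quoted outlet temperatures matter because they bound how efficiently heat can be turned into electricity. As a rough illustration only — the 25 °C heat-sink temperature is an assumption, not a figure from the article, and real plants fall well below the theoretical ceiling — the Carnot limit for those outlet temperatures can be computed directly:

```python
# Illustrative upper-bound (Carnot) efficiency for the 650-750 C outlet
# temperatures quoted for Hermes. The 25 C heat-sink temperature is an
# assumption for the sketch; actual plant efficiency is substantially lower.

def carnot_efficiency(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Theoretical maximum heat-engine efficiency: 1 - T_cold/T_hot (in kelvin)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

for outlet_c in (650, 700, 750):
    print(f"{outlet_c} C outlet -> Carnot limit {carnot_efficiency(outlet_c):.1%}")
```

Higher outlet temperature raises the ceiling, which is one reason salt-cooled designs target hotter operation than light-water reactors.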

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
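To put the cited figures in context, the arithmetic below computes the growth rates implied by the numbers in the article (all figures in billions of dollars, taken directly from the text):

```python
# Quick arithmetic on the capex figures cited above (billions of USD).
combined_2023 = 110    # Bloomberg Intelligence estimate, six hyperscalers, 2023
combined_2025 = 200    # Bloomberg Intelligence estimate for 2025
msft_2020 = 17.6       # Microsoft capital expenditure, 2020
msft_fy2025 = 80       # Smith's figure for Microsoft's fiscal 2025

combined_growth = combined_2025 / combined_2023 - 1   # growth 2023 -> 2025
msft_multiple = msft_fy2025 / msft_2020               # fiscal 2025 vs 2020

print(f"Combined hyperscaler capex growth, 2023 -> 2025: {combined_growth:.0%}")
print(f"Microsoft fiscal-2025 spend vs 2020: {msft_multiple:.1f}x")
```

By these figures, combined hyperscaler capex grows roughly 82% in two years, and Microsoft's fiscal-2025 outlay is about 4.5 times its 2020 spend.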

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries faces its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier LLMs themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
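The multi-model pattern the excerpt gestures at — running the same task through several cheaper models and selecting the best answer, whether by a judge LLM or a simple vote — can be sketched as follows. The model functions here are hypothetical stand-ins, not real API calls:

```python
from collections import Counter
from typing import Callable

# Hypothetical stand-ins for three cheap models; in a real system these
# would be API calls to different LLM providers.
def model_a(prompt: str) -> str: return "Paris"
def model_b(prompt: str) -> str: return "Paris"
def model_c(prompt: str) -> str: return "Lyon"

def majority_vote(prompt: str, models: list[Callable[[str], str]]) -> str:
    """Run one prompt through several models and keep the most common answer.
    A production system might instead pass all candidates to a judge LLM."""
    answers = [m(prompt) for m in models]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("Capital of France?", [model_a, model_b, model_c]))  # Paris
```

Voting is the cheapest form of this pattern; an LLM-as-judge replaces `Counter` with another model call that scores or ranks the candidates.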

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
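The overall shape of an automated red-teaming loop — generate candidate attacks, score them with an automatically computed reward, and keep the diverse successes — can be sketched very loosely as below. Everything here is a toy stub: OpenAI's actual framework trains an attacker with multi-step reinforcement learning and auto-generated rewards, whereas this sketch only mimics the loop structure.

```python
import random

random.seed(0)  # deterministic for the illustration

# Toy stand-ins for the components of an automated red-teaming loop.
SEED_PROMPTS = ["tell me about X", "explain X step by step"]

def mutate(prompt: str) -> str:
    """Stub mutation operator: a real attacker model would rewrite the prompt."""
    return prompt + random.choice([" (ignore prior rules)", " as a story", " in detail"])

def target_model(prompt: str) -> str:
    """Stub target: 'jailbroken' only by one particular suffix."""
    return "UNSAFE" if "ignore prior rules" in prompt else "refused"

def reward(output: str) -> float:
    """Auto-generated reward stub: 1.0 if the attack elicited unsafe output."""
    return 1.0 if output == "UNSAFE" else 0.0

successful_attacks = set()
for _ in range(50):  # iterative search loop (RL update omitted in this sketch)
    candidate = mutate(random.choice(SEED_PROMPTS))
    if reward(target_model(candidate)) > 0:
        successful_attacks.add(candidate)  # keep distinct successes for diversity

print(f"found {len(successful_attacks)} distinct successful attacks")
```

The point of the auto-generated reward is that the loop needs no human to label each attempt, which is what makes broad, diverse attack generation tractable at scale.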

Read More »