Meta proposes new scalable memory layers that improve knowledge, reduce hallucinations

As enterprises continue to adopt large language models (LLMs) in various applications, one of the key challenges they face is improving the factual knowledge of models and reducing hallucinations. In a new paper, researchers at Meta AI propose “scalable memory layers,” which could be one of several possible solutions to this problem.

Scalable memory layers add more parameters to LLMs to increase their learning capacity without requiring additional compute resources. The architecture is useful for applications where you can spare extra memory for factual knowledge but also want the inference speed of nimbler models.

Dense and memory layers

Traditional language models use “dense layers” to encode vast amounts of information in their parameters. In dense layers, all parameters are used at their full capacity and are mostly activated at the same time during inference. Dense layers can learn complex functions, but increasing their size requires additional computational and energy resources.
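To make the contrast concrete, here is a minimal PyTorch sketch (the layer sizes are arbitrary, chosen only for illustration): in a dense layer, every weight participates in every forward pass.

```python
import torch
import torch.nn as nn

dense = nn.Linear(1024, 4096)  # a dense feed-forward projection
x = torch.randn(8, 1024)       # a batch of 8 token representations
y = dense(x)                   # all ~4.2M weights participate for every input
print(sum(p.numel() for p in dense.parameters()))  # 4096*1024 + 4096 = 4198400
```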

In contrast, simple factual knowledge can be handled by much simpler layers with associative memory architectures, which are more efficient and interpretable. This is what memory layers do: they use simple sparse activations and key-value lookup mechanisms to encode and retrieve knowledge. Memory layers take up more memory than dense layers but use only a small portion of their parameters at once, which makes them much more compute-efficient.
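A minimal sketch of that lookup mechanism in PyTorch, assuming a trainable table of keys and values with top-k selection (the class name, sizes and scoring here are illustrative, not Meta’s implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMemoryLayer(nn.Module):
    """Key-value memory: each query activates only topk of num_keys slots,
    so most parameters sit idle on any given forward pass."""
    def __init__(self, dim, num_keys=65536, topk=8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * 0.02)    # lookup keys
        self.values = nn.Parameter(torch.randn(num_keys, dim) * 0.02)  # stored knowledge
        self.topk = topk

    def forward(self, x):                        # x: (batch, dim)
        scores = x @ self.keys.t()               # similarity to every key
        w, idx = scores.topk(self.topk, dim=-1)  # sparse activation: top-k slots only
        w = F.softmax(w, dim=-1)                 # normalize the selected scores
        return (w.unsqueeze(-1) * self.values[idx]).sum(dim=1)  # weighted sum of values
```

With 65,536 slots and top-8 lookups, the layer holds a large parameter budget but touches roughly 0.01% of it per query, which is exactly the memory-for-compute trade described above.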

Memory layers have existed for several years but are rarely used in modern deep learning architectures, largely because they are not optimized for current hardware accelerators.

Current frontier LLMs usually use some form of “mixture of experts” (MoE) architecture, which relies on a mechanism loosely similar to memory layers. MoE models are composed of many smaller expert components that specialize in specific tasks. At inference time, a routing mechanism determines which experts are activated based on the input sequence. PEER, an architecture recently developed by Google DeepMind, extends MoE to millions of experts, providing more granular control over which parameters become activated during inference.
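For comparison, a toy top-1 MoE router might look like the sketch below; this is a hedged illustration of the general routing idea, not PEER or any production architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy mixture-of-experts: a learned router picks one expert per input."""
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (batch, dim)
        probs = F.softmax(self.router(x), dim=-1)
        best = probs.argmax(dim=-1)                # top-1 routing: one expert per input
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = best == i                       # inputs routed to expert i
            if mask.any():
                out[mask] = probs[mask, i].unsqueeze(-1) * expert(x[mask])
        return out
```

PEER pushes this idea to millions of tiny experts, which brings the granularity of routing close to the per-slot lookups of a memory layer.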

Upgrading memory layers

Memory layers are light on compute but heavy on memory, which presents specific challenges for current hardware and software frameworks. In their paper, the Meta researchers propose several modifications that solve these challenges and make it possible to use them at scale.

Memory layers can store knowledge in parallel across several GPUs without slowing down the model (source: arXiv)

First, the researchers configured the memory layers for parallelization, distributing them across several GPUs to store millions of key-value pairs without changing other layers in the model. They also implemented a special CUDA kernel to handle high memory-bandwidth operations. And they developed a parameter-sharing mechanism that supports a single set of memory parameters across multiple memory layers within a model. This means that the keys and values used for lookups are shared across layers.
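The parameter-sharing idea can be sketched as a single pool of keys and values that every memory layer references; the class names and sizes below are hypothetical, meant only to show that adding layers adds no new key-value parameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMemory(nn.Module):
    """One pool of keys/values, owned once for the whole model."""
    def __init__(self, dim, num_keys=4096):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_keys, dim) * 0.02)

class MemoryLookup(nn.Module):
    """A memory layer that queries the shared pool instead of owning its own."""
    def __init__(self, shared, topk=8):
        super().__init__()
        self.shared = shared                     # a reference, not a copy
        self.topk = topk

    def forward(self, x):                        # x: (batch, dim)
        w, idx = (x @ self.shared.keys.t()).topk(self.topk, dim=-1)
        w = F.softmax(w, dim=-1)
        return (w.unsqueeze(-1) * self.shared.values[idx]).sum(dim=1)

pool = SharedMemory(dim=64)
layers = nn.ModuleList(MemoryLookup(pool) for _ in range(3))  # 3 layers, 1 parameter set
```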

These modifications make it possible to implement memory layers within LLMs without slowing down the model.

“Memory layers with their sparse activations nicely complement dense networks, providing increased capacity for knowledge acquisition while being light on compute,” the researchers write. “They can be efficiently scaled, and provide practitioners with an attractive new direction to trade-off memory with compute.”

To test memory layers, the researchers modified Llama models by replacing one or more dense layers with a shared memory layer. They compared the memory-enhanced models against the dense LLMs as well as MoE and PEER models on several tasks, including factual question answering, scientific and common-sense world knowledge and coding.

A 1.3B memory model (solid line) trained on 1 trillion tokens approaches the performance of a 7B model (dashed line) on factual question-answering tasks as it is given more memory parameters (source: arXiv)

Their findings show that memory models improve significantly over dense baselines and compete with models that use 2X to 4X more compute. They also match the performance of MoE models that have the same compute budget and parameter count. The memory models’ gains are especially notable on tasks that require factual knowledge. For example, on factual question answering, a memory model with 1.3 billion parameters approaches the performance of Llama-2-7B, which was trained on twice as many tokens and with 10X more compute.

Moreover, the researchers found that the benefits of memory models remain consistent across model sizes as they scaled their experiments from 134 million to 8 billion parameters.

“Given these findings, we strongly advocate that memory layers should be integrated into all next generation AI architectures,” the researchers write, while adding that there is still a lot more room for improvement. “In particular, we hope that new learning methods can be developed to push the effectiveness of these layers even further, enabling less forgetting, fewer hallucinations and continual learning.”
