
ChatGPT’s memory can now reference all past conversations, not just what you tell it to

OpenAI is gradually rolling out improved memory in ChatGPT, making it the default for the assistant to reference past conversations. The change has raised fears that the platform is proactively “listening” to users, leaving some uncomfortable with how much it knows about them.

ChatGPT already logs information from previous interactions through its Memory feature, ensuring preferences are saved and conversations can seamlessly continue from where the user left off. 

The new update allows ChatGPT to “draw on past conversations to deliver more relevant and useful responses” and works across all modalities on the platform. With the improvement, future conversations, not just the current chat window, can reference previous chats. The feature is initially available only to ChatGPT Plus and Pro users; ChatGPT Enterprise, Team and Edu customers will get access later.

OpenAI first added Memory to ChatGPT in February last year to make conversations more helpful. Memory has since become a common feature across chat platforms and large language models (LLMs): Gemini 2.0 Flash Thinking added it, while frameworks like A-Mem improve long-context memory for more complicated tasks.

Proactive memory

The improvements let ChatGPT “naturally build” on earlier chats, and OpenAI said that over time interactions on ChatGPT will become more tailored to the user.

OpenAI offers two ways to control Memory through settings. The first is Reference Saved Memories, where the user can direct ChatGPT to remember facts like names or preferences. The company said people usually add this information by explicitly telling ChatGPT to remember something. The model will figure out which information will be helpful in future conversations. 

The second control is Reference Chat History. This setting permits ChatGPT to draw context from previous discussions and “adapt to your tone, goals, interests, or other recurring topics.” However, the context will not be stored or shown in the settings page like saved memories are. 

“You can choose to have both settings on or off, or just turn on reference saved memories,” OpenAI said. “The settings are flexible, and you can change them anytime, including managing specific saved memories. If you opt out, ChatGPT won’t draw on past conversations. You can also ask what it remembers or switch to Temporary Chat for memory‑free sessions.”
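To make the two controls concrete, here is a minimal, hypothetical Python sketch of how a chat application could honor them when assembling context for a single turn. The names (MemorySettings, Assistant, build_context) and the recency window are illustrative assumptions for this sketch only, not OpenAI’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MemorySettings:
    # Both toggles can be on, off, or only saved memories on,
    # mirroring the settings described above (names are assumptions).
    reference_saved_memories: bool = True   # explicit facts the user asked to remember
    reference_chat_history: bool = True     # implicit context drawn from prior chats

@dataclass
class Assistant:
    settings: MemorySettings = field(default_factory=MemorySettings)
    saved_memories: list[str] = field(default_factory=list)   # visible and manageable, like saved memories
    chat_history: list[str] = field(default_factory=list)     # prior conversations, not shown as discrete entries

    def remember(self, fact: str) -> None:
        """User explicitly tells the assistant to remember something."""
        self.saved_memories.append(fact)

    def build_context(self, prompt: str, temporary: bool = False) -> list[str]:
        """Assemble context for one turn, honoring the memory settings."""
        if temporary:  # a "Temporary Chat"-style, memory-free session
            return [prompt]
        context: list[str] = []
        if self.settings.reference_saved_memories:
            context += self.saved_memories
        if self.settings.reference_chat_history:
            context += self.chat_history[-20:]  # arbitrary recency window for illustration
        context.append(prompt)
        return context

# Usage sketch: a saved preference and a past chat both flow into the next turn
bot = Assistant()
bot.remember("User prefers concise answers")
bot.chat_history.append("Asked about Kubernetes networking last week")
print(bot.build_context("Summarize my options"))
```

The point of the sketch is the separation of concerns: saved memories are explicit, user-visible facts, while chat-history context is drawn implicitly and can be switched off independently or bypassed entirely with a temporary session.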

Concerns from some users

Remembering conversations and carrying details forward not only makes it easier to pick up a chat where it left off; for enterprise tasks, having access to preferences and context ideally makes AI models more useful.

AI investor Allie K. Miller said in a post on X that with this update, ChatGPT is “listening all the time. It’s cutting across all of your conversations, whether you have explicitly asked it to remember something or not.”

“As I mentioned a few weeks ago, memory is the best feature inside these platforms. As models and features get commoditized, it’s going to come down to personalization, collaboration and network effects. Memory is the key. Memory is the moat,” Miller said. 

However, after OpenAI announced the Memory update, some users expressed concern that it might change how the model interacts with them.

Prominent AI commentator and Wharton professor Ethan Mollick noted that it’s not a feature he plans to turn on.

“I totally get why AI long-term memory is useful and, based on my testing, think many people will love it… but I actually don’t want my LLMs I use for work to chime in with personal details or subtly change its answers as a result of my past interactions. Boundaries are good,” Mollick said. 

OpenAI cofounder Andrej Karpathy worried that ChatGPT might “think worse of me based on that noob bash question I asked 7 months ago.”

Memory on ChatGPT will be helpful, but it will be up to the user to determine how much they want the chat platform to know about them and how crucial past information will be for future conversations. 
