Sam Altman at TED 2025: Inside the most uncomfortable — and important — AI interview of the year

OpenAI CEO Sam Altman revealed that his company has grown to 800 million weekly active users and is experiencing “unbelievable” growth rates during a sometimes tense interview at the TED 2025 conference in Vancouver last week.

“I have never seen growth in any company, one that I’ve been involved with or not, like this,” Altman told TED head Chris Anderson during their on-stage conversation. “The growth of ChatGPT — it is really fun. I feel deeply honored. But it is crazy to live through, and our teams are exhausted and stressed.”

The interview, which closed out the final day of TED 2025: Humanity Reimagined, showcased not just OpenAI’s skyrocketing success but also the increasing scrutiny the company faces as its technology transforms society at a pace that alarms even some of its supporters.

‘Our GPUs are melting’: OpenAI struggles to scale amid unprecedented demand

Altman painted a picture of a company struggling to keep up with its own success, noting that OpenAI’s GPUs are “melting” due to the popularity of its new image generation features. “All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained,” he said.

This exponential growth comes as OpenAI is reportedly considering launching its own social network to compete with Elon Musk’s X, according to CNBC. Altman neither confirmed nor denied these reports during the TED interview.

The company recently closed a $40 billion funding round, valuing it at $300 billion — the largest private tech funding in history — and this influx of capital will likely help address some of these infrastructure challenges.

From non-profit to $300 billion giant: Altman responds to ‘Ring of Power’ accusations

Throughout the 47-minute conversation, Anderson repeatedly pressed Altman on OpenAI’s transformation from a non-profit research lab to a for-profit company with a $300 billion valuation. Anderson voiced concerns shared by critics, including Elon Musk, who has suggested Altman has been “corrupted by the Ring of Power,” referencing “The Lord of the Rings.”

Altman defended OpenAI’s path: “Our goal is to make AGI and distribute it, make it safe for the broad benefit of humanity. I think by all accounts, we have done a lot in that direction. Clearly, our tactics have shifted over time… We didn’t think we would have to build a company around this. We learned a lot about how it goes and the realities of what these systems were going to take from capital.”

When asked how he personally handles the enormous power he now wields, Altman responded: “Shockingly, the same as before. I think you can get used to anything step by step… You’re the same person. I’m sure I’m not in all sorts of ways, but I don’t feel any different.”

‘Divvying up revenue’: OpenAI plans to pay artists whose styles are used by AI

One of the most concrete policy announcements from the interview was Altman’s acknowledgment that OpenAI is working on a system to compensate artists whose styles are emulated by AI.

“I think there are incredible new business models that we and others are excited to explore,” Altman said when pressed about apparent IP theft in AI-generated images. “If you say, ‘I want to generate art in the style of these seven people, all of whom have consented to that,’ how do you divvy up how much money goes to each one?”

Currently, OpenAI’s image generator refuses requests to mimic the style of living artists without consent, but will generate art in the style of movements, genres, or studios. Altman suggested a revenue-sharing model could be forthcoming, though details remain scarce.
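To make the “divvying up” arithmetic concrete, here is a minimal sketch of what such a split could look like, assuming consenting artists are paid in proportion to hypothetical attribution weights. The artist names, weights, and fee are illustrative assumptions; OpenAI has not described an actual mechanism.

```python
# Hypothetical sketch only: OpenAI has not published how such a split would work.
# The artist names, weights, and fee below are illustrative assumptions.

def split_revenue(total_revenue: float, attributions: dict[str, float]) -> dict[str, float]:
    """Divide revenue among consenting artists in proportion to attribution weight."""
    total_weight = sum(attributions.values())
    if total_weight == 0:
        return {artist: 0.0 for artist in attributions}
    return {
        artist: round(total_revenue * weight / total_weight, 2)
        for artist, weight in attributions.items()
    }

# A $100 generation fee split across three consenting artists, weighted by how
# strongly each style influenced the output.
print(split_revenue(100.0, {"artist_a": 0.5, "artist_b": 0.3, "artist_c": 0.2}))
# {'artist_a': 50.0, 'artist_b': 30.0, 'artist_c': 20.0}
```

The open questions Altman gestured at — how attribution weights would be measured, and who sets them — are exactly the details the company has yet to specify.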

Autonomous AI agents: The ‘most consequential safety challenge’ OpenAI has faced

The conversation grew particularly tense when discussing “agentic AI” — autonomous systems that can take actions on the internet on a user’s behalf. OpenAI’s new “Operator” tool allows AI to perform tasks like booking restaurants, raising concerns about safety and accountability.

Anderson challenged Altman: “A single person could let that agent out there, and the agent could decide, ‘Well, in order to execute on that function, I got to copy myself everywhere.’ Are there red lines that you have clearly drawn internally, where you know what the danger moments are?”

Altman referenced OpenAI’s “preparedness framework” but provided few specifics about how the company would prevent misuse of autonomous agents.

“AI that you give access to your systems, your information, the ability to click around on your computer… when they make a mistake, it’s much higher stakes,” Altman acknowledged. “You will not use our agents if you do not trust that they’re not going to empty your bank account or delete your data.”
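One common pattern for containing that risk is a human-in-the-loop gate on high-stakes actions. The sketch below is purely illustrative and does not reflect OpenAI’s Operator or its preparedness framework; the action names and policy are invented to make the “empty your bank account” concern concrete.

```python
# Hypothetical illustration of a human-in-the-loop guardrail for an autonomous agent.
# This does not reflect OpenAI's Operator or preparedness framework; action names
# and the approval policy are invented for illustration.

HIGH_STAKES_ACTIONS = {"transfer_funds", "delete_data", "send_email"}

def execute(action: str, params: dict, confirm) -> str:
    """Run low-risk actions directly; hold high-stakes actions for explicit approval."""
    if action in HIGH_STAKES_ACTIONS and not confirm(action, params):
        return f"blocked: {action} requires user approval"
    return f"executed: {action} with {params}"

# Booking a table proceeds unattended; a funds transfer is held for the user.
deny_all = lambda action, params: False
print(execute("book_restaurant", {"party_size": 2}, deny_all))
print(execute("transfer_funds", {"amount": 5000}, deny_all))
```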

’14 definitions from 10 researchers’: Inside OpenAI’s struggle to define AGI

In a revealing moment, Altman admitted that even within OpenAI, there’s no consensus on what constitutes artificial general intelligence (AGI) — the company’s stated goal.

“It’s like the joke, if you’ve got 10 OpenAI researchers in a room and asked to define AGI, you’d get 14 definitions,” Altman said.

He suggested that rather than focusing on a specific moment when AGI arrives, we should recognize that “the models are just going to get smarter and more capable and smarter and more capable on this long exponential… We’re going to have to contend and get wonderful benefits from this incredible system.”

Loosening the guardrails: OpenAI’s new approach to content moderation

Altman also disclosed a significant policy change regarding content moderation, revealing that OpenAI has loosened restrictions on its image generation models.

“We’ve given the users much more freedom on what we would traditionally think about as speech harms,” he explained. “I think part of model alignment is following what the user of a model wants it to do within the very broad bounds of what society decides.”

This shift could signal a broader move toward giving users more control over AI outputs, potentially aligning with Altman’s expressed preference for letting the hundreds of millions of users — rather than “small elite summits” — determine appropriate guardrails.

“One of the cool new things about AI is our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are blessed by society to sit in a room and make these decisions,” Altman said.
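As a toy sketch of what “learning the collective value preference” might mean in practice, the example below aggregates user votes on whether a behavior should be allowed. It is an assumption-laden illustration, not a description of OpenAI’s alignment process.

```python
from collections import Counter

# Toy sketch of aggregating a "collective value preference": tally user votes on
# whether a behavior should be allowed. Purely illustrative, not OpenAI's method.

def collective_policy(votes: list[str], threshold: float = 0.5) -> str:
    """Allow a behavior when the share of 'allow' votes exceeds the threshold."""
    if not votes:
        return "restrict"
    allow_share = Counter(votes)["allow"] / len(votes)
    return "allow" if allow_share > threshold else "restrict"

print(collective_policy(["allow", "allow", "restrict", "allow"]))  # allow
```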

‘My kid will never be smarter than AI’: Altman’s vision of an AI-powered future

The interview concluded with Altman reflecting on the world his newborn son will inherit — one where AI will exceed human intelligence.

“My kid will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable,” he said. “It’ll be a world of incredible material abundance… where the rate of change is incredibly fast and amazing new things are happening.”

Anderson closed with a sobering observation: “Over the next few years, you’re going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history.”

The billion-user balancing act: How OpenAI navigates power, profit, and purpose

Altman’s TED appearance comes at a critical juncture for OpenAI and the broader AI industry. The company faces mounting legal challenges, including copyright lawsuits from authors and publishers, while simultaneously pushing the boundaries of what AI can do.

Recent advancements like ChatGPT’s viral image generation feature and video generation tool Sora have demonstrated capabilities that seemed impossible just months ago. At the same time, these tools have sparked debates about copyright, authenticity, and the future of creative work.

Altman’s willingness to engage with difficult questions about safety, ethics, and the societal impact of AI shows an awareness of the stakes involved. However, critics may note that concrete answers on specific safeguards and policies remained elusive throughout the conversation.

The interview also revealed the competing tensions at the heart of OpenAI’s mission: moving fast to advance AI technology while ensuring safety; balancing profit motives with societal benefit; respecting creative rights while democratizing creative tools; and navigating between elite expertise and public preference.

As Anderson noted in his final comment, the decisions Altman and his peers make in the coming years may have unprecedented impacts on humanity’s future. Whether OpenAI can live up to its stated mission of ensuring “all of humanity benefits from artificial general intelligence” remains to be seen.
