
Our commitment to community safety


Mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world. These incidents are a reminder of how real the threat of violence is—and how quickly violent intent can move from words to action. 

People may also bring these moments and feelings into ChatGPT. They may ask questions about the news, try to understand what happened, express fear or anger, or talk about violence in ways that are fictional, historical, political, personal, or potentially dangerous. We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.

We’re sharing what we do to minimize uses of our services in furtherance of violence or other harm: how our models are trained to respond safely, how our systems detect potential risk of harm, and what actions we take when someone violates our policies. We are constantly improving the steps we take to help protect people and communities, guided by input from psychologists, psychiatrists, civil liberties and law enforcement experts, and others who help us navigate difficult decisions around safety, privacy, and democratized access.

How we mitigate risks of harm in ChatGPT.

Our Model Spec lays out our long-standing principles for how we want our models to behave: maximizing helpfulness and user freedom while minimizing the risk of harm through sensible defaults.

We work to train our models to refuse requests for instructions, tactics, or planning that could meaningfully enable violence. At the same time, people may ask neutral questions about violence for factual, historical, educational, or preventive reasons, and we aim to allow those discussions while maintaining clear safety boundaries—for example, by omitting detailed, operational instructions that could facilitate harm. The line between benign and harmful uses can be subtle, so we continually refine our approach and work with experts to help distinguish between safe, bounded responses and actionable steps for carrying out violence or other real-world harm.

As part of this ongoing work, we’ve continued expanding our safeguards to help ChatGPT better recognize subtle signs of risk of harm across different contexts. Some safety risks only become clear over time: a single message may seem harmless on its own, but a broader pattern within a long conversation—or across conversations—can suggest something more concerning. Building on years of work in model training, evaluations and red teaming, and ongoing expert input, we have strengthened how ChatGPT recognizes subtle warning signs across long, high-stakes conversations and carefully responds. We’ll share more about this work in the coming weeks.
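
To illustrate the idea of risk that only emerges over time, here is a minimal, hypothetical sketch (not OpenAI's actual system): each message carries a per-message risk score from some upstream classifier, no single score is alarming, but a rolling aggregate over recent messages can still cross a review threshold. The scores, window size, and threshold below are invented for illustration.

```python
from collections import deque
from typing import Iterable, Iterator

def rolling_risk_flags(scores: Iterable[float], window: int = 5,
                       threshold: float = 1.5) -> Iterator[int]:
    """Yield message indices where accumulated risk over the last `window`
    messages crosses `threshold`, even though no single message does.
    Scores are assumed to come from an upstream per-message classifier."""
    recent: deque = deque(maxlen=window)
    for i, score in enumerate(scores):
        recent.append(score)
        if score < 1.0 and sum(recent) >= threshold:
            yield i  # individually mild messages, concerning in aggregate

# Each message alone looks mild (all < 1.0), but the pattern accumulates.
print(list(rolling_risk_flags([0.2, 0.3, 0.4, 0.5, 0.6])))  # -> [4]
```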

Our safety work also extends to situations where users may be in distress or at risk of self-harm. In these moments, our goal is to avoid facilitating harmful acts, and also to help de-escalate the situation and guide people to real-world support. ChatGPT surfaces localized crisis resources, encourages people to reach out to mental health professionals or trusted loved ones, and in the most serious cases directs people to seek emergency help. 

How we monitor and enforce our rules.

We assume the best of our users, but when we detect that someone may be attempting to use our tools to plan or carry out violence, we take action, including revoking access to OpenAI’s services. Our Usage Policies set clear expectations for acceptable use and make clear that we prohibit use for threats, intimidation, harassment, terrorism or violence, weapons development, illicit activity, destruction of property or systems, and attempts to circumvent our safeguards. We take those policies seriously and work hard to enforce them. 

We use automated detection systems to identify potentially concerning activity at scale. These systems analyze user content and behavior using a range of tools designed to identify signals that may indicate policy violations or harmful activity, including classifiers, reasoning models, hash-matching technologies, blocklists, and other monitoring systems.
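
As a purely illustrative sketch of how such layered signals can be combined (the post does not disclose OpenAI's actual detectors, and every hash, term, and threshold below is a hypothetical placeholder), each detector votes independently and any hit routes the content to human review:

```python
import hashlib

# Hypothetical placeholders; production systems use trained classifiers,
# curated hash sets of known-harmful content, and maintained blocklists.
KNOWN_HARMFUL_SHA256 = {"0" * 64}            # placeholder digest
BLOCKLIST_TERMS = {"example banned phrase"}  # placeholder term

def classifier_score(text: str) -> float:
    """Stand-in for a trained risk classifier returning a score in [0, 1]."""
    risk_markers = ("attack plan", "acquire a weapon")
    return min(1.0, 0.5 * sum(marker in text.lower() for marker in risk_markers))

def detect_signals(text: str) -> list:
    """Run independent detectors; any signal flags the content for review."""
    signals = []
    if hashlib.sha256(text.encode()).hexdigest() in KNOWN_HARMFUL_SHA256:
        signals.append("hash_match")
    if any(term in text.lower() for term in BLOCKLIST_TERMS):
        signals.append("blocklist_hit")
    if classifier_score(text) >= 0.5:
        signals.append("classifier_high_risk")
    return signals

print(detect_signals("my attack plan is ready"))  # -> ['classifier_high_risk']
```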

When an account or conversation is flagged, it is assessed in context by trained personnel. These human reviewers are trained on our policies and protocols, and operate within established privacy and security safeguards, meaning their access to user information is limited, conducted within secure systems, and subject to confidentiality and data protection requirements. Their role is to assess the flagged activity in context, including the content of the interaction, surrounding conversation, and any relevant patterns of behavior over time. This contextual review is important because automated systems may identify signals of potential concern without fully capturing intent or nuance.

The goal is to determine whether the flagged activity violates our policies and/or indicates that a user may carry out an act of violence, requires escalation for more detailed human review, or can be dismissed or deprioritized as low risk or non-violative. When we determine that a bannable offense has occurred, we aim to immediately revoke access to OpenAI’s services. That may include disabling the account, banning other accounts of the same user, and taking steps to detect and stop the opening of new accounts. We have a zero-tolerance policy for using our tools to assist in committing violence. People can appeal enforcement decisions, and we review those appeals to confirm the outcome.
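
The three-way outcome described above (enforce, escalate for deeper review, or dismiss) can be pictured as a simple decision function. A minimal sketch, assuming the reviewer's assessment is reduced to three boolean inputs; the actual criteria are not public:

```python
from enum import Enum

class Outcome(Enum):
    ENFORCE = "revoke access"            # clear, bannable violation
    ESCALATE = "detailed human review"   # ambiguous or potentially serious
    DISMISS = "low risk / non-violative"

def triage(clear_violation: bool, serious_harm_indicators: bool,
           reviewer_confident: bool) -> Outcome:
    """Map a contextual review to one of the three outcomes described above."""
    if clear_violation and reviewer_confident:
        return Outcome.ENFORCE
    if serious_harm_indicators or not reviewer_confident:
        return Outcome.ESCALATE
    return Outcome.DISMISS

print(triage(clear_violation=False, serious_harm_indicators=True,
             reviewer_confident=True))  # -> Outcome.ESCALATE
```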

We surface real-world support and refer to law enforcement when appropriate.

Most enforcement actions, including bans for violence, happen directly between OpenAI and the user, making clear they have crossed a line. But in some sensitive cases, we may contact others who are best positioned to help. 

Where we assess that a case presents indicators of potentially serious, real-world harm, it is escalated for a more in-depth investigation, including assessing the overall level of risk using structured criteria. This stage is reserved for a limited subset of cases and is intended to ensure higher-risk scenarios are assessed with additional context and expertise. When conversations indicate an imminent and credible risk of harm to others, we notify law enforcement. Mental health and behavioral experts help us assess difficult cases, and our referral criteria are flexible, recognizing that a user may not explicitly discuss the target, means, and timing of planned violence in a ChatGPT conversation even when there may still be a risk of imminent and credible violence. 

Last fall, we introduced Parental Controls to help families guide how ChatGPT works in their homes. Parental controls allow parents to link their account with their teen’s account and customize settings for a safe, age-appropriate experience. Parents don’t have access to their teen’s conversations, and in rare cases where our systems and trained human reviewers detect possible signs of acute distress, parents may be notified—but only with the information needed to support their teen’s safety. Parents are automatically notified by email, SMS, push notification, or all three.

Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support. 

We learn, improve, and course-correct. 

We continue to strengthen our models, detection methods, review processes, and escalation criteria in response to observed usage, emerging risks, and input from internal and external experts. We are especially focused on hard cases: for example, where it is not clear whether a particular input is legitimate or poses a risk of harm; sophisticated attempts to evade safeguards; or when people repeatedly try to misuse our services. We will continue to prioritize safety while balancing privacy and other civil liberties so we can act on serious risks.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco bolsters security, AI support in latest SD-WAN release

“Simply put, we are making it incredibly obvious when our customers are configuring insecure features that introduce new and unnecessary risks into their networks,” wrote Anthony Grieco, senior vice president and chief security and trust officer at Cisco, in a blog post when the initiative was introduced. “Initially, customers will receive increased security warnings

Read More »

The era of chatbot AIOps is fading as agentic AI gains traction

Among the expected business benefits of AI-driven network management are:

- Faster resolution of network problems: 54.1%
- Improve network performance/experience: 51.3%
- Reduced security risk: 48.7%
- Cost optimization: 47.8%
- Proactive problem prevention: 45.9%
- More time available for strategic projects: 41.9%
- Responsiveness to change: 37.8%
- Mitigation of network team’s skills/personnel gaps: 33%

“I

Read More »

Matador Resources names CFO, COO


Read More »

ICYMI: RefComm Expoconference—why it’s the diamond of downstream events

In this ICYMI episode of Oil & Gas Journal’s ReEnterprised podcast, downstream editor Robert Brelsford explains why the technical content he has repeatedly encountered at one refining conference continues to deliver practical value for professionals responsible for refining operations. Drawing on more than 20 years covering the petroleum industry, he has reported on every facet of refining operations, including delayed coking, fluid catalytic cracking, sulfur recovery units, and more. Brelsford describes technical sessions where refinery peer presenters candidly share detailed case studies, including operational challenges, how issues unfolded in real time, and the best practices recommended for operators facing similar conditions—allowing attendees to leave with actionable knowledge directly applicable to daily refinery operations. The episode also addresses the growing challenge of knowledge transfer as decades of hands‑on experience exit the workforce. Brelsford highlights targeted training and presentations designed for refinery personnel at all career stages, particularly newer operators who cannot rely on written documentation alone to replace lost unit expertise. Across training and technical sessions alike, the focus remains on real‑world solutions to real‑world problems, reinforcing safety, troubleshooting capability, and operational excellence long after the event ends.

Read More »

United Arab Emirates to leave OPEC

The United Arab Emirates (UAE), a member of the Organization of the Petroleum Exporting Countries (OPEC) since 1967 and one of its largest producers, said it will exit the organization effective May 1, citing a need for greater flexibility in managing its production strategy. The move comes at a time of heightened geopolitical tension and severe supply disruption tied to the ongoing Iran conflict and the closure of the Strait of Hormuz. UAE Energy Minister Suhail Mohamed al-Mazrouei said the decision followed a careful review of the country’s energy strategy. Stay updated on oil price volatility, shipping disruptions, LNG market analysis, and production output at OGJ’s Iran war content hub. The departure removes a key source of spare capacity from OPEC’s quota system and raises immediate questions about the group’s ability to coordinate supply policy. The UAE has in recent years invested heavily to expand upstream capacity, targeting production levels well above its current OPEC allocation. Tensions between the UAE and OPEC leadership—particularly over baseline production quotas—have persisted for several years, reflecting broader divergence in strategy among core Gulf producers. By exiting, Abu Dhabi gains full autonomy to align output with market conditions and national revenue objectives rather than collective targets set by the group.

Market reaction

Front-month crude futures showed limited immediate reaction following the announcement. At the time of writing, Brent crude had risen above $110/bbl—its highest level in 3 weeks—as stalled US-Iran negotiations showed little progress toward a deal that could restore oil flows through the Strait of Hormuz. The market remains tightly focused on near-term disruptions stemming from restricted flows through Hormuz, which continues to constrain export volumes across the region. As a result, any incremental barrels from the UAE are unlikely to reach global markets in the immediate term. While the immediate market impact may be limited,

Read More »

Petrobras aims for additional ownership in defined portion of Campos basin

Petróleo Brasileiro SA (Petrobras) has agreed to acquire 100% of a defined portion of the Argonauta field associated with the shared Jubarte reservoir in Brazil’s Campos basin from Shell Brasil Petróleo Ltda., ONGC Campos Ltda., and Brava Energia (formerly Enauta Petróleo e Gás Ltda.). The transaction involves assets within the BC‑10 concession linked to Petrobras’ existing unitization agreement for the presalt Jubarte reservoir, which has been in effect since Aug. 1, 2025. The acquired Argonauta portion represents a 0.86% interest in the Jubarte shared reservoir under the unitization agreement. Total consideration will be R$700 million and US$150 million, to be paid in three installments: R$100 million at closing; R$600 million on Jan. 15, 2027, or at closing, whichever occurs later; and US$150 million 2 years after closing. Following completion of the transaction, Petrobras will increase its interest in the Jubarte shared reservoir to 98.11%. The Brazilian federal government, represented by Pré‑Sal Petróleo SA (PPSA), will retain its 1.89% interest related to the extension of the reservoir into non‑contracted areas. Petrobras said the transaction will also simplify shared‑asset management. Upon closing, the negotiation process for equalization will be concluded, along with any remaining discussions related to unitization or production balancing between the Jubarte reservoir and the acquired Argonauta area. According to Petrobras, the acquisition offers attractive economic and financial terms and is aligned with the company’s strategy to strengthen and streamline its operations in the Campos basin. The transaction is subject to customary closing conditions, including approval from Brazil’s National Agency of Petroleum, Natural Gas and Biofuels (ANP) and the Administrative Council for Economic Defense (CADE).

Parque das Baleias

The Jubarte shared reservoir is operated by Petrobras as part of the Parque das Baleias development in the northern Campos basin, in water depths of 1,220–1,400 m. Jubarte is the principal field

Read More »

Maurel & Prom discovers gas at Hechicero 1X on Sinú 9 block, Colombia

Maurel & Prom SA has made a gas discovery on Sinú‑9 block in Colombia, confirming gas across multiple intervals. The operator expects to bring the well into production in the coming days. The Hechicero‑1X well was spudded Feb. 24 and drilled to a total depth of 8,500 ft MD on Mar. 28. Electric logs confirmed gas across several intervals within the Ciénaga de Oro (CDO) formation, the primary target, with 288 ft of net pay, the operator said in a release Apr. 28. Additional gas‑bearing reservoirs were identified in the shallower Porquero formation and the deeper Pre‑CDO–San Cayetano interval, with net pay of 149 ft and 103 ft, respectively. Partner NG Energy International Corp., in a separate release Apr. 28, said results are consistent with the Magico‑1X and Brujo‑1X wells.  Hechicero‑1X was completed to allow selective production from five CDO intervals and the Pre‑CDO–San Cayetano interval. Initial tests conducted Apr. 24 on the Pre‑CDO–San Cayetano interval delivered an instantaneous rate of 26.4 MMcfd at 1,800 psi wellhead pressure through a restricted 43/128‑in. choke. Maurel & Prom plans to bring the well on stream from the Pre‑CDO–San Cayetano interval using existing infrastructure tied into Colombia’s national transportation system. The rig will next move to Magico‑2X, the second well in the six‑well exploration campaign.

Read More »

Golden Pass LNG ships first export cargo

Editor’s Note: Updated Apr. 23 to include information provided by the US Energy Information Administration. Golden Pass LNG, a joint venture between QatarEnergy and ExxonMobil Corp., has loaded and shipped its first LNG export cargo from the plant in Sabine Pass, Tex. The departure comes following first LNG production from Train 1 late last month. Once fully operational, Golden Pass LNG expects to export about 18 million tons/year (tpy) of LNG. Golden Pass LNG is the 10th LNG plant in the US, the US Energy Information Administration (EIA) noted in a separate release Apr. 23. It is the only new US LNG export plant currently expected to begin LNG shipments this year, EIA said. Construction and commissioning continue on Trains 2 and 3, which are expected to come online in turn, following stable operation of Train 1. EIA noted Golden Pass aims to start up Train 2 in second-half 2026 and Train 3 in first-half 2027. QatarEnergy holds 70% interest in Golden Pass LNG, while ExxonMobil holds the remaining 30%.

LNG demand

ExxonMobil forecasts natural gas demand to rise 20% by 2050 and LNG demand to rise by 3% per year through 2050. The operator is developing four LNG projects and, by 2030, expects to double its supply compared to 2020 to more than 40 million tpy.

Read More »

Ecopetrol agrees to acquire equity stake in Brava Energia with plans for increased ownership

State-owned Ecopetrol SA, Bogotá, Colombia, has agreed to acquire a 26% equity stake in Brava Energia SA from a group of shareholders and plans to launch a tender offer to increase its ownership to 51%, which would give it control of the Brazilian oil and gas independent. The move would add exposure to roughly 81,000 boe/d of production and 459 MMboe of reserves, expanding Ecopetrol’s footprint in Brazil. Ecopetrol entered into a share purchase agreement with Jive, Yellowstone, and Bloco Somah Printemps Quantum, which together constitute a group holding about 26% of the outstanding common shares of Brava Energia. Brava Energia, the second-largest independent company listed in the Brazilian market in terms of reserves and production, was incorporated in 2024 from the merger between 3R Petroleum Óleo e Gás SA and Enauta Participações SA. Completion of the deal is subject to certain conditions, including, among others, approval by Brazil’s Administrative Council for Economic Defense (CADE), the grant of certain waivers and consents considering Brava’s financing instruments and relevant commercial agreements, as well as the purchase by Ecopetrol SA, or one of its affiliates or subsidiaries within the Ecopetrol Group, of the number of shares required to achieve a 51% controlling stake of Brava’s voting share capital. Ecopetrol plans to launch a voluntary tender offer on the B3 stock exchange in Brazil to buy additional shares to reach a 51% controlling stake at R$23.00 per share, subject to regulatory requirements and certain conditions.

Ecopetrol in Brazil

In Brazil, Ecopetrol, through subsidiary Ecopetrol Óleo e Gás do Brasil Ltda., holds 30% interest in 11 blocks in the southern area of Santos basin in consortium with Shell Brasil Petróleo Ltda. (operator, 70%). The company also holds a 30% non-operated interest in Gato do Mato (BM-S-54) and Sul de Gato do Mato (production sharing agreement), which

Read More »

The Power Certainty Premium: GPC Infrastructure CEO Jim Summers on Delivering Gas-Powered Compute at AI Scale

Reliability Is the Real Constraint

Summers evaluates every large-scale power decision against four pillars: legal, economic, sustainable, and reliable. In the current market, one dominates. Reliability — defined not merely as uptime, but as certainty of project execution — has become the industry’s most pressing problem. “There’s a lot of noise in the market,” Summers says. “The question is whether a project is real; whether it can be delivered on time, and whether it can maintain multiple nines once it’s operating.” Legal frameworks for behind-the-meter generation are largely settled. Economics matter, particularly across multi-year development cycles. Sustainability factors in, though in many cases it has been deferred behind more immediate concerns. Execution, by contrast, is now existential. Hyperscalers are no longer evaluating power sources alone: they are evaluating delivery credibility.

From Megawatts to Certainty, Speed, and Risk Transfer

Historically, data centers relied on utilities to supply three things together: energy, predictable timelines, and manageable risk. That bundle has broken down. Utilities face long interconnection queues, uncertain delivery dates, and rising infrastructure costs. For developers, that uncertainty has created what industry observers and stakeholders are starting to call a “power certainty premium,” i.e. a willingness to pay more for guaranteed timelines. GPC’s customers, Summers says, are no longer buying megawatts alone. They are buying speed to market, certainty of delivery, and risk transfer. “Even if the timeline isn’t shorter, they want a date certain,” he notes. “Utilities often can’t provide that today.” That evolution is driving demand for on-site, behind-the-meter generation, where developers control timelines and cost structures rather than waiting on grid expansion.

Supply Chain as the New Critical Path

Remove the grid and a new constraint appears: equipment availability. For GPC, the primary gating factor is supply chain; specifically the “prime mover,” the generation equipment itself. Large industrial turbines

Read More »

AI data flows force rethink of data center networking at Backblaze

According to a report that Backblaze released this morning, traffic from content delivery networks and hosting and Internet services providers has stayed largely within historical norms over the past year. But traffic from hyperscalers and neoclouds fluctuated dramatically, with steep climbs in September and October and another uptick in March. Another network traffic change related to AI is geography. “Traditionally, it didn’t matter where cloud infrastructure was located,” says Nowak. But with AI workloads, if storage is close to compute, enterprises get lower latency and higher throughput. Today, Virginia and California have a high concentration of AI compute providers. This, in turn, brings in more storage companies. “In July, we chose to double our footprint in US East to increase the proximity to hyperscalers and neoclouds,” says Nowak. And that, in turn, leads to even more demand for compute, and even greater concentration. “There’s a snowball effect,” Nowak says.

Why neoclouds for AI?

Enterprises might think that they don’t need to worry about network traffic details if they’re using a hyperscaler for their AI workloads because the data and the processing both stay within the cloud. But there are advantages to using a third-party storage provider combined with neoclouds for the GPUs. According to a report released by Synergy Research Group in early April, neocloud revenues hit $9 billion in the fourth quarter of 2025, a 223% year-over-year increase. Revenues passed $25 billion for the whole year and are expected to hit $400 billion by 2031.

Read More »

TD Cowen: AI Adoption Is Already Here. Infrastructure Demand Is What Comes Next.

Enterprise AI adoption is no longer emerging. It is already embedded and beginning to scale in ways that will reshape data center demand. The latest TD Cowen GenAI Adoption Survey makes that clear. Across 689 U.S. enterprises, 92% are now using at least one major AI platform, with Microsoft Copilot, Google Gemini, and ChatGPT forming the core triad of daily enterprise tooling. That’s the baseline. The more important story is what comes next. AI is moving quickly from assistive software to autonomous systems, and that shift carries direct implications for compute demand, power consumption, and infrastructure design.

From Copilots to Autonomous Systems

Today’s enterprise AI footprint is already broad, but it is still largely human-in-the-loop. That is beginning to change. Roughly a third of respondents say they already have semi-autonomous AI agents running in production, while another large cohort is piloting or planning deployments over the next 12 to 18 months. By 2027, more than three-quarters expect to be running AI agents capable of executing multi-step workflows without human intervention. This is not incremental adoption. It is a step-function shift. Autonomous agents don’t just respond to prompts; they execute tasks, interact with enterprise systems, and continuously access data. For data centers, that translates into more persistent, baseline load: exactly the kind of demand profile that stresses power delivery, increases utilization, and accelerates capacity planning timelines. To wit: AI is moving from a bursty workload to a continuous one.

ROI Is No Longer the Question

At the same time, the debate around AI return on investment is effectively over. Three-quarters of respondents report positive ROI, while only a small minority report negative outcomes. A meaningful share is already seeing multiples of return on their investments. The implication seems straightforward: AI budgets are becoming durable. This is no longer experimental spend that

Read More »

BYOP Moves to the Center of Data Center Strategy

Self-Sufficiency Becomes a Feature, Not a Risk

Consider Wyoming’s Project Jade, where county commissioners approved an AI campus tied to 2.7 GW of new natural gas-fired generation being developed by Tallgrass Energy. Reporting from POWER described the project as a “bring your own power” model designed for a high degree of self-sufficiency, with a mix of natural gas generation and Bloom fuel cells. The campus is expected to scale significantly over time. What stands out is not only the size, but the positioning. Self-sufficiency is becoming a selling point both for developers seeking to de-risk timelines, and for local stakeholders wary of overloading existing utility infrastructure.

Fuel Cells and Nuclear: The Middle Ground and the Long Game

Fuel cells occupy an important middle ground in this shift. Bloom Energy’s 2026 report positions fuel cells as a leading onsite option due to shorter lead times, modular deployment, and lower local emissions. Market activity suggests that interest is real. For developers, fuel cells can be easier to permit than large turbine installations and can be deployed incrementally. That makes them effective as bridge-to-grid solutions or as permanent components of hybrid architectures. Advanced nuclear remains the most strategically significant, but least immediate, BYOP pathway. Companies including Switch and other data center operators have explored partnerships with Oklo around its Aurora small modular reactor design. Nuclear holds long-term appeal because it offers firm, low-carbon power at scale. But for current AI buildouts, it remains a future option rather than a near-term construction solution. The immediate reality is that gas and modular onsite systems are closing the time-to-power gap, while nuclear is being positioned as a longer-duration successor as licensing and deployment timelines evolve. The model itself is also evolving. BYOP is beginning to blur the line between developer, energy provider, and compute customer. Reuters

Read More »

Microsoft Builds for Two Worlds: Sovereign Cloud and AI Factories

So far in 2026, across the United States and overseas, Microsoft is building an infrastructure portfolio at full hyperscale. The strategy runs on two tracks. The first is familiar: sovereign cloud expansion involving new regions, local data residency, and compliance-driven enterprise infrastructure. The second is larger and more consequential: purpose-built AI factory campuses designed for dense GPU clusters, liquid cooling, private fiber, and power acquisition at a scale that extends far beyond traditional cloud infrastructure. Despite reports last year that Microsoft was pulling back on data center development, the company is accelerating. It is not only advancing its own large-scale campuses, but also absorbing premium AI capacity originally aligned with OpenAI. In Texas and Norway, projects tied to OpenAI’s infrastructure plans have shifted back into Microsoft’s orbit. Even after contractual changes gave OpenAI greater flexibility to source compute elsewhere, Microsoft remains the market’s most reliable backstop buyer for top-tier AI infrastructure. It no longer needs to control every OpenAI build to maintain its position. In 2026, Microsoft is still the company best positioned to turn uncertain AI demand into deployed capacity, e.g. concrete, steel, power, and silicon at scale.

Building at Industrial Scale

The clearest indicator of Microsoft’s intent is its capital spending. In its January 2026 earnings cycle, Reuters reported that Microsoft’s quarterly capital expenditures reached a record $37.5 billion, up nearly 66% year over year. The company’s cloud backlog rose to $625 billion, with roughly 45% of remaining performance obligations tied to OpenAI. About two-thirds of that quarterly capex was directed toward compute chips. To be clear: this is no speculative buildout. Microsoft is deploying capital against a massive, committed demand pipeline, even as it maintains significant exposure to OpenAI-driven workloads. The company is solving two infrastructure problems at once: supporting broad Azure and Copilot growth, while ensuring

Read More »

AI’s Execution Era: Aligned and Netrality on Power, Speed, and the New Data Center Reality

At Data Center World 2026, the industry didn’t need convincing that something fundamental has shifted. “This feels different,” said Bill Kleyman as he opened a keynote fireside with Phill Lawson-Shanks and Amber Caramella. “In the past 24 months, we’ve seen more evolution… than in the two decades before.” What followed was less a forecast than a field report from the front lines of the AI infrastructure buildout—where demand is immediate, power is decisive, and execution is everything.

A Different Kind of Growth Cycle

For Caramella, the shift starts with scale—and speed. “What feels fundamentally different is just the sheer pace and breadth of the demand combined with a real shift in architecture,” she said. Vacancy rates have collapsed even as capacity expands. AI workloads are not just additive—they are redefining absorption curves across the market. But the deeper change is behavioral. “Over 75% of people are using AI in their day-to-day business… and now the conversation is shifting to agentic AI,” Caramella noted. That shift—from tools to delegated workflows—points to a second wave of infrastructure demand that has not yet fully materialized. Lawson-Shanks framed the transformation in more structural terms. The industry, he said, has always followed a predictable chain: workload → software → hardware → facility → location. That chain has broken. “We had a very predictable industry… prior to Covid. And Covid changed everything,” he said, describing how hyperscale demand compressed deployment cycles overnight. What followed was a surge that utilities—and supply chains—were not prepared to meet.

From Capacity to Constraint: Power Becomes Strategy

If AI has a gating factor, it is no longer compute. It is power. “Before it used to be an operational convenience,” Caramella said. “Now it’s a strategic advantage—or constraint if you don’t have it.” That shift is reshaping executive decision-making. Power is no

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

[Image: John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.]

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »