Evolving Product Operating Models in the Age of AI

In a previous article on organizing for AI (link), we looked at how the interplay between three key dimensions — ownership of outcomes, outsourcing of staff, and the geographical proximity of team members — can yield a variety of organizational archetypes for implementing strategic AI initiatives, each implying a different twist to the product operating model.

Now we take a closer look at how the product operating model, and the core competencies of empowered product teams in particular, can evolve to meet the emerging opportunities and challenges of the age of AI. We start by placing the current orthodoxy in its historical context and presenting a process framework highlighting four key phases in the evolution of team composition in product operating models. We then consider how teams can be reshaped to successfully create AI-powered products and services going forward.

Note: All figures in the following sections have been created by the author of this article.

The Evolution of Product Operating Models

Current Orthodoxy and Historical Context

Product coaches such as Marty Cagan have done much in recent years to popularize the “3-in-a-box” model of empowered product teams. According to the current orthodoxy, these teams should consist of three first-class, core competencies: product management, product design, and engineering. Being first-class means that none of these competencies is subordinate to the others in the org chart, and that the product manager, design lead, and engineering lead are empowered to jointly make strategic product-related decisions. Being core reflects the belief that removing or otherwise compromising on any of these three competencies would lead to worse product outcomes, i.e., products that do not work for customers or for the business.

A central conviction of the current orthodoxy is that the 3-in-a-box model helps address product risks in four key areas: value, viability, usability, and feasibility. Product management is accountable for overall outcomes, and especially concerned with ensuring that the product is valuable to customers (typically implying a higher willingness to pay) and viable for the business, e.g., in terms of how much it costs to build, operate, and maintain the product in the long run. Product design is accountable for the user experience (UX), and primarily interested in maximizing the usability of the product, e.g., through intuitive onboarding, good use of affordances, and a pleasing user interface (UI) that allows for efficient work. Lastly, engineering is accountable for technical delivery, and primarily focused on ensuring the feasibility of the product, e.g., the ability to ship an AI use case within given technical constraints while ensuring sufficient predictive performance, inference speed, and safety.
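
To make the feasibility dimension concrete, here is a minimal sketch of how an engineering team might gate an AI use case on predictive performance and inference speed. The thresholds, the `model` callable, and the function name are illustrative assumptions, not a specific library's API.

```python
# Hypothetical feasibility gate; thresholds are illustrative assumptions.
import time

LATENCY_BUDGET_MS = 200  # assumed per-request inference budget
MIN_ACCURACY = 0.90      # assumed minimum predictive performance

def is_feasible(model, inputs, expected_labels) -> bool:
    """Check an AI use case against simple technical constraints."""
    start = time.perf_counter()
    predictions = [model(x) for x in inputs]
    avg_latency_ms = (time.perf_counter() - start) * 1000 / len(inputs)
    accuracy = sum(
        p == y for p, y in zip(predictions, expected_labels)
    ) / len(expected_labels)
    return avg_latency_ms <= LATENCY_BUDGET_MS and accuracy >= MIN_ACCURACY
```

In practice such a gate would also cover safety checks and operating cost, but even this reduced form shows how feasibility can be made testable rather than left as a matter of opinion.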

Getting to this 3-in-a-box model has not been an easy journey, however, and the model is still not widely adopted outside tech companies. In the early days, product teams – if they could even be called that – mainly consisted of developers who were responsible for both coding and gathering requirements from sales teams or other internal business stakeholders. Such product teams focused on feature delivery rather than user experience or strategic product development; today, such teams are often referred to as “feature teams”. The TV show Halt and Catch Fire vividly depicts tech companies organizing like this in the 1980s and 90s. Shows like The IT Crowd underscore how such disempowered teams can persist in IT departments in modern times.

As software projects grew in complexity in the late 1990s and early 2000s, the need for a dedicated product management competency to align product development with business goals and customer needs became increasingly evident. Companies like Microsoft and IBM began formalizing the product manager role, and other companies soon followed. Then, as the 2000s saw the emergence of various online consumer-facing services (e.g., for search, shopping, and social networking), design/UX became a priority. Companies like Apple and Google started emphasizing design, leading to the formalization of corresponding roles. Designers began working closely with developers to ensure that products were not only functional but also visually appealing and user-friendly. Since the 2010s, the increasing adoption of agile and lean methodologies has further reinforced the need for cross-functional teams that can iterate quickly and respond to user feedback, all of which paved the way for the current 3-in-a-box orthodoxy.

A Process Framework for the Evolution of Product Operating Models

Looking ahead 5-10 years from today’s vantage point in 2025, it is interesting to consider how the emergence of AI as a “table stakes” competency might shake up the current orthodoxy, potentially triggering the next step in the evolution of product operating models. Figure 1 below proposes a four-phase process framework of how existing product models might evolve to incorporate the AI competency over time, drawing on instructive parallels to the situation faced by design/UX only a few years ago. Note that, at the risk of somewhat abusing terminology, but in line with today’s industry norms, the terms “UX” and “design” are used interchangeably in the following to refer to the competency concerned with minimizing usability risk.

Figure 1: An Evolutionary Process Framework

Phase 1 in the above framework is characterized by ignorance and/or skepticism. UX initially faced the struggle of justifying its worth at companies that had previously focused primarily on functional and technical performance, as in the context of non-consumer-facing enterprise software (think ERP systems of the 1990s). AI today faces a similar uphill battle. Not only is AI poorly understood by many stakeholders to begin with, but companies that have been burned by early forays into AI may now be wallowing in the “trough of disillusionment”, leading to skepticism and a wait-and-see approach towards adopting AI. There may also be concerns around the ethics of collecting behavioral data, algorithmic decision-making, and bias, as well as the challenge of getting to grips with the inherently uncertain nature of probabilistic AI output (e.g., consider the implications for software testing).
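
On that last point, a brief illustration of what testing probabilistic output can look like: rather than asserting an exact result, a test can assert an aggregate statistical property within a tolerance. The sketch below is hypothetical; `classify` stands in for any non-deterministic model call.

```python
# Illustrative only: `classify` simulates a non-deterministic AI call
# (e.g., an LLM or ML classifier) that is right roughly 90% of the time.
import random

def classify(text: str) -> str:
    return "positive" if random.random() < 0.9 else "negative"

def test_accuracy_within_tolerance():
    examples = [("great product, would buy again", "positive")] * 200
    correct = sum(classify(text) == label for text, label in examples)
    accuracy = correct / len(examples)
    # Assert an aggregate property with a tolerance, not an exact output.
    # In practice, seed the randomness or widen the tolerance to keep
    # the flakiness rate acceptable.
    assert accuracy >= 0.85, f"accuracy {accuracy:.2f} below threshold"
```

Even this simple shift, from exact assertions to statistical ones, changes test tooling, flakiness budgets, and release criteria, which is one reason skeptical stakeholders hesitate.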

Phase 2 is marked by a growing recognition of the strategic importance of the new competency. For UX, this phase was catalyzed by the rise of consumer-facing online services, where improvements to UX could significantly drive engagement and monetization. As success stories of companies like Apple and Google began to spread, the strategic value of prioritizing UX became harder to overlook. With the confluence of some key trends over the past decade, such as the availability of cheaper computation via hyperscalers (e.g., AWS, GCP, Azure), access to Big Data in a variety of domains, and the development of powerful new machine learning algorithms, our collective awareness of the potential of AI had been growing steadily by the time ChatGPT burst onto the scene and captured everyone’s attention. The rise of design patterns to harness probabilistic outcomes and the related success stories of AI-powered companies (e.g., Netflix, Uber) mean that AI is now increasingly seen as a key differentiator, much like UX before it.

In Phase 3, the roles and responsibilities pertaining to the new competency become formalized. For UX, this meant differentiating between the roles of designers (covering experience, interactions, and the look and feel of user interfaces) and researchers (specializing in qualitative and quantitative methods for gaining a deeper understanding of user preferences and behavioral patterns). To remove any doubts about the value of UX, it was made into a first-class, core competency, sitting next to product management and engineering to form the current triumvirate of the standard product operating model. The past few years have witnessed the increased formalization of AI-related roles, expanding beyond a jack-of-all-trades conception of “data scientists” to more specialized roles like “research scientists”, “ML engineers”, and more recently, “prompt engineers”. Looking ahead, an intriguing open question is how the AI competency will be incorporated into the current 3-in-a-box model. We may see an iterative formalization of embedded, consultative, and hybrid models, as discussed in the next section.

Finally, Phase 4 sees the emergence of norms and best practices for effectively leveraging the new competency. For UX, this is reflected today by the adoption of practices like design thinking and lean UX. It has also become rare to find top-class, customer-centric product teams without a strong, first-class UX competency. Meanwhile, recent years have seen concerted efforts to develop standardized AI practices and policies (e.g., Google’s AI Principles, SAP’s AI Ethics Policy, and the EU AI Act), partly to cope with the dangers that AI already poses, and partly to stave off dangers it may pose in the future (especially as AI becomes more powerful and is put to nefarious uses by bad actors). The extent to which the normalization of AI as a competency might impact the current orthodox framing of the 3-in-a-box product operating model remains to be seen.

Towards AI-Ready Product Operating Models

Leveraging AI Expertise: Embedded, Consultative, and Hybrid Models

Figure 2 below proposes a high-level framework to think about how the AI competency could be incorporated in today’s orthodox, 3-in-a-box product operating model.

Figure 2: Options for AI-Ready Product Operating Models

In the embedded model, AI (personified by data scientists, ML engineers, etc.) may be added either as a new, durable, and first-class competency next to product management, UX/design, and engineering, or as a competency subordinated to these “big three” (e.g., staffing data scientists in an engineering team). By contrast, in the consultative model, the AI competency might reside in some centralized entity, such as an AI Center of Excellence (CoE), and be leveraged by product teams on a case-by-case basis. For instance, AI experts from the CoE may be brought in temporarily to advise a product team on AI-specific issues during product discovery and/or delivery. In the hybrid model, as the name suggests, some AI experts may be embedded as long-term members of the product team while others are brought in at times to provide additional consultative guidance. While Figure 2 only illustrates the case of a single product team, one can imagine these model options scaling to multiple product teams, capturing the interactions between different teams. For example, an “experience team” (responsible for building customer-facing products) might collaborate closely with a “platform team” (maintaining AI services/APIs that experience teams can leverage) to ship an AI product to customers.

Each of the above models for leveraging AI comes with certain pros and cons. The embedded model can enable closer collaboration, more consistency, and faster decision-making. Having AI experts in the core team can lead to more seamless integration and collaboration; their continuous involvement ensures that AI-related inputs, whether conceptual or implementation-focused, can be integrated consistently throughout the product discovery and delivery phases. Direct access to AI expertise can also speed up problem-solving and decision-making. However, embedding AI experts in every product team may be too expensive and difficult to justify, especially for companies or specific teams that cannot articulate a clear and compelling thesis about the expected AI-enabled return on investment. As a scarce resource, AI experts may either be available only to a handful of teams that can make a strong enough business case, or be spread too thinly across several teams, leading to adverse outcomes (e.g., slower turnaround of tasks and employee churn).

With the consultative model, staffing AI experts in a central team can be more cost-effective. Central experts can be allocated more flexibly to projects, allowing higher utilization per expert. It is also possible for one highly specialized expert (e.g., focused on large language models, AI lifecycle management, etc.) to advise multiple product teams at once. However, a purely consultative model can make product teams dependent on colleagues outside the team; these AI consultants may not always be available when needed, and may switch to another company at some point, leaving the product team high and dry. Regularly onboarding new AI consultants to the product team is time- and effort-intensive, and such consultants, especially if they are junior or new to the company, may not feel able to challenge the product team even when doing so might be necessary (e.g., warning about data-related bias, privacy concerns, or suboptimal architectural decisions).

The hybrid model aims to balance the trade-offs between the purely embedded and purely consultative models. This model can be implemented organizationally as a hub-and-spoke structure to foster regular knowledge sharing and alignment between the hub (CoE) and spokes (embedded experts). Giving product teams access to both embedded and consultative AI experts can provide both consistency and flexibility. The embedded AI experts can develop domain-specific know-how that can help with feature engineering and model performance diagnosis, while specialized AI consultants can advise and up-skill the embedded experts on more general, state-of-the-art technologies and best practices. However, the hybrid model is more complex to manage. Tasks must be divided carefully between the embedded and consultative AI experts to avoid redundant work, delays, and conflicts. Overseeing the alignment between embedded and consultative experts can create additional managerial overhead that may need to be borne to varying degrees by the product manager, design lead, and engineering lead.

The Effect of Boundary Conditions and Path Dependence

Besides considering the pros and cons of the model options depicted in Figure 2, product teams should also account for boundary conditions and path dependence in deciding how to incorporate the AI competency.

Boundary conditions refer to the constraints that shape the environment in which a team must operate. Such conditions may relate to aspects such as organizational structure (encompassing reporting lines, informal hierarchies, and decision-making processes within the company and team), resource availability (in terms of budget, personnel, and tools), regulatory and compliance-related requirements (e.g., legal and/or industry-specific regulations), and market dynamics (spanning the competitive landscape, customer expectations, and market trends). Path dependence refers to how historical decisions can influence current and future decisions; it emphasizes the importance of past events in shaping the later trajectory of an organization. Key aspects leading to such dependencies include historical practices (e.g., established routines and processes), past investments (e.g., in infrastructure, technology, and human capital, leading to potentially irrational decision-making by teams and executives due to the sunk cost fallacy), and organizational culture (covering the shared values, beliefs, and behaviors that have developed over time).

Boundary conditions can limit a product team’s options when it comes to configuring the operating model; some desirable choices may be out of reach (e.g., budget constraints preventing the staffing of an embedded AI expert with a certain specialization). Path dependence can create an adverse type of inertia, whereby teams continue to follow established processes and methods even if better alternatives exist. This can make it challenging to adopt new operating models that require significant changes to existing practices. One way to work around path dependence is to enable different product teams to evolve their respective operating models at different speeds according to their team-specific needs; a team building an AI-first product may choose to invest in embedded AI experts sooner than another team that is exploring potential AI use cases for the first time.

Finally, it is worth remembering that the choice of a product operating model can have far-reaching consequences for the design of the product itself. Conway’s Law states that “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” In our context, this means that the way product teams are organized, communicate, and incorporate the AI competency can directly impact the architecture of the products and services that they go on to create. For instance, consultative models may be more likely to result in the use of generic AI APIs (which the consultants can reuse across teams), while embedded AI experts may be better positioned to implement product-specific optimizations aided by domain know-how (albeit at the risk of tighter coupling to other components of the product architecture). Companies and teams should therefore be empowered to configure their AI-ready product operating models, giving due consideration to the broader, long-term implications.
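
To illustrate that architectural tendency in miniature, consider the sketch below; all names are hypothetical and both “services” are stubbed for illustration rather than being real APIs.

```python
# Hypothetical sketch of the two tendencies Conway's Law predicts here.

def generic_llm_api(prompt: str) -> str:
    """Stub for a shared, team-agnostic AI endpoint run by a central CoE."""
    return f"[generic model output for: {prompt[:40]}]"

def summarize_via_generic_api(text: str) -> str:
    # Consultative tendency: reuse one generic endpoint across many teams.
    return generic_llm_api(f"Summarize: {text}")

def summarize_invoice_with_domain_logic(invoice: dict) -> str:
    # Embedded tendency: product-specific logic encoding domain know-how
    # (here, invoice structure), at the cost of tighter coupling.
    return f"Invoice {invoice['id']}: {invoice['total']} due {invoice['due']}"
```

Neither shape is inherently better; the point is that the chosen staffing model quietly nudges the product architecture toward one or the other.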

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Cisco strengthens integrated IT/OT network and security controls

Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feeds into the dashboard along

Read More »

MOL’s Tiszaújváros steam cracker processes first circular feedstock

MOL Group has completed its first certified production trial using circular feedstock at subsidiary MOL Petrochemicals Co. Ltd. complex in Tiszaújváros, Hungary, advancing the company’s strategic push toward circular economy integration in petrochemical production. Confirmed completed as of Sept. 15, the pilot marked MOL Group’s first use of post-consumer plastic

Read More »

Network jobs watch: Hiring, skills and certification trends

Desire for higher compensation Improve career prospects Want more interesting work “A robust and engaged tech workforce is essential to keeping enterprises operating at the highest level,” said Julia Kanouse, Chief Membership Officer at ISACA, in a statement. “In better understanding IT professionals’ motivations and pain points, including how these

Read More »

WTI Falls on Stockpile, Fed Moves

Oil eased after a three-session advance as traders assessed fresh US stockpile data and a Federal Reserve interest-rate cut. West Texas Intermediate fell 0.7% to settle above $64 a barrel after the Federal Reserve lowered its benchmark interest rate by a quarter percentage point and penciled in two more reductions this year. Although lower rates typically boost energy demand, investors focused on policymakers’ warnings of mounting labor market weakness. Traders had also mostly priced in a 25 basis-point cut ahead of the decision, leading some to unwind hedges against a bigger-than-expected reduction. The dollar strengthened, making commodities priced in the currency less attractive. “There is a somewhat counterintuitive reaction to the Fed’s cut, but the dovish pivot cements their shift to protect the labor side of their mandate,” said Frank Monkam, head of macro trading at Buffalo Bayou Commodities. The shift suggests “an admission that growth risks to the economy are becoming more apparent and concerning.” The Fed move compounded an earlier slide as traders discounted the most recent US stockpile data, which showed crude inventories fell 9.29 million barrels amid a sizable increase in exports. However, the adjustment factor ballooned and distillate inventories rose to the highest since January, adding a bearish tilt to the report. “Traders like to see domestic demand pulling the inventories,” as opposed to exports, said Dennis Kissler, senior vice president for trading at BOK Financial Securities. The distillate buildup also stunted a rally following Ukraine’s attack on the Saratov refinery in its latest strike on Russian energy facilities — which have helped cut the OPEC+ member’s production to its lowest post-pandemic level, according to Goldman Sachs Group Inc. Still, the strikes haven’t been enough to push oil out of the $5 band it has been in for most of the past month-and-a-half, buffeted between

Read More »

XRG Walks Away From $19B Santos Takeover

Abu Dhabi National Oil Co. dropped its planned $19 billion takeover of Australian natural gas producer Santos Ltd., walking away from an ambitious effort to expand overseas after failing to agree on key terms. A “combination of factors” discouraged the company’s XRG unit from making a final bid, it said Wednesday. The decision was strictly commercial and reflected disagreement over issues including valuation and tax, people familiar with the matter said, asking not to be identified discussing private information. It’s a notable retreat for XRG, the Adnoc spinoff launched to great fanfare last year and tasked with deploying Abu Dhabi’s billions into international dealmaking. The firm has been looking to build a global portfolio, particularly in chemicals and liquefied natural gas, and nixing the Santos transaction may slow an M&A drive aimed at diversifying the Middle Eastern emirate away from crude. The company made its indicative offer in June with a consortium that included Abu Dhabi Development Holding Co. and Carlyle Group Inc. The board of Santos, Australia’s second-largest fossil-fuel producer, recommended the $5.76-a-share proposal, which represented a 28% premium to the stock price at the time. But although the shares surged that day, they have remained well below the offer price, potentially indicating investors were skeptical the consortium could land the deal. Santos extended an exclusivity period for a second time last month, saying the group had sought more time to complete due diligence and obtain approvals. “The market will ask questions about Santos’ valuation after this,” Saul Kavonic, an energy analyst at MST Marquee, said by email. Investors may be wary about “any skeletons that may be lurking there, all the more so because XRG was a less price-sensitive buyer than most, yet still couldn’t make it work.” Santos’ American depositary receipts slumped as much as 9.5% to $4.69 on Wednesday. Covestro Hurdles Following agreements for

Read More »

Slovakia and Hungary Resist Trump Bid to Halt Russian Energy

Slovakia and Hungary signaled they would resist pressure from US President Donald Trump to cut Russian oil and gas imports until the European Union member states find sufficient alternative supplies.  “Before we can fully commit, we need to have the right conditions in place — otherwise we risk seriously damaging our industry and economy,” Slovak Economy Minister Denisa Sakova told reporters in Bratislava on Wednesday.  The minister said sufficient infrastructure must first be in place to support alternative routes. The comments amount to a pushback against fresh pressure from Trump for all EU states to end Russian energy imports, a move that would hit Slovakia and Hungary.  Hungarian Cabinet Minister Gergely Gulyas reiterated that his country would rebuff EU initiatives that threatened the security of its energy supplies. Sakova said she made clear Slovakia’s position during talks with US Energy Secretary Chris Wright in Vienna this week. She said the Trump official expressed understanding, while acknowledging that the US must boost energy projects in Europe.  Trump said over the weekend that he’s prepared to move ahead with “major” sanctions on Russian oil if European nations do the same. The government in Bratislava is prepared to shut its Russian energy links if it has sufficient infrastructure to transport volumes, Sakova said.  “As long as we have an alternative route, and the transmission capacity is sufficient, Slovakia has no problem diversifying,” the minister said. A complete cutoff of Russian supplies would pose a risk, she said, because Slovakia is located at the very end of alternative supply routes coming from the West.  Slovakia and Hungary, landlocked nations bordering Ukraine, have historically depended on Russian oil and gas. After Russia’s full-scale invasion of Ukraine in 2022, both launched several diversification initiatives. Slovakia imports around third of its oil from non-Russian sources via the Adria pipeline

Read More »

Slovakia Resists Pressure to Quickly Halt Russian Energy

Slovakia and Hungary signaled they would resist pressure from US President Donald Trump to cut Russian oil and gas imports until the European Union member states find sufficient alternative supplies.  “Before we can fully commit, we need to have the right conditions in place — otherwise we risk seriously damaging our industry and economy,” Slovak Economy Minister Denisa Sakova told reporters in Bratislava on Wednesday.  The minister said sufficient infrastructure must first be in place to support alternative routes. The comments amount to a pushback against fresh pressure from Trump for all EU states to end Russian energy imports, a move that would hit Slovakia and Hungary.  Hungarian Cabinet Minister Gergely Gulyas reiterated that his country would rebuff EU initiatives that threatened the security of its energy supplies. Sakova said she made clear Slovakia’s position during talks with US Energy Secretary Chris Wright in Vienna this week. She said the Trump official expressed understanding, while acknowledging that the US must boost energy projects in Europe.  Trump said over the weekend that he’s prepared to move ahead with “major” sanctions on Russian oil if European nations do the same. The government in Bratislava is prepared to shut its Russian energy links if it has sufficient infrastructure to transport volumes, Sakova said.  “As long as we have an alternative route, and the transmission capacity is sufficient, Slovakia has no problem diversifying,” the minister said. A complete cutoff of Russian supplies would pose a risk, she said, because Slovakia is located at the very end of alternative supply routes coming from the West.  Slovakia and Hungary, landlocked nations bordering Ukraine, have historically depended on Russian oil and gas. After Russia’s full-scale invasion of Ukraine in 2022, both launched several diversification initiatives. Slovakia imports around third of its oil from non-Russian sources via the Adria pipeline

Read More »

Energy-related US CO2 emissions down 20% since 2005: EIA

Listen to the article 2 min This audio is auto-generated. Please let us know if you have feedback. Per capita carbon dioxide emissions from energy consumption fell in every state from 2005 to 2023, primarily due to less coal being burned, the U.S. Energy Information Administration said in a Monday report.  In total, CO2 emissions fell by 20% in those years. The U.S. population increased by 14% during that period, so per capita, emissions fell by 30%, according to EIA. “Increased electricity generation from natural gas, which releases about half as many CO2 emissions per unit of energy when combusted as coal, and from non-CO2-emitting wind and solar generation offset the decrease in coal generation,” EIA said. Emissions decreased in every state, falling the most in Maryland and the District of Columbia, which saw per capita drops of 49% and 48%, respectively. Emissions fell the least in Idaho, where they dropped by 3%, and Mississippi, where they dropped by 1%. Optional Caption Courtesy of Energy Information Administration “In 2023, Maryland had the lowest per capita CO2 emissions of any state, at 7.8 metric tons of CO2 (mtCO2), which is the second lowest in recorded data beginning in 1960,” EIA said. “The District of Columbia has lower per capita CO2 emissions than any state and tied its record low of 3.6 mtCO2 in 2023.” EIA forecasts a 1% increase in total U.S. emissions from energy consumption this year, “in part because of more recent increased fossil fuel consumption for crude oil production and electricity generation growth.” In 2023, the transportation sector was responsible for the largest share of emissions from energy consumption across 28 states, EIA said. In 2005, the electric power sector had “accounted for the largest share of emissions in 31 states, while the transportation sector made up the

Read More »

Chord Announces ‘Strategic Acquisition of Williston Basin Assets’

Chord Energy Corporation announced a “strategic acquisition of Williston Basin assets” in a statement posted on its website recently. In the statement, Chord said a wholly owned subsidiary of the company has entered into a definitive agreement to acquire assets in the Williston Basin from XTO Energy Inc. and affiliates for a total cash consideration of $550 million, subject to customary purchase price adjustments. The consideration is expected to be funded through a combination of cash on hand and borrowings, Chord noted in the statement, which highlighted that the effective date for the transaction is September 1, 2025, and that the deal is expected to close by year-end. Chord outlined in the statement that the deal includes 48,000 net acres in the Williston core, noting that “90 net 10,000 foot equivalent locations (72 net operated) extend Chord’s inventory life”. Pointing out “inventory quality” in the statement, Chord highlighted that “low average NYMEX WTI breakeven economics ($40s) compete at the front-end of Chord’s program and lower the weighted-average breakeven of Chord’s portfolio”. The company outlined that the deal is “expected to be accretive to all key metrics including cash flow, free cash flow and NAV in both near and long-term”. “We are excited to announce the acquisition of these high-quality assets,” Danny Brown, Chord Energy’s President and Chief Executive Officer, said in the statement. “The acquired assets are in one of the best areas of the Williston Basin and have significant overlap with Chord’s existing footprint, setting the stage for long-lateral development. The assets have a low average NYMEX WTI breakeven and are immediately competitive for capital,” he added. “We expect that the transaction will create significant accretion for shareholders across all key metrics, while maintaining pro forma leverage below the peer group and supporting sustainable FCF generation and return of capital,” he continued.

Read More »

Power shortages are the only thing slowing the data center market

Another major shortage – which should not be news to anyone – is power. Lynch said that it is the primary reason many data centers are moving out of the heavily congested areas, like Northern Virginia and Santa Clara, and into secondary markets. Power is more available in smaller markets than larger ones. “If our client needs multi-megawatt capacity in Silicon Valley, we’re being told by the utility providers that that capacity will not be available for up to 10 years from now,” so out of necessity, many have moved to secondary markets, such as Hillsborough, Oregon, Reno, Nevada, and Columbus, Ohio. The growth of hyperscalers as well as AI is driving up the power requirements of facilities further into the multi-megawatt range. The power industry moves at a very different pace than the IT world, much slower and more deliberate. Lynch said the lead time for equipment makes it difficult to predict when some large scale, ambitious data centers can be completed. A multi-megawatt facility may even require new transmission lines to be built out as well. This translates into longer build times for new data centers. CBRE found that the average data center now takes about three years to complete, up from 2 years just a short time ago. Intel, AMD, and Nvidia haven’t even laid out a road map for three years, but with new architectures coming every year, a data center risks being obsolete by the time it’s completed. However, what’s the alternative? To wait? Customers will never catch up at that rate, Lynch said.   That is simply not a viable option, so development and construction must go on even with short supplies of everything from concrete and steel to servers and power transformers.

Read More »

Arista continues to defy expectations, build enterprise momentum

During her keynote, Ullal noted Arista is not only selling high-speed switches for AI data centers but also leveraging its own technology to create a new category of “AI centers” that simplify network management and operations, with a goal of 60% to 80% growth in the AI market. Arista has its sights set on enterprise expansion Arista hired Todd Nightingale as its new president a couple of month ago, and the reason should be obvious to industry watchers: to grow the enterprise business. Nightingale recently served as CEO of Fastly, but he is best known for his tenure as Cisco. He joined when Cisco acquired Meraki, where he was the CEO. Ullal indicated the campus and WAN business would grow from the current $750 million to $800 million run rate to $1.25 billion, which is a whopping 60% growth. Some of this will come from VeloCloud being added to Arista’s numbers, but not all of it. Arista’s opportunity in campus and WAN is in bringing its high performance, resilient networking to this audience. In a survey I conducted last year, 93% of respondents stated the network is more important to business operations than it was two years ago. During his presentation, Nightingale talked about this shift when he said: “There is no longer such a thing as a network that is not mission critical. We think of mission critical networks for military sites and tier one hospitals, but every hotel and retailer who has their Wi-Fi go down and can’t transact business will say the network is critical.” Also, with AI, inferencing traffic is expected to put a steady load on the network, and any kind of performance hiccup will have negative business ramifications. Historically, Arista’s value proposition for companies outside the Fortune 2000 was a bit of a solution

Read More »

Arista touts liquid cooling, optical tech to reduce power consumption for AI networking

Both technologies will likely find a role in future AI and optical networks, experts say, as both promise to reduce power consumption and support improved bandwidth density. Both have advantages and disadvantages as well – CPOs are more complex to deploy given the amount of technology included in a CPO package, whereas LPOs promise more simplicity.  Bechtolsheim said that LPO can provide an additional 20% power savings over other optical forms. Early tests show good receiver performance even under degraded conditions, though transmit paths remain sensitive to reflections and crosstalk at the connector level, Bechtolsheim added. At the recent Hot Interconnects conference, he said: “The path to energy-efficient optics is constrained by high-volume manufacturing,” stressing that advanced optics packaging remains difficult and risky without proven production scale.  “We are nonreligious about CPO, LPO, whatever it is. But we are religious about one thing, which is the ability to ship very high volumes in a very predictable fashion,” Bechtolsheim said at the investor event. “So, to put this in quantity numbers here, the industry expects to ship something like 50 million OSFP modules next calendar year. The current shipment rate of CPO is zero, okay? So going from zero to 50 million is just not possible. The supply chain doesn’t exist. So, even if the technology works and can be demonstrated in a lab, to get to the volume required to meet the needs of the industry is just an incredible effort.” “We’re all in on liquid cooling to reduce power, eliminating fan power, supporting the linear pluggable optics to reduce power and cost, increasing rack density, which reduces data center footprint and related costs, and most importantly, optimizing these fabrics for the AI data center use case,” Bechtolsheim added. “So what we call the ‘purpose-built AI data center fabric’ around Ethernet

Read More »

Network and cloud implications of agentic AI

The chain analogy is critical here. Realistic uses of AI agents will require core database access; what can possibly make an AI business case that isn’t tied to a company’s critical data? The four critical elements of these applications—the agent, the MCP server, the tools, and the data— are all dragged along with each other, and traffic on the network is the linkage in the chain. How much traffic is generated? Here, enterprises had another surprise. Enterprises told me that their initial view of their AI hosting was an “AI cluster” with a casual data link to their main data center network. With AI agents, they now see smaller AI servers actually installed within their primary data centers, and all the traffic AI creates, within the model and to and from it, now flows on the data center network. Vendors who told enterprises that AI networking would have a profound impact are proving correct. You can run a query or perform a task with an agent and have that task parse an entire database of thousands or millions of records. Someone not aware of what an agent application implies in terms of data usage can easily create as much traffic as a whole week’s normal access-and-update would create. Enough, they say, to impact network capacity and the QoE of other applications. And, enterprises remind us, if that traffic crosses in/out of the cloud, the cloud costs could skyrocket. About a third of the enterprises said that issues with AI agents generated enough traffic to create local congestion on the network or a blip in cloud costs large enough to trigger a financial review. MCP tool use by agents is also a major security and governance headache. Enterprises point out that MCP standards haven’t always required strong authentication, and they also

Read More »

There are 121 AI processor companies. How many will succeed?

The US currently leads in AI hardware and software, but China’s DeepSeek and Huawei continue to push advanced chips, India has announced an indigenous GPU program targeting production by 2029, and policy shifts in Washington are reshaping the playing field. In Q2, the rollback of export restrictions allowed US companies like Nvidia and AMD to strike multibillion-dollar deals in Saudi Arabia.  JPR categorizes vendors into five segments: IoT (ultra-low-power inference in microcontrollers or small SoCs); Edge (on-device or near-device inference in 1–100W range, used outside data centers); Automotive (distinct enough to break out from Edge); data center training; and data center inference. There is some overlap between segments as many vendors play in multiple segments. Of the five categories, inference has the most startups with 90. Peddie says the inference application list is “humongous,” with everything from wearable health monitors to smart vehicle sensor arrays, to personal items in the home, and every imaginable machine in every imaginable manufacturing and production line, plus robotic box movers and surgeons.  Inference also offers the most versatility. “Smart devices” in the past, like washing machines or coffee makers, could do basically one thing and couldn’t adapt to any changes. “Inference-based systems will be able to duck and weave, adjust in real time, and find alternative solutions, quickly,” said Peddie. Peddie said despite his apparent cynicism, this is an exciting time. “There are really novel ideas being tried like analog neuron processors, and in-memory processors,” he said.

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for data center candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist (and, coming soon, a free Data Center Intern listing). Data Center Critical Facility Manager – Impact, TX. This position is also available in: Cheyenne, WY; Ashburn, VA; or Manassas, VA. This opportunity is working directly with a leading mission-critical data center developer / wholesaler / colo provider. This firm provides data center solutions custom-fit to the requirements of their clients’ mission-critical operational facilities. They provide reliability of mission-critical facilities for many of the world’s largest organizations (enterprise and hyperscale customers). This career-growth-minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits. Electrical Commissioning Engineer – New Albany, OH. This traveling position is also available in: Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; Cedar Rapids, IA; Phoenix, AZ; Dallas, TX; or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company with a national footprint that specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space, with a mindset of reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies. This career-growth-minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits. Data Center Engineering Design Manager – Ashburn, VA. This opportunity is working directly with a leading mission-critical data center developer /

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work its customers need, a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) [Image: John Deere’s autonomous 9RX tractor; farmers can oversee it using an app.] While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do …

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that builds agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another capability model providers are researching is using the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to …
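The LLM-as-judge idea is easy to show in outline. Below is a minimal Python sketch; call_model is a hypothetical stand-in for any chat-completion client, and the model names and the 1-10 scale are assumptions.

from statistics import mean

JUDGE_MODELS = ["judge-a", "judge-b", "judge-c"]  # hypothetical cheap judge models

def call_model(model: str, prompt: str) -> str:
    # Hypothetical hook: wire this up to your provider's API before use.
    raise NotImplementedError

def judge_answer(question: str, answer: str) -> float:
    """Ask several judge models to score one answer, then average the scores."""
    prompt = (f"Question: {question}\nAnswer: {answer}\n"
              "Rate the answer's correctness from 1 to 10. Reply with the number only.")
    scores = []
    for model in JUDGE_MODELS:
        try:
            scores.append(float(call_model(model, prompt).strip()))
        except ValueError:
            continue  # skip judges that return a non-numeric reply
    return mean(scores) if scores else 0.0

As models get cheaper, running three judges instead of one costs little and smooths out any single model's scoring quirks.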

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that in-house testing techniques may miss and that might otherwise have made it into a released model. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had already released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle …
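OpenAI's exact framework is described in the paper itself; as a conceptual illustration only, the Python sketch below shows the general shape of an automated red-teaming loop in which an attacker is rewarded for attacks that both succeed and differ from what it has already found. All function names here are hypothetical.

import random

def reward(succeeded: bool, attack: str, history: list[str]) -> float:
    """Score an attack: 1.0 for success plus a small bonus for novelty."""
    novelty_bonus = 0.5 if attack not in history else 0.0
    return (1.0 if succeeded else 0.0) + novelty_bonus

def red_team_loop(seed_prompts, target_respond, is_unsafe, steps=100):
    """Generate attack variants, test them against the target, and log successes."""
    found, history = [], []
    for _ in range(steps):
        # Placeholder mutation; a real framework would sample from an attacker
        # model whose policy is updated with the reward signal computed below.
        attack = random.choice(seed_prompts) + " (paraphrased variant)"
        succeeded = is_unsafe(target_respond(attack))
        r = reward(succeeded, attack, history)
        history.append(attack)
        if succeeded:
            found.append((attack, r))
    return found

The diversity bonus is the key design choice: without it, a reward-maximizing attacker collapses onto one known jailbreak instead of exploring the broad spectrum of attacks the paper targets.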

Read More »