
Evolving Product Operating Models in the Age of AI


In a previous article on organizing for AI (link), we looked at how the interplay between three key dimensions — ownership of outcomes, outsourcing of staff, and the geographical proximity of team members — can yield a variety of organizational archetypes for implementing strategic AI initiatives, each implying a different twist to the product operating model.

Now we take a closer look at how the product operating model, and the core competencies of empowered product teams in particular, can evolve to face the emerging opportunities and challenges in the age of AI. We start by placing the current orthodoxy in its historical context and present a process model highlighting four key phases in the evolution of team composition in product operating models. We then consider how teams can be reshaped to successfully create AI-powered products and services going forward.

Note: All figures in the following sections have been created by the author of this article.

The Evolution of Product Operating Models

Current Orthodoxy and Historical Context

Product coaches such as Marty Cagan have done much in recent years to popularize the “3-in-a-box” model of empowered product teams. In general, according to the current orthodoxy, these teams should consist of three first-class, core competencies: product management, product design, and engineering. Being first-class means that none of these competencies are subordinate to each other in the org chart, and the product manager, design lead, and engineering lead are empowered to jointly make strategic product-related decisions. Being core reflects the belief that removing or otherwise compromising on any of these three competencies would lead to worse product outcomes, i.e., products that do not work for customers or for the business.

A central conviction of the current orthodoxy is that the 3-in-a-box model helps address product risks in four key areas: value, viability, usability, and feasibility. Product management is accountable for overall outcomes, and especially concerned with ensuring that the product is valuable to customers (typically implying a higher willingness to pay) and viable for the business, e.g., in terms of how much it costs to build, operate, and maintain the product in the long run. Product design is accountable for user experience (UX), and primarily interested in maximizing usability of the product, e.g., through intuitive onboarding, good use of affordances, and a pleasing user interface (UI) that allows for efficient work. Lastly, engineering is accountable for technical delivery, and primarily focused on ensuring feasibility of the product, e.g., characterized by the ability to ship an AI use case within certain technical constraints, ensuring sufficient predictive performance, inference speed, and safety.

Getting to this 3-in-a-box model has not been an easy journey, however, and the model is still not widely adopted outside tech companies. In the early days, product teams – if they could even be called that – mainly consisted of developers who tended to be responsible for both coding and gathering requirements from sales teams or other internal business stakeholders. Such product teams would focus on feature delivery rather than user experience or strategic product development; such teams are therefore often referred to today as “feature teams”. The TV show Halt and Catch Fire vividly depicts tech companies organizing like this in the 1980s and 1990s. Shows like The IT Crowd underscore how such disempowered teams can persist in IT departments in modern times.

As software projects grew in complexity in the late 1990s and early 2000s, the need for a dedicated product management competency to align product development with business goals and customer needs became increasingly evident. Companies like Microsoft and IBM began formalizing the role of the product manager, and other companies soon followed. Then, as the 2000s saw the emergence of various online consumer-facing services (e.g., for search, shopping, and social networking), design/UX became a priority. Companies like Apple and Google started emphasizing design, leading to the formalization of corresponding roles. Designers began working closely with developers to ensure that products were not only functional but also visually appealing and user-friendly. Since the 2010s, the increased adoption of agile and lean methodologies has further reinforced the need for cross-functional teams that can iterate quickly and respond to user feedback, all of which paved the way for the current 3-in-a-box orthodoxy.

A Process Framework for the Evolution of Product Operating Models

Looking ahead 5-10 years from today’s vantage point in 2025, it is interesting to consider how the emergence of AI as a “table stakes” competency might shake up the current orthodoxy, potentially triggering the next step in the evolution of product operating models. Figure 1 below proposes a four-phase process framework of how existing product models might evolve to incorporate the AI competency over time, drawing on instructive parallels to the situation faced by design/UX only a few years ago. Note that, at the risk of somewhat abusing terminology, but in line with today’s industry norms, the terms “UX” and “design” are used interchangeably in the following to refer to the competency concerned with minimizing usability risk.

Figure 1: An Evolutionary Process Framework

Phase 1 in the above framework is characterized by ignorance and/or skepticism. UX initially faced the struggle of justifying its worth at companies that had previously focused primarily on functional and technical performance, as in the context of non-consumer-facing enterprise software (think ERP systems of the 1990s). AI today faces a similar uphill battle. Not only is AI poorly understood by many stakeholders to begin with, but companies that have been burned by early forays into AI may now be wallowing in the “trough of disillusionment”, leading to skepticism and a wait-and-see approach towards adopting AI. There may also be concerns around the ethics of collecting behavioral data, algorithmic decision-making, bias, and getting to grips with the inherently uncertain nature of probabilistic AI output (e.g., consider the implications for software testing).
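To make the software-testing implication concrete, consider how a test suite might handle nondeterministic model output. The sketch below is purely illustrative (the `classify` function is a hypothetical stand-in for a real model): rather than asserting an exact value, the test asserts statistical properties over repeated runs.

```python
import random

def classify(text: str, seed: int) -> float:
    """Hypothetical stand-in for a probabilistic classifier: returns a
    confidence score that varies slightly from call to call."""
    rng = random.Random(seed)
    base = 0.9 if "refund" in text else 0.2
    return max(0.0, min(1.0, base + rng.uniform(-0.05, 0.05)))

def test_probabilistic_classifier() -> None:
    # Exact-match assertions are brittle for probabilistic output;
    # assert distributional properties over repeated runs instead.
    scores = [classify("please process my refund", seed=i) for i in range(100)]
    mean_score = sum(scores) / len(scores)
    assert mean_score > 0.8                       # correct on average
    assert all(0.0 <= s <= 1.0 for s in scores)   # always in the valid range
```

This style of test tolerates the inherent variance of the model while still catching regressions in average quality or out-of-range outputs.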

Phase 2 is marked by a growing recognition of the strategic importance of the new competency. For UX, this phase was catalyzed by the rise of consumer-facing online services, where improvements to UX could significantly drive engagement and monetization. As success stories of companies like Apple and Google began to spread, the strategic value of prioritizing UX became harder to overlook. With the confluence of some key trends over the past decade, such as the availability of cheaper computation via hyper-scalers (e.g., AWS, GCP, Azure), access to Big Data in a variety of domains, and the development of powerful new machine learning algorithms, our collective awareness of the potential of AI had been growing steadily by the time ChatGPT burst onto the scene and captured everyone’s attention. The rise of design patterns to harness probabilistic outcomes and the related success stories of AI-powered companies (e.g., Netflix, Uber) mean that AI is now increasingly seen as a key differentiator, much like UX before.

In Phase 3, the roles and responsibilities pertaining to the new competency become formalized. For UX, this meant differentiating between the roles of designers (covering experience, interactions, and the look and feel of user interfaces) and researchers (specializing in qualitative and quantitative methods for gaining a deeper understanding of user preferences and behavioral patterns). To remove any doubts about the value of UX, it was made into a first-class core competency, sitting next to product management and engineering to form the current triumvirate of the standard product operating model. The past few years have witnessed the increased formalization of AI-related roles, expanding beyond a jack-of-all-trades conception of “data scientists” to more specialized roles like “research scientists”, “ML engineers”, and more recently, “prompt engineers”. Looking ahead, an intriguing open question is how the AI competency will be incorporated into the current 3-in-a-box model. We may see an iterative formalization of embedded, consultative, and hybrid models, as discussed in the next section.

Finally, Phase 4 sees the emergence of norms and best practices for effectively leveraging the new competency. For UX, this is reflected today by the adoption of practices like design thinking and lean UX. It has also become rare to find top-class, customer-centric product teams without a strong, first-class UX competency. Meanwhile, recent years have seen concerted efforts to develop standardized AI practices and policies (e.g., Google’s AI Principles, SAP’s AI Ethics Policy, and the EU AI Act), partly to cope with the dangers that AI already poses, and partly to stave off dangers it may pose in the future (especially as AI becomes more powerful and is put to nefarious uses by bad actors). The extent to which the normalization of AI as a competency might impact the current orthodox framing of the 3-in-a-box product operating model remains to be seen.

Towards AI-Ready Product Operating Models

Leveraging AI Expertise: Embedded, Consultative, and Hybrid Models

Figure 2 below proposes a high-level framework to think about how the AI competency could be incorporated in today’s orthodox, 3-in-a-box product operating model.

Figure 2: Options for AI-Ready Product Operating Models

In the embedded model, AI (personified by data scientists, ML engineers, etc.) may be added either as a new, durable, and first-class competency next to product management, UX/design, and engineering, or as a subordinated competency to these “big three” (e.g., staffing data scientists in an engineering team). By contrast, in the consultative model, the AI competency might reside in some centralized entity, such as an AI Center of Excellence (CoE), and be leveraged by product teams on a case-by-case basis. For instance, AI experts from the CoE may be brought in temporarily to advise a product team on AI-specific issues during product discovery and/or delivery. In the hybrid model, as the name suggests, some AI experts may be embedded as long-term members of the product team and others may be brought in at times to provide additional consultative guidance. While Figure 2 only illustrates the case of a single product team, one can imagine these model options scaling to multiple product teams, capturing the interaction between different teams. For example, an “experience team” (responsible for building customer-facing products) might collaborate closely with a “platform team” (maintaining AI services/APIs that experience teams can leverage) to ship an AI product to customers.

Each of the above models for leveraging AI comes with certain pros and cons. The embedded model can enable closer collaboration, more consistency, and faster decision-making. Having AI experts in the core team can lead to more seamless integration and collaboration; their continuous involvement ensures that AI-related inputs, whether conceptual or implementation-focused, can be integrated consistently throughout the product discovery and delivery phases. Direct access to AI expertise can speed up problem-solving and decision-making. However, embedding AI experts in every product team may be too expensive and difficult to justify, especially for companies or specific teams that cannot articulate a clear and compelling thesis about the expected AI-enabled return on investment. As a scarce resource, AI experts may either only be available to a handful of teams that can make a strong enough business case, or be spread too thinly across several teams, leading to adverse outcomes (e.g., slower turnaround of tasks and employee churn).

With the consultative model, staffing AI experts in a central team can be more cost-effective. Central experts can be allocated more flexibly to projects, allowing higher utilization per expert. It is also possible for one highly specialized expert (e.g., focused on large language models, AI lifecycle management, etc.) to advise multiple product teams at once. However, a purely consultative model can make product teams dependent on colleagues outside the team; these AI consultants may not always be available when needed, and may switch to another company at some point, leaving the product team high and dry. Regularly onboarding new AI consultants to the product team is time- and effort-intensive, and such consultants, especially if they are junior or new to the company, may not feel able to challenge the product team even when doing so might be necessary (e.g., warning about data-related bias, privacy concerns, or suboptimal architectural decisions).

The hybrid model aims to balance the trade-offs between the purely embedded and purely consultative models. This model can be implemented organizationally as a hub-and-spoke structure to foster regular knowledge sharing and alignment between the hub (CoE) and spokes (embedded experts). Giving product teams access to both embedded and consultative AI experts can provide both consistency and flexibility. The embedded AI experts can develop domain-specific know-how that can help with feature engineering and model performance diagnosis, while specialized AI consultants can advise and up-skill the embedded experts on more general, state-of-the-art technologies and best practices. However, the hybrid model is more complex to manage. Tasks must be divided carefully between the embedded and consultative AI experts to avoid redundant work, delays, and conflicts. Overseeing the alignment between embedded and consultative experts can create additional managerial overhead that may need to be borne to varying degrees by the product manager, design lead, and engineering lead.

The Effect of Boundary Conditions and Path Dependence

Besides considering the pros and cons of the model options depicted in Figure 2, product teams should also account for boundary conditions and path dependence in deciding how to incorporate the AI competency.

Boundary conditions refer to the constraints that shape the environment in which a team must operate. Such conditions may relate to aspects such as organizational structure (encompassing reporting lines, informal hierarchies, and decision-making processes within the company and team), resource availability (in terms of budget, personnel, and tools), regulatory and compliance-related requirements (e.g., legal and/or industry-specific regulations), and market dynamics (spanning the competitive landscape, customer expectations, and market trends). Path dependence refers to how historical decisions can influence current and future decisions; it emphasizes the importance of past events in shaping the later trajectory of an organization. Key aspects leading to such dependencies include historical practices (e.g., established routines and processes), past investments (e.g., in infrastructure, technology, and human capital, leading to potentially irrational decision-making by teams and executives due to the sunk cost fallacy), and organizational culture (covering the shared values, beliefs, and behaviors that have developed over time).

Boundary conditions can limit a product team’s options when it comes to configuring the operating model; some desirable choices may be out of reach (e.g., budget constraints preventing the staffing of an embedded AI expert with a certain specialization). Path dependence can create an adverse type of inertia, whereby teams continue to follow established processes and methods even if better alternatives exist. This can make it challenging to adopt new operating models that require significant changes to existing practices. One way to work around path dependence is to enable different product teams to evolve their respective operating models at different speeds according to their team-specific needs; a team building an AI-first product may choose to invest in embedded AI experts sooner than another team that is exploring potential AI use cases for the first time.

Finally, it is worth remembering that the choice of a product operating model can have far-reaching consequences for the design of the product itself. Conway’s Law states that “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.” In our context, this means that the way product teams are organized, communicate, and incorporate the AI competency can directly impact the architecture of the products and services that they go on to create. For instance, consultative models may be more likely to result in the use of generic AI APIs (which the consultants can reuse across teams), while embedded AI experts may be better positioned to implement product-specific optimizations aided by domain know-how (albeit at the risk of tighter coupling to other components of the product architecture). Companies and teams should therefore be empowered to configure their AI-ready product operating models, giving due consideration to the broader, long-term implications.
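To make the architectural contrast concrete, consider the following minimal Python sketch. All names and the toy scoring logic are hypothetical, invented purely for illustration: the first class stands in for the kind of generic, team-agnostic service a consultant might reuse across products, while the second shows how an embedded expert might wrap it with product-specific feature engineering, gaining accuracy for this product at the cost of coupling to its data format.

```python
class GenericSentimentAPI:
    """Team-agnostic service a consultative model might reuse across products.
    The keyword-counting logic is a placeholder for a real, generic model."""

    def score(self, text: str) -> float:
        lowered = text.lower()
        positive = sum(lowered.count(w) for w in ("good", "great"))
        negative = sum(lowered.count(w) for w in ("bad", "poor"))
        total = positive + negative
        return 0.0 if total == 0 else (positive - negative) / total


class SupportTicketSentiment:
    """Embedded-expert variant: tuned to (and coupled with) this product's
    ticket structure, which it assumes has 'subject' and 'body' fields."""

    def __init__(self, api: GenericSentimentAPI):
        self.api = api  # still delegates to the shared model underneath

    def score_ticket(self, ticket: dict) -> float:
        # Domain-specific feature engineering: weight the subject line more
        # heavily than the body, a choice that presumes knowledge of how
        # this product's customers actually write tickets.
        subject = self.api.score(ticket["subject"])
        body = self.api.score(ticket["body"])
        return 0.7 * subject + 0.3 * body


generic = GenericSentimentAPI()
embedded = SupportTicketSentiment(generic)
ticket = {"subject": "great product", "body": "but bad battery life"}
print(round(embedded.score_ticket(ticket), 2))  # prints 0.4
```

The point is not the toy scoring itself but the shape of the dependency: the embedded variant knows the product’s ticket schema and weighting conventions, so changing either ripples into the AI component, exactly the kind of structural echo of team organization that Conway’s Law predicts.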
