Six Organizational Models for Data Science

Introduction

Data science teams can operate in myriad ways within a company. These organizational models influence not only the type of work the team does, but also the team’s culture, goals, impact, and overall value to the company. 

Adopting the wrong organizational model can limit impact, cause delays, and compromise the morale of a team. As a result, leadership should be aware of these different organizational models and explicitly select models aligned to each project’s goals and their team’s strengths.

This article explores six distinct models we’ve observed across numerous organizations. These models are primarily differentiated by who initiates the work, what output the data science team generates, and how the data science team is evaluated. We note common pitfalls, pros, and cons of each model to help you determine which might work best for your organization.

1. The scientist 

Prototypical scenario

A scientist at a university studies changing ocean temperatures and subsequently publishes peer-reviewed journal articles detailing their findings. They hope that policymakers will one day recognize the importance of changing ocean temperatures, read their papers, and take action based on their research.

Who initiates

Data scientists working within this model typically initiate their own projects, driven by their intellectual curiosity and desire to advance knowledge within a field.

How the DS team is judged

A scientist’s output is often assessed by how their work impacts the thinking of their peers. For instance, did their work draw other experts’ attention to an area of study, resolve fundamental open questions, enable subsequent discoveries, or lay the groundwork for later applications?

Common pitfalls to avoid

Basic scientific research pushes humanity’s knowledge forward, delivering foundational knowledge that enables long-term societal progress. However, data science projects that use this model risk focusing on questions that have large long-term implications but limited opportunities for near-term impact. Moreover, the model decouples scientists from decision-makers, and thus may not cultivate the shared context, communication styles, or relationships that are necessary to drive action (e.g., regrettably little action has resulted from all the research on climate change). 

Pros

  • The opportunity to develop deep expertise at the forefront of a field
  • Potential for groundbreaking discoveries
  • Attracts strong talent that values autonomy

Cons

  • May struggle to drive outcomes based on findings
  • May lack alignment with organizational priorities
  • Many interesting questions don’t have large commercial implications

2. The business intelligence provider 

Prototypical scenario

A marketing team requests data about the open and click-through rates for each of their recent emails. The Business Intelligence team responds with a spreadsheet or dashboard that displays the requested data.
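
By way of illustration, here is a minimal sketch of how such a request might be serviced with pandas. The table and its columns (email_id, sends, opens, clicks) are hypothetical, and note that some teams compute click-through rate against opens rather than sends.

```python
# Minimal sketch of servicing the request above with pandas.
# The table and its columns (email_id, sends, opens, clicks) are hypothetical.
import pandas as pd

emails = pd.DataFrame({
    "email_id": ["june-promo", "july-newsletter", "august-promo"],
    "sends":  [12000, 11500, 13100],
    "opens":  [3100, 2800, 4200],
    "clicks": [410, 330, 690],
})

emails["open_rate"] = emails["opens"] / emails["sends"]
# Here CTR is clicks per send; some teams instead report clicks per open.
emails["click_through_rate"] = emails["clicks"] / emails["sends"]

print(emails[["email_id", "open_rate", "click_through_rate"]].round(3))
```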

Who initiates

An operational (Marketing, Sales, etc.) or Product team submits a ticket or makes a request directly to a data science team member. 

How the DS team is judged

The BI team’s contribution is judged by how quickly and accurately they service inbound requests. 

Common pitfalls to avoid

BI teams can efficiently execute against well-specified inbound requests. Unfortunately, requests typically won’t include substantial context about a domain, the decisions being made, or the company’s larger goals. As a result, BI teams often struggle to drive innovation or strategically meaningful levels of impact. In the worst situations, the BI team’s work is used to justify decisions that were already made. 

Pros

  • Clear roles and responsibilities for the data science team
  • Rapid execution against specific requests
  • Direct fulfillment of stakeholder needs (Happy partners!)

Cons

  • Rarely capitalizes on the non-executional skills of data scientists
  • Unlikely to drive substantial innovation
  • Top talent will typically seek a broader and less executional scope

3. The analyst 

Prototypical scenario

A product team requests an analysis of the recent spike in customer churn. The data science team studies how churn spiked and what might have driven the change. The analyst presents their findings in a meeting, and the analysis is preserved in a slide deck that is shared with all attendees. 

Who initiates

Similar to the BI model, the Analyst model typically begins with an operational or product team’s request. 

How the DS team is judged

The Analyst’s work is typically judged by whether the requester feels they received useful insights. In the best cases, the analysis will point to an action that is subsequently taken and yields a desired outcome (e.g., an analysis indicates that the spike in client churn occurred just as page load times increased on the platform. Subsequent efforts to decrease page load times return churn to normal levels).
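
As a sketch of that best case, suppose we have a hypothetical weekly metrics table with churn and p95 page load time. A first pass might check whether the two series moved together and when the jump began; the correlation is a lead to investigate, not proof of causation.

```python
# Sketch of a first-pass churn analysis; all numbers are hypothetical.
import pandas as pd

weekly = pd.DataFrame({
    "week": pd.date_range("2024-01-07", periods=8, freq="W"),
    "churn_rate":       [0.021, 0.020, 0.022, 0.021, 0.035, 0.037, 0.036, 0.038],
    "p95_page_load_ms": [900, 880, 910, 905, 1700, 1750, 1690, 1720],
})

# Did churn and page load time move together? (Suggestive, not causal.)
corr = weekly["churn_rate"].corr(weekly["p95_page_load_ms"])
print(f"churn vs. p95 load time correlation: {corr:.2f}")

# When did the spike begin? Flag the largest week-over-week jump.
spike_idx = weekly["churn_rate"].diff().idxmax()
print(f"largest churn jump began week of {weekly.loc[spike_idx, 'week'].date()}")
```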

Common pitfalls to avoid

An analyst’s insights can guide critical strategic decisions while helping the data science team develop invaluable domain expertise and relationships. However, if an analyst doesn’t sufficiently understand the operational constraints in a domain, then their analyses may not be directly actionable. 

Pros

  • Analyses can provide substantive and impactful learnings 
  • Capitalizes on the data science team’s strengths in interpreting data
  • Creates opportunity to build deep subject matter expertise 

Cons

  • Insights may not always be directly actionable
  • May not have visibility into the impact of an analysis
  • Analysts at risk of becoming “Armchair Quarterbacks”

4. The recommender

Prototypical scenario

A product manager requests a system that ranks products on a website. The Recommender develops an algorithm and conducts A/B testing to measure its impact on sales, engagement, etc. The Recommender iteratively improves their algorithm via a series of A/B tests. 

Who initiates

A product manager typically initiates this type of project, recognizing the need for a recommendation engine to improve the users’ experience or drive business metrics. 

How the DS team is judged

The Recommender is ideally judged by their impact on key performance indicators like sales efficiency or conversion rates. The precise form this takes will often depend on whether the recommendation engine is client-facing or back-office facing (e.g., lead scores for a sales team). 
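
For illustration, one common way to judge such a test is a two-proportion z-test on conversions; the counts below are hypothetical.

```python
# Sketch of judging an A/B test on conversion rate; counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [620, 540]      # [new algorithm, control]
exposures   = [10000, 10000]  # users in each arm

z_stat, p_value = proportions_ztest(conversions, exposures)
lift = conversions[0] / exposures[0] - conversions[1] / exposures[1]
print(f"absolute lift: {lift:.2%}, p-value: {p_value:.4f}")
```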

Common pitfalls to avoid

Recommendation projects thrive when they are aligned to high-frequency decisions that each have low incremental value (e.g., what song to play next). Training and assessing recommendations may be challenging for low-frequency decisions because of low data volume. Even assessing whether recommendation adoption is warranted can be challenging if each decision has high incremental value. To illustrate, consider efforts to develop and deploy computer vision systems for medical diagnoses. Despite their objectively strong performance, adoption has been slow because cancer diagnoses are relatively low frequency and have very high incremental value. 

Pros

  • Clear objectives and opportunity for measurable impact via A/B testing
  • Potential for significant ROI if the recommendation system is successful
  • Direct alignment with customer-facing outcomes and the organization’s goals

Cons

  • Errors will directly hurt client or financial outcomes
  • Internally facing recommendation engines may be hard to validate
  • Potential for algorithm bias and negative externalities 

5. The automator

Prototypical scenario

A self-driving car takes its owner to the airport. The owner sits in the driver’s seat, just in case they need to intervene, but they rarely do.

Who initiates

An operational, product, or data science team may identify an opportunity to automate a task. 

How the DS team is judged

The Automator is evaluated on whether their system produces better or cheaper outcomes than when a human was executing the task.

Common pitfalls to avoid

Automation can deliver superhuman performance or remove substantial costs. However, automating a complex human task can be very challenging and expensive, particularly if it is embedded in a complex social or legal system. Moreover, framing a project around automation encourages teams to mimic human processes, which may prove challenging because of the distinct strengths and weaknesses of humans versus algorithms. 
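
To make the ROI pitfall concrete, here is a back-of-the-envelope break-even sketch; every figure in it is an illustrative assumption.

```python
# Back-of-the-envelope automation ROI; all figures are illustrative assumptions.
build_cost            = 500_000   # one-time development cost ($)
annual_maintenance    = 120_000   # ongoing upkeep ($/year)
human_cost_per_task   = 4.00      # fully loaded manual cost ($/task)
machine_cost_per_task = 0.25      # compute + human oversight ($/task)
tasks_per_year        = 200_000

net_annual = (
    (human_cost_per_task - machine_cost_per_task) * tasks_per_year
    - annual_maintenance
)
# If net_annual <= 0, the automation never pays for itself.
print(f"net annual savings: ${net_annual:,.0f}; "
      f"break-even in {build_cost / net_annual:.1f} years")
```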

Pros

  • May drive substantial improvements or cost savings
  • Consistent performance without the variability intrinsic to human decisions
  • Frees up human resources for higher-value, more strategic activities

Cons

  • Automating complex tasks can be resource-intensive, and thus yield low ROI
  • Ethical considerations around job displacement and accountability
  • Challenging to maintain and update as conditions evolve

6. The decision supporter

Prototypical scenario

An end user opens Google Maps and types in a destination. Google Maps presents multiple possible routes, each optimized for different criteria like travel time, avoiding highways, or using public transit. The user reviews these options and selects the one that best aligns with their preferences before they drive along their chosen route.

Who initiates

The data science team often recognizes an opportunity to assist decision-makers by distilling a large space of possible actions into a small set of high-quality options that each optimize for a different outcome (e.g., shortest route vs. fastest route).
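
A minimal sketch of that distillation step, assuming two criteria to minimize (travel time and highway distance): discard every option that another option beats on both criteria, leaving a short Pareto-optimal menu. The routes are hypothetical.

```python
# Sketch of distilling options to a Pareto-optimal short list; data is hypothetical.
routes = [
    {"name": "A", "minutes": 32, "highway_km": 18.0},
    {"name": "B", "minutes": 41, "highway_km": 0.0},
    {"name": "C", "minutes": 45, "highway_km": 6.0},   # beaten by B on both
    {"name": "D", "minutes": 36, "highway_km": 9.0},
]

def dominated(r, options):
    """True if some other option is at least as good on both criteria
    (lower is better) and strictly better on at least one."""
    return any(
        o["minutes"] <= r["minutes"] and o["highway_km"] <= r["highway_km"]
        and (o["minutes"] < r["minutes"] or o["highway_km"] < r["highway_km"])
        for o in options
    )

short_list = [r for r in routes if not dominated(r, routes)]
print([r["name"] for r in short_list])  # -> ['A', 'B', 'D']
```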

How the DS team is judged

The Decision Supporter is evaluated based on whether their system helps users select good options and then experience the promised outcomes (e.g., did the trip take the expected time, and did the user avoid highways as promised).

Common pitfalls to avoid

Decision support systems capitalize on the respective strengths of humans and algorithms, so their success depends on how well the two collaborate. If the human doesn’t want or trust the input of the algorithmic system, then this kind of project is much less likely to drive impact. 

Pros

  • Capitalizes on the strengths of machines to make accurate predictions at large scale, and the strengths of humans to make strategic trade-offs 
  • Engagement of the data science team in the project’s inception and framing increases the likelihood that it will produce an innovative and strategically differentiating capability for the company 
  • Provides transparency into the decision-making process

Cons

  • Requires significant effort to model and quantify various trade-offs
  • Users may struggle to understand or weigh the presented trade-offs
  • Complex to validate that predicted outcomes match actual results

A portfolio of projects

Under- or overutilizing particular models can prove detrimental to a team’s long-term success. For instance, we’ve observed teams avoid BI projects and then suffer from a lack of alignment about how goals are quantified. Likewise, teams that avoid Analyst projects may struggle because they lack critical domain expertise. 

Even more frequently, we’ve observed teams overutilize a subset of models and become entrapped by them. This process is illustrated in a case study that we experienced: 

A new data science team was created to partner with an existing operational team. The operational team was excited to become “data driven,” so they submitted many requests for data and analysis. To keep their heads above water, the data science team overutilized the BI and Analyst models. This reinforced the operational team’s tacit belief that the data team existed to service their requests. 

Eventually, the data science team became frustrated with their inability to drive innovation or directly quantify their impact. They fought to secure the time and space to build an innovative Decision Support system. But after it launched, the operational team rarely used it. 

The data science team had trained their cross-functional partners to view them as a supporting org rather than joint owners of decisions. So their latest project felt like an “armchair quarterback”: it expressed strong opinions without sharing ownership of execution or outcomes. 

Overreliance on the BI and Analyst models had entrapped the team. Launching the new Decision Support system proved a time-consuming and frustrating process for all parties. A top-down mandate was eventually required to drive enough adoption to assess the system. It worked!

In hindsight, adopting a broader portfolio of project types earlier could have prevented this situation. For instance, instead of culminating with an insight, some Analyst projects should have generated strong Recommendations about particular actions. And the data science team should have partnered with the operational team to see this work all the way through execution to final assessment. 

Conclusion

Data science leaders should intentionally adopt an organizational model for each project based on its goals, constraints, and the surrounding organizational dynamics. Moreover, they should be mindful to build self-reinforcing portfolios of different project types. 

To select a model for a project, consider:

  1. The nature of the problems you’re solving: Are the motivating questions exploratory or well-defined? 
  2. Desired outcomes: Are you seeking incremental improvements or innovative breakthroughs? 
  3. Organizational hunger: How much support will the project receive from relevant operating teams?
  4. Your team’s skills and interests: How strong are your team’s communication vs. production-coding skills?
  5. Available resources: Do you have the bandwidth to maintain and extend a system in perpetuity? 
  6. Readiness: Does your team have the expertise and relationships to make a particular type of project successful? 