Six Organizational Models for Data Science


Introduction

Data science teams can operate in myriad ways within a company. These organizational models influence not only the type of work the team does, but also the team’s culture, goals, impact, and overall value to the company. 

Adopting the wrong organizational model can limit impact, cause delays, and compromise the morale of a team. As a result, leadership should be aware of these different organizational models and explicitly select models aligned to each project’s goals and their team’s strengths.

This article explores six distinct models we’ve observed across numerous organizations. These models are primarily differentiated by who initiates the work, what output the data science team generates, and how the data science team is evaluated. We note common pitfalls, pros, and cons of each model to help you determine which might work best for your organization.

1. The scientist 

Prototypical scenario

A scientist at a university studies changing ocean temperatures and subsequently publishes peer-reviewed journal articles detailing their findings. They hope that policymakers will one day recognize the importance of changing ocean temperatures, read their papers, and take action based on their research.

Who initiates

Data scientists working within this model typically initiate their own projects, driven by their intellectual curiosity and desire to advance knowledge within a field.

How the DS team is judged

A scientist’s output is often assessed by how their work impacts the thinking of their peers. For instance, did their work draw other experts’ attention to an area of study, resolve fundamental open questions, enable subsequent discoveries, or lay the groundwork for subsequent applications?

Common pitfalls to avoid

Basic scientific research pushes humanity’s knowledge forward, delivering foundational knowledge that enables long-term societal progress. However, data science projects that use this model risk focusing on questions that have large long-term implications but limited opportunities for near-term impact. Moreover, the model encourages the decoupling of scientists from decision makers, so it may not cultivate the shared context, communication styles, or relationships necessary to drive action (e.g., regrettably little action has resulted from all the research on climate change). 

Pros

  • The opportunity to develop deep expertise at the forefront of a field
  • Potential for groundbreaking discoveries
  • Attracts strong talent that values autonomy

Cons

  • May struggle to drive outcomes based on findings
  • May lack alignment with organizational priorities
  • Many interesting questions don’t have large commercial implications

2. The business intelligence team 

Prototypical scenario

A marketing team requests the open and click-through rates for each of their recent email campaigns. The Business Intelligence team responds with a spreadsheet or dashboard displaying the requested data.
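In practice, such a request reduces to a few ratio computations. A minimal plain-Python sketch, with hypothetical campaign names and counts, of how these rates might be derived:

```python
def campaign_metrics(sent: int, opened: int, clicked: int) -> dict:
    """Return open rate and click-through rate for one email campaign."""
    if sent == 0:
        return {"open_rate": 0.0, "ctr": 0.0}
    return {
        "open_rate": opened / sent,  # opens per email delivered
        "ctr": clicked / sent,       # clicks per email delivered
    }

# Hypothetical campaigns and counts.
campaigns = {
    "spring_sale": campaign_metrics(sent=10_000, opened=2_400, clicked=480),
    "newsletter_05": campaign_metrics(sent=8_500, opened=1_700, clicked=255),
}

for name, metrics in campaigns.items():
    print(name, metrics)
```

Note that even here a definitional choice lurks: CTR is sometimes computed per open rather than per delivery, which is exactly the kind of context a bare ticket rarely specifies.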

Who initiates

An operational (Marketing, Sales, etc.) or Product team submits a ticket or makes a request directly to a data science team member. 

How the DS team is judged

The BI team’s contribution is judged by how quickly and accurately they service inbound requests. 

Common pitfalls to avoid

BI teams can efficiently execute against well-specified inbound requests. Unfortunately, requests typically won’t include substantial context about the domain, the decisions being made, or the company’s larger goals. As a result, BI teams often struggle to drive innovation or strategically meaningful levels of impact. In the worst situations, the BI team’s work is used to justify decisions that were already made. 

Pros

  • Clear roles and responsibilities for the data science team
  • Rapid execution against specific requests
  • Direct fulfillment of stakeholder needs (Happy partners!)

Cons

  • Rarely capitalizes on the non-executional skills of data scientists
  • Unlikely to drive substantial innovation
  • Top talent will typically seek a broader and less executional scope

3. The analyst 

Prototypical scenario

A product team requests an analysis of the recent spike in customer churn. The data science team studies how churn spiked and what might have driven the change. The analyst presents their findings in a meeting, and the analysis is persisted in a slide deck that is shared with all attendees. 

Who initiates

Similar to the BI model, the Analyst model typically begins with an operational or product team’s request. 

How the DS team is judged

The Analyst’s work is typically judged by whether the requester feels they received useful insights. In the best cases, the analysis points to an action that is subsequently taken and yields a desired outcome (e.g., an analysis shows that the spike in customer churn began just as page load times increased on the platform; subsequent efforts to decrease page load times return churn to normal levels).
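An analysis like the churn example often begins by checking whether two series move together. A toy sketch with invented weekly numbers (the `pearson` helper is ours, not a named library API):

```python
import statistics

# Invented weekly series: median page load time (s) and churn rate (%).
load_times = [1.1, 1.2, 1.1, 2.8, 3.0, 2.9]
churn_rate = [2.0, 2.1, 2.0, 4.5, 4.8, 4.6]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(load_times, churn_rate)
print(f"correlation(load time, churn): {r:.3f}")
```

A strong correlation is only a lead, not a verdict: the causal claim is earned later, when shipping the load-time fix actually returns churn to normal.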

Common pitfalls to avoid

Analysts’ insights can guide critical strategic decisions while helping the data science team develop invaluable domain expertise and relationships. However, if an analyst doesn’t sufficiently understand the operational constraints of a domain, their analyses may not be directly actionable. 

Pros

  • Analyses can provide substantive and impactful learnings 
  • Capitalizes on the data science team’s strengths in interpreting data
  • Creates opportunity to build deep subject matter expertise 

Cons

  • Insights may not always be directly actionable
  • May not have visibility into the impact of an analysis
  • Analysts at risk of becoming “Armchair Quarterbacks”

4. The recommender

Prototypical scenario

A product manager requests a system that ranks products on a website. The Recommender develops an algorithm and conducts A/B testing to measure its impact on sales, engagement, etc. The Recommender iteratively improves their algorithm via a series of A/B tests. 

Who initiates

A product manager typically initiates this type of project, recognizing the need for a recommendation engine to improve the users’ experience or drive business metrics. 

How the DS team is judged

The Recommender is ideally judged by their impact on key performance indicators like sales efficiency or conversion rates. The precise form that this takes will often depend on whether the recommendation engine is client or back office facing (e.g., lead scores for a sales team).  
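Measuring that KPI impact typically means comparing conversion rates between the control and treatment arms of an A/B test. A minimal two-proportion z-test sketch with invented counts (a real experiment would also account for statistical power, multiple comparisons, and run length):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference in conversion rates between two arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented experiment: control ranking vs. new recommendation algorithm.
z = two_proportion_z(conv_a=500, n_a=10_000, conv_b=590, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level (two-sided)
```

This is where the model’s low-frequency pitfall bites: with only a handful of decisions per month, the arms never accumulate enough samples for the z-statistic to clear the significance bar.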

Common pitfalls to avoid

Recommendation projects thrive when they are aligned to high-frequency decisions that each have low incremental value (e.g., what song to play next). Training and assessing recommendations may be challenging for low-frequency decisions because of low data volume. Even assessing whether adopting recommendations is warranted can be challenging if each decision has high incremental value. To illustrate, consider efforts to develop and deploy computer vision systems for medical diagnoses: despite their objectively strong performance, adoption has been slow because cancer diagnoses are relatively low frequency and have very high incremental value. 

Pros

  • Clear objectives and opportunity for measurable impact via A/B testing
  • Potential for significant ROI if the recommendation system is successful
  • Direct alignment with customer-facing outcomes and the organization’s goals

Cons

  • Errors will directly hurt client or financial outcomes
  • Internally facing recommendation engines may be hard to validate
  • Potential for algorithm bias and negative externalities 

5. The automator

Prototypical scenario

A self-driving car takes its owner to the airport. The owner sits in the driver’s seat, just in case they need to intervene, but they rarely do.

Who initiates

An operational, product, or data science team can see the opportunity to automate a task. 

How the DS team is judged

The Automator is evaluated on whether their system produces better or cheaper outcomes than when a human was executing the task.
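One way to make “better or cheaper” concrete is to compare cost per correctly completed task between the human baseline and the automated system. A toy sketch with invented figures (note it ignores the cost of errors, which can dominate in high-stakes domains like the medical example above):

```python
def cost_per_correct_task(cost_per_task: float, accuracy: float) -> float:
    """Effective cost per correctly completed task (accuracy in (0, 1])."""
    return cost_per_task / accuracy

# Invented baselines: a human operator vs. an automated system.
human = cost_per_correct_task(cost_per_task=5.00, accuracy=0.96)
machine = cost_per_correct_task(cost_per_task=0.50, accuracy=0.92)

print(f"human: ${human:.2f}, machine: ${machine:.2f} per correct task")
print("automation cheaper" if machine < human else "human cheaper")
```

The comparison only holds if the error distributions are comparable; an automated system whose mistakes are rarer but far more costly can lose on expected value while winning on this metric.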

Common pitfalls to avoid

Automation can deliver super-human performance or remove substantial costs. However, automating a complex human task can be very challenging and expensive, particularly if it is embedded in a complex social or legal system. Moreover, framing a project around automation encourages teams to mimic human processes, which may prove challenging because humans and algorithms have distinct strengths and weaknesses. 

Pros

  • May drive substantial improvements or cost savings
  • Consistent performance without the variability intrinsic to human decisions
  • Frees up human resources for higher-value, more strategic activities

Cons

  • Automating complex tasks can be resource-intensive, and thus low ROI
  • Ethical considerations around job displacement and accountability
  • Challenging to maintain and update as conditions evolve

6. The decision supporter

Prototypical scenario

An end user opens Google Maps and types in a destination. Google Maps presents multiple possible routes, each optimized for different criteria like travel time, avoiding highways, or using public transit. The user reviews these options and selects the one that best aligns with their preferences before they drive along their chosen route.

Who initiates

The data science team often recognizes an opportunity to assist decision makers by distilling a large space of possible actions into a small set of high-quality options, each optimizing for a different outcome (e.g., shortest route vs. fastest route).

How the DS team is judged

The Decision Supporter is evaluated based on whether their system helps users select good options and then experience the promised outcomes (e.g., did the trip take the expected time, and did the user avoid highways as promised).
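Distilling a large option space into a few good choices, as in the routing example, is often a Pareto-filtering step: discard any option that another option beats on every criterion. A sketch with hypothetical routes scored on travel time and highway distance (lower is better on both):

```python
# Hypothetical candidate routes: (name, travel_minutes, highway_km).
routes = [
    ("A", 30, 20.0),
    ("B", 45, 0.0),
    ("C", 50, 5.0),   # dominated by B: slower AND more highway driving
    ("D", 35, 10.0),
]

def pareto_front(options):
    """Keep options not dominated on (time, highway_km); lower is better."""
    front = []
    for name, t, h in options:
        dominated = any(
            t2 <= t and h2 <= h and (t2 < t or h2 < h)
            for _, t2, h2 in options
        )
        if not dominated:
            front.append(name)
    return front

print(pareto_front(routes))  # the user picks among the surviving trade-offs
```

The algorithm prunes only the objectively bad options; choosing among the survivors, each best on a different criterion, is deliberately left to the human.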

Common pitfalls to avoid

Decision support systems capitalize on the respective strengths of humans and algorithms, so their success depends on how well the two collaborate. If the human doesn’t want or trust the algorithmic system’s input, this kind of project is much less likely to drive impact. 

Pros

  • Capitalizes on the strengths of machines to make accurate predictions at large scale, and the strengths of humans to make strategic trade-offs 
  • Engaging the data science team in the project’s inception and framing increases the likelihood that it will produce an innovative and strategically differentiating capability for the company 
  • Provides transparency into the decision-making process

Cons

  • Requires significant effort to model and quantify various trade-offs
  • Users may struggle to understand or weigh the presented trade-offs
  • Complex to validate that predicted outcomes match actual results

A portfolio of projects

Under- or overutilizing particular models can prove detrimental to a team’s long-term success. For instance, we’ve observed teams that avoid BI projects suffer from a lack of alignment about how goals are quantified, and teams that avoid Analyst projects struggle because they lack critical domain expertise. 

Even more frequently, we’ve observed teams overutilize a subset of models and become entrapped by them. This process is illustrated by a case study that we experienced: 

A new data science team was created to partner with an existing operational team. The operational team was excited to become “data driven,” so they submitted many requests for data and analysis. To keep their heads above water, the data science team overutilized the BI and Analyst models. This reinforced the operational team’s tacit belief that the data team existed to service their requests. 

Eventually, the data science team became frustrated with their inability to drive innovation or directly quantify their impact. They fought to secure the time and space to build an innovative Decision Support system. But after it launched, the operational team rarely used it. 

The data science team had trained their cross-functional partners to view them as a supporting org rather than joint owners of decisions. So their latest project felt like an “armchair quarterback”: it expressed strong opinions without sharing ownership of execution or outcomes. 

Overreliance on the BI and Analyst models had entrapped the team. Launching the new Decision Support system had proven a time-consuming and frustrating process for all parties. A top-down mandate was eventually required to drive enough adoption to assess the system. It worked!

In hindsight, adopting a broader portfolio of project types earlier could have prevented this situation. For instance, instead of culminating in an insight, some Analyst projects should have generated strong Recommendations about particular actions. And the data science team should have partnered with the operational team to see this work all the way through execution to final assessment. 

Conclusion

Data science leaders should intentionally adopt an organizational model for each project based on its goals, constraints, and the surrounding organizational dynamics. Moreover, they should be mindful to build self-reinforcing portfolios of different project types. 

To select a model for a project, consider:

  1. The nature of the problems you’re solving: Are the motivating questions exploratory or well-defined? 
  2. Desired outcomes: Are you seeking incremental improvements or innovative breakthroughs? 
  3. Organizational hunger: How much support will the project receive from relevant operating teams?
  4. Your team’s skills and interests: How strong are your team’s communication vs production coding skills?
  5. Available resources: Do you have the bandwidth to maintain and extend a system in perpetuity? 
  6. Readiness: Does your team have the expertise and relationships to make a particular type of project successful? 
Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Google Cloud partners with mLogica to offer mainframe modernization

Other than the partnership with mLogica, Google Cloud also offers a variety of other mainframe migration tools, including Radis and G4 that can be employed to modernize specific applications. Enterprises can also use a combination of migration tools to modernize their mainframe applications. Some of these tools include the Gemini-powered

Read More »

Salamander receives onshore planning permission

The Salamander floating offshore wind farm celebrated having received planning permission in principle for its onshore works in “record time” . The developers submitted the onshore application to Aberdeenshire Council in August last year, laying out plans for a site roughly 1.2 miles (2km) north of Peterhead for infrastructure including the substation, a 50MW battery storage facility and the onshore export cables. A second application was also made to the Energy Consents Unit of the Scottish government for the wind farm’s energy balancing infrastructure, which includes the battery. This has been validated and is progressing through the assessment process. Salamander project director Hugh Yendole said: “We are incredibly proud to have secured an almost-unheard-of unanimous approval in record time – only seven months after submission. We have achieved a number of significant ‘firsts’ with this consent – the first combined onshore substation and battery consent and the first consent of any of the innovation projects awarded exclusivity agreements under INTOG. “It is also worth noting that the joint venture team that delivered this consent did so under somewhat challenging conditions, especially differentiating Salamander’s low-impact grid connection from the profusion of GW-scale infrastructure that is planned.” © Supplied by Big PartnershipA map showing the location of the planned Salamander floating offshore wind project. The 100MW project is being developed by Ørsted, Simply Blue Group and Subsea7, and will install up to seven wind turbines on floating foundations in water 21.75 miles (35km) off Peterhead. The project also submitted its offshore consent application in May and is awaiting approval from Scottish Ministers – this would pave the way to develop the project’s offshore components. Salamander was a successful innovation bidder in Crown Estate Scotland’s innovation and targeted oil and gas (INTOG) leasing round. 
The application envisions onshore construction starting in January 2027 at

Read More »

Video: Bluefield banks on solar expansion despite global headwinds

In an exclusive studio interview, James Armstrong, co-founder and managing partner of investment manager Bluefield Partners, discusses how the solar investor’s evolving business model has been a “big driver” of shareholder performance. The prospect of zonal pricing does not worry Armstrong, who said it is unlikely to happen “overnight” with much of the FTSE 250 listed trust’s portfolio being in the southern half of England and Wales. According to Armstrong, investing at the development stage through to operation has contributed to shareholder performance. This is in spite of the major headwind that has faced listed investment trusts in the past few years: the fact that most trade at a discount to the underlying net asset value of the portfolio – primarily as a result of rising interest prices. Bluefield started assembling development-stage projects at least five years ago on the basis that contracts for difference linked assets, such as the Yelvertoft solar farm, would gradually replace assets in the portfolio under the former Renewables Obligation subsidy. Last year, it partnered with £4.1 billion infrastructure investment fund GLIL Infrastructure, which invests on behalf of local authority groups, to expand on this strategy.

Read More »

Ashtead Technology sees “no shortage” of M&A opportunities

Aberdeen’s Ashtead Technology sees ample opportunities to drive its campaign of mergers and acquisitions (M&A) in the near future. Speaking to Energy Voice, Ashtead chief strategy and marketing officer Colin Ross said: “If you look at the wider market, we see no shortage of opportunities to continue to explore M&A in the year ahead.” Ashtead Technology has been on the acquisition trail since 2017, adding nine companies in the past seven years. It acquired WeSubsea and Hiretech in late 2022, ACE Winches in November 2023, and Seatronics and J2 Subsea in November 2024. These acquisitions “have been tremendously important for our business, great opportunities to grow and scale, to bring in talent to broaden our offering and really build a stronger business with more capability to serve our customers in a really effective way,” Ross said. Back in 2024, after announcing the company’s full-year results for 2023, CEO Allan Pirie said that he saw an opportunity to grow the company by “consolidating a fragmented market”. A year on and the Scottish energy services sector has seen two major movements in the M&A space in a matter of weeks. US fund manager Apollo snapped up Aberdeen-based offshore energy services group OEG Group in a deal that valued the firm at more than $1 billion (£770m). It will take an unspecified majority stake in the group, leaving its former owner, Los Angeles-based Oaktree, with a small minority stake. In addition, the future of services group Wood was thrown into doubt as Dubai-based Sidara revived its takeover bid for the company. Sidara abandoned its previous attempts to buy the company in August after offering 230p per share. Since then, Wood’s share price has fallen considerably, with the company extending the deadline for Sidara to make a final offer for the Aberdeen-based services company.

Read More »

Scotland generated record amount of renewable electricity in 2024

Scotland generated a record amount of energy from renewables last year – with data also showing the electricity generated north of the border helped power the rest of the UK. Renewable sources such as onshore and offshore wind, hydro power and solar generated a total of 38.4TWh of electricity in 2024 – an increase of 13.2% on the previous year and 8.4% higher than the previous peak of 35.5TWh, which was recorded in 2022. The majority of energy was produced by wind technology, with onshore and offshore wind projects generating 30.1TWh, the data, which was published by the Scottish Government, showed. Meanwhile, hydro power generated 5.2TWh, solar produced 0.6TWh and other forms of renewables resulted in 2.6TWh of electricity. The report also revealed: “Scotland continues to generate more electricity than it needs. In 2024, there was 19.7TWh of net electricity exports to other UK nations.” The report also said Scotland’s capacity to produce electricity from renewable sources had “increased substantially over the past 10 years”. In 2024 alone, capacity increased by 14.3% to stand at 17.6GW, compared to 15.4GW in 2023. As of the end of 2024, a total of 904 further electricity projects were being planned, with a combined capacity of 65.4GW. These included 640 projects for energy generation, with an estimated capacity of 37.5GW, along with 264 electricity storage projects, with an estimated capacity of 27.9GW. Environmental campaigners at Friends of the Earth Scotland said the figures gave a “glimpse of what’s possible for Scotland”. Speaking about the “positive renewable energy statistics,” head of campaigns Caroline Rance said: “The benefits of renewables are huge, but they are not yet sufficiently reaching our communities and the workers who are responsible for their deployment, whether that is due to manufacturing taking place overseas or big business sucking up all the
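The headline growth figures in the release can be cross-checked against the underlying numbers. A minimal sketch (illustrative only, not part of the Scottish Government report):

```python
# Reported Scottish renewable generation and capacity figures for 2024.
generation = {"wind": 30.1, "hydro": 5.2, "solar": 0.6, "other": 2.6}  # TWh

# Components sum to ~38.5TWh; the headline total of 38.4TWh reflects rounding.
total = sum(generation.values())

# Capacity grew from 15.4GW (2023) to 17.6GW (2024) -- the reported 14.3%.
capacity_2023, capacity_2024 = 15.4, 17.6  # GW
capacity_growth = (capacity_2024 / capacity_2023 - 1) * 100

print(round(total, 1), round(capacity_growth, 1))  # 38.5 14.3
```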

Read More »

Kolibri Sees Earnings Slip despite Production, Revenue Growth

Kolibri Global Energy Inc. has reported a 6 percent decline in net income from $19.3 million for 2023 to $18.1 million for 2024. The company linked the decrease to a reduced unrealized gain in commodity contracts for 2024. Increased revenue was counterbalanced by higher operating costs and elevated income taxes, along with an uptick in general and administrative expenses, primarily due to the company’s NASDAQ listing, and interest expenses, Kolibri said in a media release. Net revenues for 2024 were $58.5 million, up 16 percent compared to 2023 primarily due to 24 percent higher production. In 2024, Kolibri’s average production reached 3,478 barrels of oil equivalent per day (boepd), up 24 percent compared to the 2,796 boepd produced in 2023 and in line with guidance. This growth is attributed to output from wells drilled and completed during 2024. The company posted earnings before interest, taxes, depreciation, and amortization (EBITDA) of $44 million, up 13 percent from the $39.1 million reported for 2023. “We are pleased with the continued production and cash flow growth of the Company in 2024. We were able to meet our forecasted guidance in revenue and adjusted EBITDA even though actual prices were lower than the price used in our forecast”, Wolf Regener, Kolibri’s President and Chief Executive Officer, said. “The Company increased production by 24 percent, which was in line with our forecast, while only spending $31.3 million on capital expenditures, which was less than we had forecasted and a 41 percent decrease from the prior year. The cost efficiencies that our field operations team has achieved have allowed us to continue to grow production and revenue and drill 50 percent longer laterals while spending 12 percent less per well than we had forecast to spend in our 2023 drilling program”, Regener said. Regener anticipates continued
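Kolibri’s reported year-over-year percentages are consistent with the underlying figures. A quick arithmetic sketch (illustrative only, not from the company’s release):

```python
# Kolibri's reported 2023 vs 2024 figures.
net_income = {"2023": 19.3, "2024": 18.1}  # $M
production = {"2023": 2796, "2024": 3478}  # boepd
revenue_2024, revenue_growth = 58.5, 0.16  # $M, reported up 16%

# Net income fell ~6%; production rose ~24%, matching the release.
income_change = (net_income["2024"] / net_income["2023"] - 1) * 100
prod_growth = (production["2024"] / production["2023"] - 1) * 100

# Back out the implied 2023 revenue from the reported 16% growth.
revenue_2023 = revenue_2024 / (1 + revenue_growth)

print(round(income_change), round(prod_growth), round(revenue_2023, 1))
# -6 24 50.4
```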

Read More »

Cnooc Profit Rises on Increased Oil and Gas Drilling Output

Cnooc Ltd. posted higher annual earnings and boosted its dividend, as growth in energy output offset weaker prices. Net income rose to 137.9 billion yuan ($19 billion) in 2024, from 123.9 billion yuan the previous year, China’s largest offshore oil-and-gas driller said in a filing. While that missed expectations of 144.6 billion yuan, and was shy of the record profit in 2022, the full-year dividend rose 12% to HK$1.40 (18 cents). Output expanded to 726.8 million barrels of oil equivalent, from 678 million barrels a year earlier, with overseas growth led by supplies from Guyana. The state-owned company has led Beijing’s efforts to enhance energy security and its operations have now delivered a sixth year of record production. Cnooc’s focus on extraction leaves its earnings heavily dependent on global oil prices, which averaged about 3% less in 2024 on-year. But it also means the company is relatively unaffected by headwinds to demand faced by downstream peers. Earlier this week, China’s biggest refiner, Sinopec, reported a tumble in profits as the electric-vehicle boom weighs on fuel consumption. At this point, the company will stick to its three-year output targets through to 2027, including a push to increase gas production, Vice Chairman Zhou Xinhuai said at a briefing. Among its overseas interests, Cnooc and Exxon Mobil Corp. have merged their arbitration claims against Chevron Corp.’s proposed takeover of Hess Corp., a deal that would allow the US oil supermajor to enter Guyana’s Stabroek Block. A first tribunal hearing is due in May. PetroChina Co. — the country’s largest oil and gas company, whose operations straddle drilling, refining and retail — reports earnings on Sunday. China’s energy giants are increasingly looking to natural gas to drive growth, although domestic prices have stumbled recently due to a slowing economy and a plethora of supply options, from domestic fields and gas

Read More »

Former Arista COO launches NextHop AI for customized networking infrastructure

Sadana argued that unlike traditional networking, where an IT person can just plug a cable into a port and it works, AI networking requires intricate, custom solutions. The core challenge is creating highly optimized, efficient networking infrastructure that can support massive AI compute clusters with minimal inefficiencies.

How NextHop is looking to change the game for hyperscale networking

NextHop AI is working directly alongside its hyperscaler customers to develop and build customized networking solutions. “We are here to build the most efficient AI networking solutions that are out there,” Sadana said. More specifically, Sadana said that NextHop is looking to help hyperscalers in several ways, including:

Compressing product development cycles: “Companies that are doing things on their own can compress their product development cycle by six to 12 months when they partner with us,” he said.

Exploring multiple technological alternatives: Sadana noted that hyperscalers might try to build on their own and will often only be able to explore one or two alternative approaches. With NextHop, Sadana said his company will enable them to explore four to six different alternatives.

Achieving incremental efficiency gains: At the massive cloud scale that hyperscalers operate, even an incremental one percent improvement can have an outsized outcome. “You have to make AI clusters as efficient as possible for the world to use all the AI applications at the right cost structure, at the right economics, for this to be successful,” Sadana said. “So we are participating by making that infrastructure layer a lot more efficient for cloud customers, or the hyperscalers, which, in turn, of course, gives the benefits to all of these software companies trying to run AI applications in these cloud companies.”

Technical innovations: Beyond traditional networking

In terms of what the company is actually building now, NextHop is developing specialized network switches

Read More »

Microsoft abandons data center projects as OpenAI considers its own, hinting at a market shift

A potential ‘oversupply position’

In a new research note, TD Cowen analysts reportedly said that Microsoft has walked away from new data center projects in the US and Europe, purportedly due to an oversupply of the compute clusters that power AI. This follows reports from TD Cowen in February that Microsoft had “cancelled leases in the US totaling a couple of hundred megawatts” of data center capacity. The researchers noted that the company’s pullback was a sign of it “potentially being in an oversupply position,” with demand forecasts lowered. OpenAI, for its part, has reportedly discussed purchasing billions of dollars’ worth of data storage hardware and software to increase its computing power and decrease its reliance on hyperscalers. This fits with its planned Stargate Project, a $500 billion, US President Donald Trump-endorsed initiative to build out its AI infrastructure in the US over the next four years. Based on the easing of exclusivity between the two companies, analysts say these moves aren’t surprising. “When looking at storage in the cloud — especially as it relates to use in AI — it is incredibly expensive,” said Matt Kimball, VP and principal analyst for data center compute and storage at Moor Insights & Strategy. “Those expenses climb even higher as the volume of storage and movement of data grows,” he pointed out. “It is only smart for any business to perform a cost analysis of whether storage is better managed in the cloud or on-prem, and moving forward in a direction that delivers the best performance, best security, and best operational efficiency at the lowest cost.”

Read More »

PEAK:AIO adds power, density to AI storage server

There is also the fact that many people working with AI, such as professors, biochemists, scientists, doctors and clinicians, are not IT professionals and don’t have a traditional enterprise IT department or a data center. “It’s run by people that wouldn’t really know, nor want to know, what storage is,” he said. While the new AI Data Server is a Dell design, PEAK:AIO has worked with Lenovo, Supermicro, and HPE as well as Dell over the past four years, offering to convert their off-the-shelf storage servers into hyper-fast, low-cost, AI-specific storage servers that work with Nvidia protocols such as NVLink, along with NFS and NVMe over Fabrics. It also greatly increased storage capacity by going with 61TB drives from Solidigm. SSDs from the major server vendors typically maxed out at 15TB, according to the vendor. PEAK:AIO competes with VAST, WekaIO, NetApp, Pure Storage and many others in the growing AI workload storage arena. PEAK:AIO’s AI Data Server is available now.

Read More »

SoftBank to buy Ampere for $6.5B, fueling Arm-based server market competition

SoftBank’s announcement suggests Ampere will collaborate with other SBG companies, potentially creating a powerful ecosystem of Arm-based computing solutions. This collaboration could extend to SoftBank’s numerous portfolio companies, including Korean/Japanese web giant LY Corp, ByteDance (TikTok’s parent company), and various AI startups. If SoftBank successfully steers its portfolio companies toward Ampere processors, it could accelerate the shift away from x86 architecture in data centers worldwide.

Questions remain about Arm’s server strategy

The acquisition, however, raises questions about how SoftBank will balance its investments in both Arm and Ampere, given their potentially competing server CPU strategies. Arm’s recent move to design and sell its own server processors to Meta signaled a major strategic shift that already put it in direct competition with its own customers, including Qualcomm and Nvidia. “In technology licensing where an entity is both provider and competitor, boundaries are typically well-defined without special preferences beyond potential first-mover advantages,” Kawoosa explained. “Arm will likely continue making independent licensing decisions that serve its broader interests rather than favoring Ampere, as the company can’t risk alienating its established high-volume customers.” Industry analysts speculate that SoftBank might position Arm to focus on custom designs for hyperscale customers while allowing Ampere to dominate the market for more standardized server processors. Alternatively, the two companies could be merged or realigned to present a unified strategy against incumbents Intel and AMD. “While Arm currently dominates processor architecture, particularly for energy-efficient designs, the landscape isn’t static,” Kawoosa added. “The semiconductor industry is approaching a potential inflection point, and we may witness fundamental disruptions in the next 3-5 years — similar to how OpenAI transformed the AI landscape. 
SoftBank appears to be maximizing its Arm investments while preparing for this coming paradigm shift in processor architecture.”

Read More »

Nvidia, xAI and two energy giants join genAI infrastructure initiative

The new AIP members will “further strengthen the partnership’s technology leadership as the platform seeks to invest in new and expanded AI infrastructure. Nvidia will also continue in its role as a technical advisor to AIP, leveraging its expertise in accelerated computing and AI factories to inform the deployment of next-generation AI data center infrastructure,” the group’s statement said. “Additionally, GE Vernova and NextEra Energy have agreed to collaborate with AIP to accelerate the scaling of critical and diverse energy solutions for AI data centers. GE Vernova will also work with AIP and its partners on supply chain planning and in delivering innovative and high efficiency energy solutions.” The group claimed, without offering any specifics, that it “has attracted significant capital and partner interest since its inception in September 2024, highlighting the growing demand for AI-ready data centers and power solutions.” The statement said the group will try to raise “$30 billion in capital from investors, asset owners, and corporations, which in turn will mobilize up to $100 billion in total investment potential when including debt financing.” Forrester’s Nguyen also noted that the influence of two of the new members — xAI, owned by Elon Musk, along with Nvidia — could easily help with fundraising. Of Musk, Nguyen said: “With his connections, he does not make small quiet moves. As for Nvidia, they are the face of AI. Everything they do attracts attention.” Info-Tech’s Bickley said that the astronomical sums involved in genAI investments are mind-boggling. And yet even more investment is needed — a lot more.

Read More »

IBM broadens access to Nvidia technology for enterprise AI

The IBM Storage Scale platform will support CAS and now will respond to queries using the extracted and augmented data, speeding up the communications between GPUs and storage using Nvidia BlueField-3 DPUs and Spectrum-X networking, IBM stated. The multimodal document data extraction workflow will also support Nvidia NeMo Retriever microservices. CAS will be embedded in the next update of IBM Fusion, which is planned for the second quarter of this year. Fusion simplifies the deployment and management of AI applications and works with Storage Scale, which will handle high-performance storage support for AI workloads, according to IBM.

IBM Cloud instances with Nvidia GPUs

In addition to the software news, IBM said its cloud customers can now use Nvidia H200 instances in the IBM Cloud environment. With increased memory bandwidth (1.4x higher than its predecessor) and capacity, the H200 Tensor Core can handle larger datasets, accelerating the training of large AI models and executing complex simulations, with high energy efficiency and low total cost of ownership, according to IBM. In addition, customers can use the power of the H200 to process large volumes of data in real time, enabling more accurate predictive analytics and data-driven decision-making, IBM stated.

IBM Consulting capabilities with Nvidia

Lastly, IBM Consulting is adding Nvidia Blueprints to its recently introduced AI Integration Service, which offers customers support for developing, building and running AI environments. Nvidia Blueprints offer a suite of pre-validated, optimized, and documented reference architectures designed to simplify and accelerate the deployment of complex AI and data center infrastructure, according to Nvidia. The IBM AI Integration service already supports a number of third-party systems, including Oracle, Salesforce, SAP and ServiceNow environments.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that builds agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »