
Generative AI and Civic Institutions

Different sectors, different goals

Recent events have got me thinking about AI as it relates to our civic institutions — think government, education, public libraries, and so on. We often forget that civic and governmental organizations are inherently deeply different from private companies and profit-making enterprises. They exist to enable people to live their best lives, protect people’s rights, and make opportunities accessible, even if (especially if) this work doesn’t have immediate monetary returns. The public library is an example I often think about, as I come from a library-loving and defending family — their goal is to provide books, cultural materials, social supports, community engagement, and a love of reading to the entire community, regardless of ability to pay.

In the private sector, efficiency is an optimization goal because any dollar spent on providing a product or service to customers is a dollar taken away from the profits. The (simplified) goal is to spend the bare minimum possible to run your business, with the maximum amount returned to you or the shareholders in profit form. In the civic space, on the other hand, efficiency is only a meaningful goal insomuch as it enables higher effectiveness — more of the service the institution provides getting to more constituents.

In the civic space, efficiency is only a meaningful goal insomuch as it enables higher effectiveness — more of the service the institution provides getting to more constituents.

So, if you’re at the library and you could use an AI chatbot to answer patron questions online instead of assigning a librarian to do that, that librarian could be helping in-person patrons, developing educational curricula, supporting community services, or many other things. That’s a general efficiency that could make for higher effectiveness of the library as an institution. Moving from card catalogs to digital catalogs is a prime example of this kind of efficiency-to-effectiveness pipeline, because you can find out from your couch whether the book you want is in stock using search keywords, instead of flipping through hundreds of notecards in a cabinet drawer like we did when I was a kid.

However, we can pivot too hard in the direction of efficiency and lose sight of the end goal of effectiveness. If, for example, your online librarian chat is often used by schoolchildren at home to get homework help, replacing that service with an AI chatbot could be a disaster — after getting incorrect information from such a bot and getting a bad grade at school, a child might be turned off from patronizing the library or seeking help there for a long time, or forever. So, it’s important to deploy generative AI solutions only when they are well thought out and purposeful, not just because the media is telling us that “AI is neat.” (Eagle-eyed readers will know that this is essentially the same advice I’ve given in the past about deploying AI in businesses as well.)

As a result, what we thought was a gain in efficiency leading to net higher effectiveness actually could diminish the number of lifelong patrons and library visitors, which would mean a loss of effectiveness for the library. Sometimes unintended effects from attempts to improve efficiency can diminish our ability to provide a universal service. That is, there may be a tradeoff between making every single dollar stretch as far as it can possibly go and providing reliable, comprehensive services to all the constituents of your institution.

Sometimes unintended effects from attempts to improve efficiency can diminish our ability to provide a universal service.

AI for efficiency

It’s worth taking a closer look at this concept — AI as a driver of efficiency. Broadly speaking, the theory we often hear is that incorporating generative AI more into our workplaces and organizations can increase productivity. Framing it at the most Econ 101 level: using AI, more work can be completed by fewer people in the same amount of time, right?

Let’s challenge some aspects of this idea. AI is useful to complete certain tasks but is sadly inadequate for others. (As our imaginary schoolchild library patron learned, an LLM is not a reliable source of facts, and should not be treated like one.) So, AI’s ability to increase the volume of work being done with fewer people (efficiency) is limited by what kind of work we need to complete.

If our chat interface is only used for simple questions like “What are the library’s hours on Memorial Day?” we can hook up a RAG (Retrieval Augmented Generation) system with an LLM and make that quite useful. But outside of the limited bounds of what information we can provide to the LLM, we should probably set guard rails and make the model refuse to try and answer, to avoid giving out false information to patrons.
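To make this concrete, here is a minimal sketch of what a guardrailed FAQ responder looks like in principle. The knowledge base, topic keys, and refusal message are all invented for illustration; in a real RAG system, the lookup would be an embedding search over the library’s documents and the reply would be drafted by an LLM constrained to the retrieved passages.

```python
# A toy guardrailed responder: answer only from curated passages,
# and refuse rather than guess. All content here is hypothetical.

FAQ_PASSAGES = {
    "memorial day hours": "The library is closed on Memorial Day.",
    "card replacement": "Replacement library cards are $2 at the front desk.",
}

REFUSAL = (
    "I can only answer basic questions about library services. "
    "Please contact the reference desk for anything else."
)

def answer(question: str) -> str:
    """Return a grounded answer, or refuse rather than guess."""
    q = question.lower()
    for topic, passage in FAQ_PASSAGES.items():
        # Crude keyword overlap stands in for embedding similarity here.
        if any(word in q for word in topic.split()):
            return passage
    return REFUSAL
```

The important design choice is the fallback: anything outside the curated material gets a refusal and a referral to a human, never a generated guess — which is exactly the guardrail behavior described above.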

So, let’s play that out. We have a chatbot that does a very limited job, but does it well. The librarian who was on chatbot duty may now have some reduction in the work required of them, but there is still going to be a subset of questions that requires their help. We have some choices: put the librarian on chatbot duty for a reduced number of hours a week, hoping the questions come in when they’re on? Tell people to just call the reference desk or send an email if the chatbot refuses to answer them? Hope that people come into the library in person to ask their questions?

I suspect the likeliest option is actually “the patron will seek their answer elsewhere, perhaps from another LLM like ChatGPT, Claude, or Gemini.” Once again, we’ve ended up in a situation where the library loses patronage because its offering wasn’t meeting the patron’s needs. And to boot, the patron may have gotten another wrong answer somewhere else, for all we know.

I am spinning out this long example just to illustrate that efficiency and effectiveness in the civic environment can have a lot more push and pull than we would initially assume. It’s not to say that AI isn’t useful to help civic organizations stretch their capabilities to serve the public, of course! But just like with any application of generative AI, we need to be very careful to think about what we’re doing, what our goals are, and whether those two are compatible.

Conversion of labor

Now, this has been a very simplistic example, and eventually we could hook up the whole encyclopedia to that chatbot RAG or something, of course, and try to make it work. In fact, I think we can and should continue developing more ways to chain together AI models to expand the scope of valuable work they can do, including making different specific models for different responsibilities. However, this development is itself work. It’s not really just a matter of “people do work” or “models do work”; instead, it’s “people do work building AI” or “people do work providing services to people.” There’s a calculation to be made to determine when it would be more efficient for people to do the targeted work directly, and when AI is the right way to go.

Working on the AI has an advantage in that it will hopefully render the task reproducible, so it will lead to efficiency, but let’s remember that AI engineering is vastly different from the work of the reference librarian. We’re not interchanging the same workers, tasks, or skill sets here, and in our contemporary economy, the AI engineer’s time costs a heck of a lot more. So if we did want to measure this efficiency all in dollars and cents, the same amount of time spent working at the reference desk and doing the chat service will be much cheaper than paying an AI engineer to develop a better agentic AI for the use case. Given a bit of time, we could calculate out how many hours, days, years of work as a reference librarian we’d need to save with this chatbot to make it worth building, but often that calculation isn’t done before we move towards AI solutions.
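That break-even calculation can be sketched in a few lines. Every number below is a made-up illustrative rate, not real data; the point is the shape of the arithmetic, not the specific figures.

```python
# Back-of-envelope break-even check with hypothetical rates:
# how many librarian-hours must the chatbot save to pay for its build?

LIBRARIAN_HOURLY = 30.0   # assumed fully loaded librarian rate, $/hr
ENGINEER_HOURLY = 150.0   # assumed AI engineer rate, $/hr
BUILD_HOURS = 200         # assumed hours to build and validate the bot

build_cost = ENGINEER_HOURLY * BUILD_HOURS       # $30,000 up front
breakeven_hours = build_cost / LIBRARIAN_HOURLY  # 1,000 librarian-hours

# If the bot saves 5 librarian-hours a week, that's 1000 / 5 = 200 weeks
# (roughly four years) before the build pays off -- before counting
# hosting, maintenance, and per-query model fees.
```

Even with generous assumptions, the payback period is long — which is exactly why this arithmetic is worth doing before committing to the build, not after.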

We need to interrogate the assumption that incorporating generative AI in any given scenario is a guaranteed net gain in efficiency.

Externalities

While we’re on this topic of weighing whether the AI solution is worth doing in a particular situation, we should remember that developing and using AI for tasks does not happen in a vacuum. It has some cost environmentally and economically when we choose to use a generative AI tool, even for a single prompt and a single response. Consider that the newly released GPT-4.5 has increased prices 30x for input tokens ($2.50 per million to $75 per million) and 15x for output tokens ($10 per million to $150 per million) just since GPT-4o. And that isn’t even taking into account the water consumption for cooling data centers (3 bottles per 100-word output for GPT-4), electricity use, and rare earth minerals used in GPUs. Many civic institutions have as a macro-level goal to improve the world around them and the lives of the citizens of their communities, and concern for the environment has to have a place in that. Should organizations whose purpose is to have a positive impact weigh the possibility of incorporating AI more carefully? I think so.
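To see what that price jump means per interaction, here is the arithmetic using the published per-million-token rates quoted above; the example query size (500 tokens in, 300 tokens out) is an arbitrary illustrative choice.

```python
# Published USD rates per million tokens, as cited in the text.
GPT_4O = {"input": 2.50, "output": 10.00}
GPT_45 = {"input": 75.00, "output": 150.00}

def query_cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one request at the given per-million-token rates."""
    return (prices["input"] * input_tokens
            + prices["output"] * output_tokens) / 1_000_000

# One modest exchange: 500 tokens in, 300 tokens out.
old = query_cost(GPT_4O, 500, 300)  # about $0.00425
new = query_cost(GPT_45, 500, 300)  # about $0.0825
```

A fraction of a cent per query sounds trivial until you multiply by an entire patron base asking questions every day — at these rates, the same traffic costs roughly 19x more on the newer model.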

Plus, I don’t often get too much into this, but I think we should take a moment to consider some folks’ end game for incorporating AI — reducing staffing altogether. Instead of making our existing dollars in an institution go farther, some people’s idea is just reducing the number of dollars and redistributing those dollars somewhere else. This brings up many questions, naturally, about where those dollars will go instead and whether they will be used to advance the interests of the community residents some other way, but let’s set that aside for now. My concern is for the people who might lose their jobs under this administrative model.

For-profit companies hire and fire employees all the time, and their priorities and objectives are focused on profit, so this is not particularly hypocritical or inconsistent. But as I noted above, civic organizations have objectives around improving the community or communities in which they exist. In a very real way, they are advancing that goal when part of what they provide is economic opportunity to their workers. We live in a society where working is the overwhelmingly predominant way people provide for themselves and their families, and giving jobs to people in the community and supporting the economic well-being of the community is a role that civic institutions do play.

[R]educing staffing is not an unqualified good for civic organizations and government, but instead must be balanced critically against whatever other use the money that was paying their salaries will go to.

At the bare minimum, this means that reducing staffing is not an unqualified good for civic organizations and government, but instead must be balanced critically against whatever other use the money that was paying their salaries will go to. It’s not impossible for reducing staff to be the right decision, but we have to bluntly acknowledge that when members of communities experience joblessness, that effect cascades. They are now no longer able to patronize the shops and services they would have been supporting with their money, the tax base may be reduced, and this negatively affects the whole collective.

Workers aren’t just workers; they’re also patrons, customers, and participants in all aspects of the community. When we think of civic workers as simply money pits to be replaced with AI or whose cost for labor we need to minimize, we lose sight of the reasons for the work to be done in the first place.

Conclusion

I hope this discussion has brought some clarity about how difficult it really is to decide if, when, and how to apply generative AI to the civic space. It’s not nearly as simple a thought process as it might be in the for-profit sphere, because the purpose and core meaning of civic institutions are completely different. Those of us who do machine learning and build AI solutions in the private sector might think, “Oh, I can see a way to use this in government,” but we have to recognize and appreciate the complex contextual implications that such a change might have.

Next month, I’ll be bringing you a discussion of how social science research is incorporating generative AI, which has some very intriguing aspects.

As you may have heard, Towards Data Science has moved to an independent platform, but I will continue to post my work on my Medium page, my personal website, and the new TDS platform, so you’ll be able to find me wherever you happen to go. Subscribe to my newsletter on Medium if you’d like to ensure you get every article in your inbox.

Find more of my work at www.stephaniekirmer.com.

Further reading

“It’s a lemon”: OpenAI’s largest AI model ever arrives to mixed reviews. GPT-4.5 offers marginal gains in capability and poor coding performance despite 30x the cost. arstechnica.com

Using GPT-4 to generate 100 words consumes up to 3 bottles of water: New research shows generative AI consumes a lot of water – up to 1,408ml to generate 100 words of text. www.tomshardware.com

Environmental Implications of the AI Boom: The digital world can’t exist without the natural resources to run it. What are the costs of the tech we’re using… towardsdatascience.com

Economics of Generative AI: What’s the business model for generative AI, given what we know today about the technology and the market? towardsdatascience.com


Read More »

Shell Commits to Long-Term Purchase from Ruwais LNG

Abu Dhabi National Oil Co PJSC (ADNOC) said Tuesday it has signed a 15-year deal with Shell PLC to supply the British company up to one million metric tons per annum (MMtpa) of liquefied natural gas (LNG) from the Ruwais LNG project in the United Arab Emirates. “Signed during ADIPEC, the deal marks ADNOC’s first long-term LNG sales agreement with Shell and the eighth long-term offtake agreement secured for the Ruwais LNG project”, ADNOC said in a press release. “This SPA [sale and purchase agreement] converts a previous heads of agreement into a definitive agreement and marks a significant step in ADNOC’s efforts to rapidly commercialize the Ruwais LNG project. “With this latest agreement, more than eight MMtpa of the project’s planned 9.6 MMtpa capacity is now secured through long-term deals with customers across Asia and Europe, just 16 months after the project’s final investment decision in July 2024”. Fatema Al Nuaimi, chief executive of ADNOC gas processing and sales arm ADNOC Gas PLC, said, “While the industry can take up to four or five years to market such volumes, Ruwais is advancing at record pace”. “In parallel, construction, contractor mobilization and site works are all on track for commissioning by the end of 2028”, Al Nuaimi added. The export plant in Al Ruwais Industrial City is planned to have two trains, each with a production capacity of 4.8 MMtpa. Targeted to start production in 2028, the facility would more than double ADNOC’s LNG capacity. Shell already holds a 10 percent stake in the project through Shell Overseas Holdings Ltd, ADNOC confirmed Tuesday. Last year ADNOC penned separate agreements farming out a total of 40 percent in Ruwais LNG to Shell, BP PLC, Mitsui & Co Ltd and TotalEnergies SE. Japan’s Mitsui also penned an offtake of 600,000 metric tons a year,

Read More »

Oil Retreats on Strong Greenback

Oil fell, halting a four-session run of gains, pressured by a strong dollar and a backdrop of oversupply. West Texas Intermediate fell 0.8% to settle below $61 a barrel on Tuesday. A global equities rally hit a speed bump amid concerns about lofty valuations while the greenback climbed to the highest in more than five months, weighing on crude and other dollar-denominated commodities. Oil declined because of “the dollar funding stress and the second-order effect on global liquidity and, in turn, global growth,” said Jon Byrne, an analyst at Strategas Securities. The Organization of the Petroleum Exporting Countries and its allies said over the weekend they planned to hold back from lifting production quotas in the first quarter. The decision came as market observers brace for what is expected to be a global crude glut. The US oil benchmark has retreated almost 16% this year as OPEC+ and non-member nations ramped up production. Prices rebounded from five-month lows when the US recently announced sanctions on Rosneft PJSC and Lukoil PJSC, Russia’s two biggest oil companies, but have since surrendered some of those advances. Russian seaborne crude shipments fell sharply in the wake of the sanctions, dropping by the most since January 2024, according to data tracked by Bloomberg. Cargo discharges have been hit even harder than loadings, with oil held in tanker ships surging. Still, some are skeptical the restrictions will stop Russian oil from finding buyers. “Down the line, you will see that more and more of the disrupted Russian oil, one way or another, finds its way to the market,” Torbjörn Törnqvist, chief executive officer of Gunvor Group, said during an interview on Tuesday. “It always does somehow.” Eni SpA CEO Claudio Descalzi said Monday that any concerns about oversupply will be short-lived, the latest comments by an

Read More »

Space: The final frontier for data processing

There are, however, a couple of reasons why data centers in space are being considered. There are plenty of reports about how the increased amount of AI processing is affecting power consumption within data centers; the World Economic Forum has estimated that the power required to handle AI is increasing at a rate of between 26% and 36% annually. Therefore, it is not surprising that organizations are looking at other options. But an even more pressing reason for orbiting data centers is to handle the amount of data that is being produced by existing satellites, Judge said. “Essentially, satellites are gathering a lot more data than can be sent to earth, because downlinks are a bottleneck,” he noted. “With AI capacity in orbit, they could potentially analyze more of this data, extract more useful information, and send insights back to earth. My overall feeling is that any more data processing in space is going to be driven by space processing needs.” And China may already be ahead of the game. Last year, Guoxing Aerospace launched 12 satellites, forming a space-based computing network dubbed the Three-Body Computing Constellation. When completed, it will contain 2,800 satellites, all handling the orchestration and processing of data, taking edge computing to a new dimension.

Read More »

Meta’s $27B Hyperion Campus: A New Blueprint for AI Infrastructure Finance

At the end of October, Meta announced a joint venture with funds managed by Blue Owl Capital to finance, develop, and operate the previously announced “Hyperion” project, a multi-building AI megacampus in Richland Parish, Louisiana. Under the new JV structure, Blue Owl will own 80 percent and Meta 20 percent, though Meta had announced the project long before Blue Owl’s involvement was confirmed. The venture anticipates roughly $27 billion in total development costs for the buildings and the long-lived power, cooling, and connectivity infrastructure. Blue Owl contributed about $7 billion in cash at formation; Meta received a $3 billion one-time distribution and contributed land and construction-in-progress to the vehicle. Rachel Peterson, VP of Data Centers at Meta, noted that construction on the project is already well underway, with thousands of workers on-site.
Structuring Capital and Control
Media coverage from Reuters and others characterizes the financing package as one of the largest private-capital deals ever for a single industrial campus, with debt placements led by PIMCO and additional institutional investors. Meta keeps the project largely off its balance sheet through the joint venture while retaining the development and property-management role and serving as the anchor tenant for the campus. The JV allows Meta to smooth its capital expenditures and manage risk while maintaining execution control over its most ambitious AI site to date. The structure incorporates lease agreements and a residual-value guarantee, according to Kirkland & Ellis (Blue Owl’s counsel), enabling lenders and equity holders to underwrite a very large, long-duration asset with multiple exit paths. For Blue Owl, Hyperion represents a utility-like digital-infrastructure platform with contracted cash flows to a single A-tier counterparty: a hyperscaler running mission-critical AI workloads for training and inference.
As Barron’s and MarketWatch have noted, the deal underscores Wall Street’s ongoing appetite for AI-infrastructure investments at

Read More »

ZincFive targets AI data centers with new energy system

The system is engineered to absorb sharp transient loads from GPU clusters and AI training environments, while also providing reliable runtime support for conventional IT operations. By managing dynamic power at the UPS level, it reduces strain on upstream infrastructure, lowers capital expenditures (CAPEX), and improves grid interactions, according to ZincFive. “With BC 2 AI, we are delivering a safe, sustainable, and future-ready power solution designed to handle the most demanding AI workloads while continuing to support traditional IT backup. This is a defining moment not just for ZincFive, but for the entire data center industry as it adapts to the AI era,” Tod Higinbotham, CEO of ZincFive, said in a statement. Another benefit is its smaller design. Competing solutions can require two to four times more space to meet AI’s power surges, which can be up to 150% of UPS rated capacity. With BC 2 AI’s minimal footprint expansion, power can be handled more efficiently, ZincFive stated.

Read More »

Cisco centralizes customer experience around AI

The idea is to make sure enterprises are effectively choosing, implementing, and using the technologies they purchase to achieve their business goals, according to the company. Cisco CX offers a suite of services to help customers optimize their network infrastructure, security, collaboration, cloud and data center operations – from planning and design to implementation and maintenance. “For too long, the delivery of services has been fragmented, with support and professional services using different tools optimized for specific functions or lifecycle stages. This has led to a fragmented experience where customers, partners, and Cisco teams spend more time on data collection and tool maintenance than on high-value analysis,” wrote Bhaskar Jayakrishnan, senior vice president of engineering with the Cisco CX group, in a blog about the new technology. “Historically, the handoffs between these stages have been inefficient. Designs are interpreted by humans and then converted into code. Operational data is manually analyzed to inform optimizations. This process is slow, error-prone, and loses critical context at every step.” “Cisco IQ represents a shift from this tool-centric model to an intelligence-centric one. It is a multi-persona system, serving customers, partners, and our own services teams through an API-first architecture. Our objective is to turn decades of institutional knowledge into a living, adaptive system that makes your infrastructure smarter, more resilient, and more secure,” Jayakrishnan wrote.

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for Data Center Candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist.
Data Center Facility Technician (All Shifts Available) – Impact, TX
This position is also available in: Ashburn, VA; Abilene, TX; Needham, MA and New York, NY. Navy Nuke / Military Vets leaving service accepted! This opportunity is working with a leading mission-critical data center provider. This firm provides data center solutions custom-fit to the requirements of their clients’ mission-critical operational facilities. They provide reliability of mission-critical facilities for many of the world’s largest organizations, with facilities supporting enterprise clients, colo providers and hyperscale companies. This opportunity provides a career-growth minded role with exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits.
Electrical Commissioning Engineer – Montvale, NJ
This traveling position is also available in: New York, NY; White Plains, NY; Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; New Albany, OH; Cedar Rapids, IA; Phoenix, AZ; Dallas, TX or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company that has a national footprint and specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space. They have a mindset to provide reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies.
This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits. Data Center MEP Construction

Read More »

NVIDIA at GTC 2025: Building the AI Infrastructure of Everything

Omniverse DSX Blueprint Unveiled
Also at the conference, NVIDIA released a blueprint for how other firms should build massive, gigascale AI data centers, or AI factories, in which Oracle, Microsoft, Google, and other leading tech firms are investing billions. The most powerful and efficient of those, company representatives said, will include NVIDIA chips and software. A new NVIDIA AI Factory Research Center in Virginia will use that technology. This new “mega” Omniverse DSX Blueprint is a comprehensive, open blueprint for designing and operating gigawatt-scale AI factories. It combines design, simulation, and operations across factory facilities, hardware, and software.
• The blueprint expands to include libraries for building factory-scale digital twins, with Siemens’ Digital Twin software first to support the blueprint and FANUC and Foxconn Fii first to connect their robot models.
• Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, Taiwan Semiconductor Manufacturing Co. (TSMC), and Wistron build Omniverse factory digital twins to accelerate AI-driven manufacturing.
• Agility Robotics, Amazon Robotics, Figure, and Skild AI build a collaborative robot workforce using NVIDIA’s three-computer architecture.
NVIDIA Quantum Gains
And then there’s quantum computing. It can help data centers become more energy-efficient and faster with specific tasks such as optimization and AI model training. Conversely, the unique infrastructure needs of quantum computers, such as power, cooling, and error correction, are driving the development of specialized quantum data centers. Huang said it’s now possible to make one logical qubit, or quantum bit, that’s coherent, stable, and error corrected. However, these qubits—the units of information enabling quantum computers to process information in ways ordinary computers can’t—are “incredibly fragile,” creating a need for powerful technology to do quantum error correction and infer the qubit’s state.
To connect quantum and GPU computing, Huang announced the release of NVIDIA NVQLink — a quantum‑GPU interconnect that enables real‑time CUDA‑Q calls from quantum

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular presence as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.
1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.
Going all-in on red teaming pays practical, competitive dividends
It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »