Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy


Featured Articles

Baker Hughes Bags Data Center Gas Turbine Deal

In a release sent to Rigzone on Thursday, Baker Hughes announced an award from Frontier Infrastructure Holdings for 16 NovaLT gas turbines to power its data center projects in Wyoming and Texas. Baker Hughes noted in the release that, as part of the award, it is supplying Frontier with its NovaLT gas turbine technology and associated equipment, including gears and Brush Power Generation four-pole generators, to power dedicated energy islands at Frontier’s behind-the-meter (BTM) power generation sites. The NovaLT gas turbine is a multi-fuel solution that can start up and run on different fuels, including natural gas, various blends of natural gas and hydrogen, and 100 percent hydrogen, Baker Hughes stated in the release. “This award underscores our commitment to advancing sustainable energy development through reliable and efficient power solutions that cater to the diverse needs of the industry,” Ganesh Ramaswamy, Executive Vice President of Industrial and Energy Technology at Baker Hughes, said in the release. “Leveraging our comprehensive range of integrated power solutions for Frontier’s U.S. data center projects demonstrates innovative, scalable, and lower-carbon technologies helping to meet the growing demand for power,” Ramaswamy added. In a release posted on its site back in March, Baker Hughes announced a strategic partnership between the company and Frontier “to accelerate the deployment of large-scale carbon capture and storage (CCS) and power solutions in the United States”. Baker Hughes noted in that release that, as part of the agreement, it “will provide innovative technologies and resources in support of the development of large-scale CCS, power generation, and data center projects”. Lorenzo Simonelli, chairman and CEO of Baker Hughes, said in that release, “Baker Hughes is committed to delivering innovative solutions that support increasing energy demand, in part driven by the rapid adoption of AI, while ensuring we continue to enable the decarbonization of the industry”.

Read More »

Eni Eyes Biofuel Feedstock Production in Ivory Coast

Eni SpA has signed an agreement with Côte d’Ivoire’s Agriculture Ministry to explore the potential of cultivating biofuel crops in the West African country. The memorandum of understanding “aims to enhance the rubber (hevea) supply chain and to assess the introduction of oilseed crops on marginal and degraded lands, thereby contributing to the country’s sustainable agricultural development without competing with food production and forest ecosystem”, the Italian state-backed energy major said in an online statement. Eni said an existing project in collaboration with the Ivorian Federation of Rubber Producers is already “enabling the valorization of rubber residues – a crop widely cultivated in the country – by transforming them into raw materials for biofuel production, generating economic and social benefits for thousands of farmers”. Last year Eni expanded its hydrocarbon-focused presence in Ivory Coast, which it entered in 2015, to also pursue biorefining opportunities through the new company Eni Natural Energies Côte d’Ivoire. The new company is “dedicated to developing sustainable supply chains of agricultural raw materials for the company’s biorefineries”, Eni said. “The initiative is part of Eni’s strategy for sustainable mobility and its broader commitment to supporting fair and inclusive growth in line with the objectives of Côte d’Ivoire’s National Development Plan”.

Biorefining Expansion

Eni, through subsidiary Enilive, has a biorefining production capacity of 1.65 million metric tons per annum (MMtpa), according to a statement by Eni on March 27. Eni aims to raise this to over five MMtpa by 2030. It also aims to enable one MMtpa of sustainable aviation fuel production by next year and potentially double that level by the end of the decade. Last year Eni announced an organizational restructuring for Enilive, involving KKR & Co. Inc., to bring in new capital. In the first quarter of 2025 the United States investor completed the purchase of a 25 percent

Read More »

Egypt Considers Securing Another LNG Vessel as Import Needs Jump

Egypt is considering adding yet another LNG import vessel, according to people familiar with the plan, as the nation that was exporting gas just a year ago is now rushing to lock in supplies to cover domestic demand. A new vessel would add to the Energos Power ship that arrived in the North African country’s Alexandria port earlier this week and the Hoegh Galleon operating in Ain Sokhna. Two others – Energos Eskimo arriving this summer and another from Turkish company Botas – have also been tied up. Egypt’s oil ministry didn’t immediately reply to a request for comment on the additional vessel. The country has moved to rapidly lease import terminals, known as floating storage and regasification units, over the past 12 months as overseas purchases surged amid declining local gas output and rising demand. It is in talks with companies including Saudi Aramco, Trafigura Group and Vitol Group for LNG supplies until 2028, putting it on course to be a long-term importer and helping tighten global gas markets. Egypt is also expected to replace the Hoegh Galleon vessel with the Hoegh Gandria in the fourth quarter of 2026. The FSRUs that have been secured are expected to be installed at or near the existing LNG import facility in Ain Sokhna. Work is also underway for import infrastructure near Alexandria on the Mediterranean Sea, according to the people, who asked not to be identified discussing ongoing talks. Exact timing and locations of the leased FSRUs could be subject to change, as well as details on where a new import vessel could be added, the people said.

Read More »

OMV to Build Major Green Hydrogen Plant in Lower Austria

OMV AG has made a final investment decision to proceed with the construction of an electrolysis facility in Bruck an der Leitha, Lower Austria. The 140-MW electrolyzer – a facility that splits water molecules into hydrogen and oxygen through electricity – is planned to produce up to 23,000 metric tons a year of green hydrogen. Expected to start production in 2027, the project will use wind, solar and hydro power. It would be the biggest European electrolytic facility to produce renewable hydrogen, OMV said. Hydrogen produced through electrolysis that runs on renewable power is called green or renewable. On June 30 OMV announced the start of production at its first commercial-scale green hydrogen facility, built with a capacity of 1,500 metric tons per annum at its Schwechat refinery near Vienna. The plant uses a 10-MW PEM (polymer electrolyte membrane) electrolyzer powered by hydro, solar and wind energy. The process avoids up to 15,000 metric tons of carbon dioxide (CO2) emissions a year, equivalent to the annual CO2 emissions of 2,000 people based on a European Union average, according to OMV. Output from the newly inaugurated facility will be used to decarbonize the refinery and produce more sustainable fuels and chemicals including sustainable aviation fuel and renewable diesel. Martijn van Koten, OMV executive vice president for fuels and chemicals, said of the incoming project, “With this project, we are re-inventing the production of everyday essential fuels and chemical products – a groundbreaking step that demonstrates how industrial innovation and sustainability can go hand in hand”. “By using green hydrogen in the future, we are making the processes and production of fuels and chemical products more sustainable and are future-proofing our industry. “Our planned 140 MW electrolysis plant in Bruck an der Leitha will meet a significant share of the hydrogen demand at the OMV
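The scale-up from Schwechat to Bruck an der Leitha tracks roughly linearly with capacity. A minimal back-of-the-envelope check in Python, using only the figures reported above (the assumption that output scales linearly with installed capacity at similar utilization is ours, not OMV's):

```python
# Consistency check on OMV's reported electrolyzer figures.
# Assumes hydrogen output scales roughly linearly with installed
# capacity at similar utilization -- an illustrative simplification.

schwechat_mw = 10          # Schwechat PEM electrolyzer capacity (MW)
schwechat_tpa = 1_500      # reported output, metric tons of H2 per year

bruck_mw = 140             # planned Bruck an der Leitha capacity (MW)
bruck_tpa_reported = 23_000

tons_per_mw = schwechat_tpa / schwechat_mw      # 150 t/yr per MW
bruck_tpa_implied = tons_per_mw * bruck_mw      # 21,000 t/yr

print(f"Implied output at {bruck_mw} MW: {bruck_tpa_implied:,.0f} t/yr "
      f"(reported: up to {bruck_tpa_reported:,} t/yr)")
```

The implied 21,000 t/yr sits just under the reported "up to 23,000", consistent with the larger plant running at a somewhat higher utilization or efficiency.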

Read More »

OKEA Discovers More Oil in Brage Field in Norwegian North Sea

OKEA ASA and its partners in production license 055 have made a discovery that is estimated to hold 300,000 to 2.8 million barrels of recoverable oil equivalent along the eastern flank of the already producing Brage field on Norway’s side of the North Sea. The discovery was made in the southern part of the Prince prospect in wildcat well 31/4-A-23 G. Well 31/4-A-23 F, in the northern part of the Prince prospect, turned up dry. “The licensees will now assess the deposit as part of the further development of the Brage field”, the Norwegian Offshore Directorate said in an online statement. The stakeholders are OKEA with a 35.2 percent stake, Lime Petroleum AS with 33.84 percent, DNO Norge AS with 14.26 percent, Petrolia NOCO AS with 12.26 percent and M Vest Energy AS with 4.44 percent. “The field has been in production for a long time, and work is under way to identify new methods to improve recovery”, the upstream regulator said. “New wells are being drilled, often combined with investigation of nearby prospects”. Well A-23 F aimed to prove petroleum in Upper Jurassic reservoir rocks in the Sognefjord Formation, while A-23 G aimed to delineate a potential discovery in A-23 F and delineate the northern part of 31/4-A-13 E (Kim). A-23 F, horizontally drilled, showed a sandstone layer in the Sognefjord Formation with a total measured thickness of 220 meters (721.78 feet) along the wellbore and 12 meters of vertical thickness with “good reservoir properties”, the Directorate reported. It was drilled to a measured depth of 6,285 meters and a vertical depth of 2,153 meters below sea level in the Sognefjord Formation. A-23 G was drilled horizontally at a vertical depth of 2,120-2,171 meters along the eastern flank of the Brage field. It encountered a sandstone layer three to four meters thick

Read More »

Eni to Develop Three PV Plants for Marelli

Eni S.p.A.’s renewables arm, Plenitude, has signed an agreement with Marelli Holdings to build three photovoltaic plants and an Energy Community. Eni said in a media release that the facilities will be located at Marelli’s production sites in Melfi (Potenza), Sulmona (L’Aquila), and Turin, with a total capacity of 5.4 megawatts-peak (MWp). The projects will be carried out under an EPC (Energy Performance Contract) model, allowing Marelli to obtain renewable energy at a fixed cost without any initial investment, Eni said. At the Melfi site, Plenitude has designed an Energy Community for Marelli under the Individual Remote Self-Consumption (AID) configuration. A photovoltaic park with a capacity of 999 kWp will be installed on Marelli’s land, allowing energy sharing with a neighboring company. The plant will benefit from 20-year state incentives allocated to support local social initiatives, Eni said. Plenitude is promoting Energy Communities to support the transition to a more sustainable and participatory energy system, allowing producers and consumers to share renewable energy. “We are excited to announce our collaboration with Marelli, a global leader in the automotive sector, and to support them in the challenge of the energy transition with solutions based on a renewable energy-sharing model in which we firmly believe”, Vincenzo Viganò, Head of Retail for the Italian Market at Plenitude, said. Eni said Plenitude will assist Marelli throughout every stage of the project, from the planning and building of the facilities to the application for incentives. It will also offer its technological platform, “Plenitude Comunità Energetiche,” which will facilitate the management and oversight of the AID configuration. Meanwhile, at the production sites in Sulmona and Turin, the photovoltaic plants will have an installed capacity of 4 MWp and 400 kWp, respectively, contributing to potential energy cost savings for these sites, Eni said.

Read More »

IRA tax credits spur construction, manufacturing in red and blue states

Emmanuel Martin-Lauzer is director of business development and public affairs at Nexans. The jury is still out on whether the Inflation Reduction Act (IRA) has helped contain or reduce inflation. Nevertheless, certain provisions have delivered tangible benefits that deserve closer examination before any potential repeal. While some provisions may not have broad appeal, one success of the IRA has been its impact on strengthening U.S. energy production. The bill speaks more to renewable energy innovation and increased energy independence in support of U.S. economic growth than to direct economic impact. Repealing it wholesale risks far more than we might anticipate. At its core, the IRA’s tax credits for energy generation are driving significant investment in innovative energy production. Because renewable energy makes up around 21.4% of the energy mix, these incentives have been passed down the chain to the benefit of ratepayers, while simultaneously sustaining the creation of entire industries. These investments have sparked construction and manufacturing jobs across both red and blue states, proving that clean energy isn’t just an environmental initiative. These tax credits have also bolstered America’s energy independence. Renewables like solar, onshore wind and offshore wind are integral to our domestic energy supply chain, reducing reliance on foreign sources and making our own infrastructure more resilient. They’ve also driven initiatives to improve long-term cost competitiveness, incentivizing developers to innovate to reduce costs. Our current grid infrastructure and energy generation systems are nearing obsolescence, and over the next decade the demand on these systems is expected to skyrocket. Data centers alone are expected to double their electricity demand, and by 2035, more than 71 million electric vehicles will each require around 400 kWh of charging per month. Urbanization trends are compounding this demand as more people move to cities. Without the IRA tax credits, we risk slowing down our
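The EV figure alone implies a substantial new load. A quick back-of-the-envelope calculation in Python, using only the numbers quoted above (the annualization and the US-consumption comparison are our own framing):

```python
# Aggregate EV charging demand implied by the figures above:
# 71 million EVs, each needing around 400 kWh of charging per month.

evs = 71_000_000
kwh_per_ev_per_month = 400

annual_twh = evs * kwh_per_ev_per_month * 12 / 1e9   # 1 TWh = 1e9 kWh
print(f"Implied EV charging demand by 2035: ~{annual_twh:,.0f} TWh/yr")

# ~341 TWh/yr -- roughly 8% of current US electricity consumption,
# assuming a ~4,200 TWh/yr total (an outside figure, for context only).
```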

Read More »

FERC ALJ order threatens competitive transmission cost caps: CAISO

An order by a Federal Energy Regulatory Commission administrative law judge threatens cost caps included in competitive transmission solicitations across the United States, according to the California Independent System Operator. A May 22 ruling by FERC ALJ Joel deJesus could also upend FERC’s framework for providing refunds to electricity customers when the agency finds a company has been overcollecting revenue, CAISO said in a filing with the commission on Tuesday. The California grid operator urged FERC to overturn deJesus’ findings, saying they “will harm ratepayers, undercut the consumer protections afforded by the Federal Power Act …, and cast doubt on the CAISO’s and customers’ ability to rely on voluntary, binding cost caps proposed and agreed to by project sponsors in competitive transmission planning processes.” The issue centers on a dispute over a proposal by a Lotus Infrastructure Partners affiliate to recover more than double a cost cap for the 500-kV Ten West Link transmission project between California and Arizona. CAISO selected the DCR Transmission project in 2014 following a solicitation that grew out of its transmission planning process. The transmission line started operating a year ago. DCR in June 2023 asked FERC to approve a transmission tariff based on a $553.3 million estimated project cost compared to a $259 million binding cost cap. Three months later, FERC accepted DCR’s proposal, subject to refund, but ordered hearings and settlement procedures, according to CAISO. The proceeding was moving under the Federal Power Act’s section 205, according to CAISO. However, deJesus said FERC’s initial order was “ambiguous” as to what FPA section the case should advance under. He contends FERC should have determined that the DCR rate filing was an “initial rate filing” to be handled under section 206 of the FPA and that FERC should have established a refund date under that

Read More »

Transformer, breaker backlogs persist, despite reshoring progress

In July, U.S. steelmaker Cleveland-Cliffs revealed plans to build an electrical distribution transformer plant on an idled industrial site in Weirton, West Virginia. The proposed project, which Cliffs said could eventually employ 600 union workers recently laid off from a neighboring facility, was a big enough deal to attract the state’s Republican governor to a press conference onsite. Less than a year later, Cliffs abandoned the project as part of a wider shift away from what CEO Lourenco Goncalves called “non-core markets” on a May 7 investor call. The decision highlighted ongoing challenges for a commercial electrical equipment supply chain that has yet to fully recover from the COVID-19 pandemic. On average, customers today wait three years for high-voltage transformers and one year for distribution transformers, said Adrienne Lotto, senior vice president of grid security, technical and operations services at the American Public Power Association. About 80% of the former and 40% to 50% of the latter are imported, according to Benjamin Boucher, a Wood Mackenzie senior data analyst focused on electrical supply chains. Meanwhile, the National Electrical Manufacturers Association predicts 2% annual electricity demand growth through 2050. Given load growth expectations, fully reshoring transformer production could take years and cost many billions of dollars — if it’s even feasible, Boucher said. But after years of caution, the industry is giving it a go. Not long before Cliffs announced its now-abandoned West Virginia project, the German industrial giant Siemens said it would spend $150 million to build its first high-voltage U.S. transformer factory in North Carolina, where Pennsylvania-based PTT plans a $103 million expansion of an existing transformer manufacturing facility. Last month, MGM Transformers and VanTran Transformers opened a 430,000-square-foot plant in central Texas. These and other capacity-boosting projects could help ease electrical equipment backlogs as U.S. utilities, data

Read More »

Bureau of Land Management Leases 3 Parcels in New Mexico

In a release posted on its site recently, the U.S. Bureau of Land Management announced that its New Mexico State Office leased three parcels, totaling 1,262 acres, for $576,982 in total receipts for its quarterly oil and gas lease sale. The Bureau said combined bonus bids and rentals from the leases will be distributed between the federal government and the State of New Mexico. It went on to note in the release that oil and gas leases are awarded “for a term of 10 years and as long thereafter as there is production of oil and gas in paying quantities”. The federal government receives a royalty of 16.67 percent of the value of production, the Bureau highlighted in the release. “Oil and gas lease sales support domestic energy production and American energy independence, while contributing to the nation’s economic and military security,” the Bureau stated in the release. “Consistent with Executive Order 14154, ‘Unleashing American Energy’, the BLM’s lease sales help meet the energy needs of U.S. citizens and solidify the nation as a global energy leader long into the future,” it added. “Leasing is the first step in the process to develop federal oil and gas resources. The BLM ensures oil and gas development meets the requirements set forth by the National Environmental Policy Act of 1969 and other applicable legal authorities,” it continued. In a release posted on its site on April 29, the Bureau announced that its Montana-Dakotas State Office leased 11 parcels, totaling 4,266.06 acres, for $3,413,797 in total receipts for its quarterly oil and gas lease sale. The combined bonus bids and rentals from the leases will be distributed between the federal government and the States of Montana and North Dakota, the Bureau noted in that release. In a statement posted on its site on March
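The per-acre economics of the two sales are easy to pull out of the reported totals. A small Python sketch (arithmetic only, using the receipts and acreages quoted above):

```python
# Per-acre receipts implied by the two BLM lease sales cited above.

sales = {
    "New Mexico (3 parcels)": (576_982, 1_262),          # ($ receipts, acres)
    "Montana-Dakotas (11 parcels)": (3_413_797, 4_266.06),
}

for name, (receipts, acres) in sales.items():
    print(f"{name}: ${receipts / acres:,.0f} per acre")

# Note the bonus/rental receipts are separate from the production
# royalty, which the release puts at 16.67 percent of production value.
```

That works out to roughly $457 per acre in the New Mexico sale versus about $800 per acre in the Montana-Dakotas sale.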

Read More »

Glenfarne Taps Worley for Final Engineering Works on Alaska LNG

Glenfarne Group LLC has selected Worley Ltd. to complete engineering works and update the cost estimate for the Alaska LNG project in preparation for a final investment decision (FID). Worley’s work “has commenced and will utilize and supplement the extensive package of previously completed engineering work and update the cost of the pipeline”, a joint statement said. “Worley has also been selected as the preferred engineering firm for the Cook Inlet Gateway LNG import terminal and project delivery advisor to Glenfarne across the Alaska LNG projects”. Concurrently, New York City-based energy investor Glenfarne launched a process to select partners to deliver the project. An FID is expected by year-end. Alaska LNG, approved by the Federal Energy Regulatory Commission in May 2020, will deliver natural gas from the state’s North Slope to both domestic and global markets. It is the only federally permitted liquefied natural gas (LNG) project on the United States Pacific Coast, according to co-developer Alaska Gasline Development Corp. (AGDC). Alaska LNG has three subprojects: an LNG export terminal with a capacity of 20 million metric tons per annum (MMtpa), an 807-mile 42-inch pipeline and a carbon capture plant with a storage capacity of 7 MMtpa. “Phase One will deliver natural gas approximately 765 miles from the North Slope to the Anchorage region. Phase Two adds compression equipment and approximately 42 miles of pipeline under Cook Inlet to the Alaska LNG Export Facility in Nikiski and will be constructed concurrently with the LNG export facility”, Glenfarne said. “Glenfarne is pushing Alaska LNG forward with expediency, engaging prospective strategic partners”, said Brendan Duval, chief executive and founder of Glenfarne. “We are particularly proud to be expanding our relationship with Worley to Alaska LNG from our existing partnership on the Texas LNG project. Worley is one of the world’s largest and most

Read More »

California could nearly double generation capacity using surplus interconnection: Berkeley report

Dive Brief:

California could accelerate the deployment of clean energy and save billions of dollars by adding more generation and storage at underutilized interconnections for existing power plants, according to a working paper by researchers at the University of California, Berkeley. The Federal Energy Regulatory Commission opened the door for new sources to use surplus interconnection with Order 845 in 2018. Several experts said the research was promising, but the California Independent System Operator said the report likely “significantly overstates” commercial interest and feasibility.

Dive Insight:

Like many states, California is facing a congested interconnection queue, costly transmission upgrades and rising retail electricity rates. The Berkeley paper, which has not been peer-reviewed, claims to have analyzed hundreds of existing renewable and thermal plants in California and identified potential for 53 GW of additional clean energy capacity, including wind and solar, as well as 23 GW of storage, through surplus interconnection. In total, it says that adding 76 GW of clean energy capacity could nearly double the state’s installed generation capacity, which was 89 GW in 2024, according to the California Energy Commission. The report was published earlier this month along with an interactive map of surplus interconnection. The paper concludes that, in general, both renewable and fossil fuel generators underutilize their interconnections, but this is especially true of gas-powered peaker plants. Overall, the paper says about 16 GW of fossil fuel capacity is operating only 15% of the time or less. “So for 85% of the time, the connection where that gas plant is sending electricity to the grid is essentially idle,” said Umed Paliwal, one of the report’s authors and a senior scientist at Berkeley’s Goldman School of Public Policy. “What if you could add solar and wind near these underutilized interconnections? … This is a very fast way
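The paper's headline numbers reduce to simple interconnection-headroom accounting. A hedged sketch of that arithmetic in Python (figures as quoted above; treating the capacities as simply additive is our simplification and glosses over siting and deliverability details):

```python
# Headline arithmetic from the Berkeley surplus-interconnection paper,
# as summarized above. Capacities are treated as simply additive.

surplus_clean_gw = 53     # potential added wind/solar at existing ties
surplus_storage_gw = 23   # potential added storage
installed_2024_gw = 89    # CEC figure for installed capacity in 2024

added_gw = surplus_clean_gw + surplus_storage_gw   # 76 GW
print(f"Added: {added_gw} GW -> {added_gw / installed_2024_gw:.0%} "
      f"of the 2024 fleet (i.e., nearly double)")

# The peaker point: a plant running at most 15% of the time leaves
# its grid connection idle at least 85% of the time.
capacity_factor = 0.15
print(f"Idle interconnection time: {1 - capacity_factor:.0%}")
```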

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene. I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages. It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest). People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see. 

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate. Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know? In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good. Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed. And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first. But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad. 
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing. For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
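The distinction drawn here maps onto the familiar retrieval-augmented generation (RAG) pattern: a featured snippet quotes a retrieved document verbatim, so the source is auditable, while an AI Overview hands retrieved documents to a language model that writes new text each time. A minimal sketch of that contrast in Python (the corpus, `retrieve`, and stub "model" below are hypothetical stand-ins, not Google's pipeline):

```python
# Illustrative contrast: snippet-style answers quote a source verbatim;
# overview-style answers are generated fresh from retrieved context.
# Everything here is a toy stand-in, not Google's implementation.

CORPUS = {
    "doc1": "The cafe on Main St is open 8am-4pm daily.",
    "doc2": "Main St hosts a farmers market on Saturdays.",
}

def retrieve(query: str) -> list[str]:
    """Toy keyword retrieval over the in-memory corpus."""
    words = query.lower().split()
    return [t for t in CORPUS.values() if any(w in t.lower() for w in words)]

def featured_snippet(query: str) -> str:
    # Verbatim quote of one retrieved document: traceable and fixable.
    docs = retrieve(query)
    return docs[0] if docs else "No result."

def ai_overview(query: str, compose) -> str:
    # Retrieved documents go to a language model, which generates new
    # text: fluent, potentially different every run, harder to audit.
    return compose(query, retrieve(query))

# Stand-in "model" that naively stitches sources together.
stub_llm = lambda q, docs: f"Answer to '{q}': " + " ".join(docs)

print(featured_snippet("cafe main st hours"))
print(ai_overview("cafe main st hours", stub_llm))
```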
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.) “[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.” That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language. 
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai. There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous. In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from. Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak. 
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.” There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.” But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way. 
What reason will people have to click through to the original source, if all the information they seek is right there in the search result? Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says. Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.” “I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew? A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting. 
Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer at OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too.

In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.

“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”

Indeed!

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.

Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.
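For readers who want the mechanics: every product described above follows the same broad retrieve-then-generate loop: decide whether a query needs fresh information, fetch web results if it does, then have the model compose an answer grounded in (and linking to) those sources. The sketch below illustrates only that general pattern; the helper names, the freshness check, and the prompt format are hypothetical placeholders, not Google’s, OpenAI’s, or Perplexity’s actual implementation.

```python
# A minimal, illustrative retrieve-then-generate loop (all names hypothetical).

def needs_fresh_info(query: str, model) -> bool:
    """Let the model judge whether its training data suffices for this query."""
    verdict = model.generate(f"Does answering '{query}' need current web data? Reply yes or no.")
    return verdict.strip().lower().startswith("yes")

def answer(query: str, model, search_engine, k: int = 5) -> str:
    sources = []
    if needs_fresh_info(query, model):
        # Each result is assumed to be a (title, url, snippet) tuple.
        sources = search_engine.search(query, top_k=k)
    context = "\n".join(f"[{i + 1}] {title}: {snippet}"
                        for i, (title, url, snippet) in enumerate(sources))
    prompt = (f"Answer the question, citing numbered sources like [1] when provided.\n\n"
              f"Sources:\n{context}\n\nQuestion: {query}")
    reply = model.generate(prompt)
    links = "\n".join(f"[{i + 1}] {url}" for i, (title, url, snippet) in enumerate(sources))
    return reply + ("\n\n" + links if sources else "")
```

The real product differences (Google’s Knowledge Graph, Perplexity’s long-form syntheses, ChatGPT’s multi-turn context) live in how the search backend and the prompt are built, not in this outer loop.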

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”.

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project site is located offshore in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.) But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident in which a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither had responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217 million profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Akool Live Camera can translate video calls in real time, swap faces, and get live virtual avatars to mimic human movements

Akool Live Camera uses AI to capture human movement and mimic that movement with a generated virtual avatar in real time.

Akool can also translate speech in real time during a virtual meeting and provide instant face swapping during a call. The AI technology listens to conversations in one language and instantly translates them into the selected target language, providing real-time, synchronized audio that matches the avatar’s lip movements and facial expressions.
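Akool hasn’t published its implementation, but real-time translation features like the one described above are conventionally built as a streaming pipeline: speech recognition, machine translation, voice synthesis, and lip-sync animation applied to short audio chunks. Here is a minimal sketch of that general shape; every class and method name is a hypothetical placeholder, not Akool’s actual API.

```python
# Illustrative streaming-translation pipeline (hypothetical components throughout).

class LiveTranslationPipeline:
    def __init__(self, asr, translator, tts, lipsync, target_lang="es"):
        self.asr = asr                # speech-to-text model
        self.translator = translator  # text-to-text translation model
        self.tts = tts                # voice-preserving speech synthesizer
        self.lipsync = lipsync        # drives avatar mouth/face from audio
        self.target_lang = target_lang

    def process_chunk(self, audio_chunk):
        """Translate one short chunk of call audio and animate the avatar.

        Working chunk-by-chunk (e.g., a few hundred milliseconds at a time)
        is what keeps end-to-end latency low enough for a live call.
        """
        text = self.asr.transcribe(audio_chunk)
        translated = self.translator.translate(text, target=self.target_lang)
        dubbed_audio = self.tts.synthesize(translated)
        avatar_frames = self.lipsync.animate(dubbed_audio)
        return dubbed_audio, avatar_frames
```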

This video generation technology owes its smarts to AI from Akool, a startup based in Palo Alto, California, said Jiajun “Jeff” Lu, CEO of Akool, in an interview with GamesBeat.

“Our main motivation is to enhance the real-time experience and live experiences. For example, you can use avatars to join meetings, you can use video translation to do real-time meeting translations, and lots of other things,” Lu said. “We want to make it so you can’t tell the avatar from the real person.”

The company also offers lip-syncing for avatars in real time, where the avatar lip movements can match the words being spoken by a person in real time, Lu said.

This Akool Live Camera tool is a part of the Akool Live Suite, a first-of-its-kind collection of products that features live, real-time video generation with minimal delay. The suite includes live avatars, live face swap, video translation, and real-time video generation.

“The products we offer are live AI avatars, video translation, face swap and image to video generation, and so on,” Lu said. “We definitely are very competitive in the landscape in terms of human centered videos and things that we do are now available to be in real time.”

It delivers the kind of hyper-realistic visuals you’d expect from OpenAI’s video generation model Sora, but created instantly and in real time, Lu said.

The implications of Akool Live Camera are pretty powerful. For the first time, a sales rep can present in perfect, lip-synced Spanish while speaking only English. A CEO can address global teams as a hyper-realistic digital avatar. A Twitch streamer can broadcast as an anime character without expensive motion-capture gear. And it all happens live with sub-100-millisecond latency across platforms like Zoom, Microsoft Teams, and Google Meet.

“Akool Live Camera sets a new standard in AI-powered video generation technology, going well beyond scripted, text-based prompts,” said Lu. “This opens up a new array of possibilities for virtual meetings and live streams, especially when connecting with international audiences.”

A new paradigm for live AI-powered video generation

Jiajun “Jeff” Lu is CEO of Akool.

Akool Live Camera isn’t merely another video generator. It’s an interactive engine that simulates human presence dynamically, analyzing live audio/visual inputs to generate responsive avatars with expressions and contextual awareness.

Akool Live Camera thrives in unscripted environments, such as live streams, virtual meetings, and augmented reality gaming, where minimal latency makes synthetic humans indistinguishable from reality. At least that’s the goal, said Lu.

The breakthrough lies in the technology’s ability to synthesize human interactions without preprocessing. Akool Live Camera’s edge-computing architecture processes live feeds instantaneously, allowing the avatars to adjust emotion, gestures, and speech cadence based on real-time audience analytics—a feat akin to an AI director improvising a film during live production.

Key features of Akool Live Camera, all available in real time, include:

● AI Avatars: Seamless, photorealistic avatars that mirror a speaker’s expressions, gestures, and tone—reacting dynamically to audience cues in real time.
● Video Translation: Instantly translates spoken language while preserving voice identity and syncing lip movements—enabling lifelike, multilingual communication during live events.
● Live Face Swap: Swaps faces in real time with precision and emotion retention, allowing speakers to represent different identities while maintaining authentic performance. The company worked on applications with Coca-Cola and Qatar Airways.
● AI Video Generation: Creates unscripted, hyper-realistic video on the fly—no pre-recording, scripting, or post-production needed. Content is generated live, based on context, tone, and audience interaction.

Key capabilities of Akool Live Camera include:

● Unmatched live interaction: Live face swap, avatar streaming and multilingual translation during calls/streams outpace other pre-recorded solutions.
● Real-time multilingual translation: Break language barriers with synchronized voice translations that maintain the nuances of your original speech.
● Dynamic expression and gesture mapping: Ensure your avatar reflects your real-time emotions and movements for authentic engagement.
● Cross-platform versatility: Smooth and easy integration with Zoom, Microsoft Teams, Google Meet and more.
● Privacy-forward design: Professional avatars protect user identities in sensitive meetings, with local facial data processing for added security.
● Market- and audience-specific customization: Leverage anime, retro or business-centric avatars with robust outfit/persona swapping.

Lu said Akool Live Camera fundamentally changes live video creation: it is no longer limited to text prompts alone. The combination of Akool’s AI and intuitive design empowers creators, educators and enterprises to connect more authentically and efficiently than ever.

Slated for general availability in late 2025, Akool Live Camera is set to transform global communication through real-time, AI-powered interactions. Currently in beta and available to a select group of early adopters, the platform offers an exclusive glimpse into the future of live video.

You can secure early access today at https://akool.com/live-camera and be among the first to experience the next era of live AI video generation.

Origins

Founded in 2022, Akool has grown rapidly and invoiced tens of millions of dollars. Its product lineup includes video translation, real-time streaming avatars, studio-quality face swap, talking avatars, and the newly launched Akool Live Suite—a first-of-its-kind collection of real-time tools enabling live avatars, live face swap, and dynamic video generation with minimal delay.

Unlike Sora, which crafts narratives from text prompts, Akool Live Camera thrives in unscripted environments such as live streams, virtual meetings, and AR gaming. The goal is to take advantage of low latency to make the synthetic humans Akool creates indistinguishable from reality, Lu said.

The company has about 80 people now, with team members who used to work at Apple and Google. Lu himself worked at Google Cloud with a focus on cloud video processing. He also worked at Apple on Face ID. While the headquarters is in Palo Alto, Lu said the team is spread out.

He said the team hasn’t raised much money and is instead generating revenue from AI avatars, face swapping and video translation. Lu said the company can handle a wide variety of languages in real-time translation.

“Definitely AI video is moving at a faster pace of change. We are following that pace. In the long run, I believe that having a good user community will be pretty important in the coming years,” he said. “I predict the tech will get mature pretty quickly.”

As a small company, he said, the focus is on developing models that are better for the tasks that people care about.

“We are very ahead in this live game. Definitely, we have very strong engineers [who] are optimizing all the AI to make them run faster. We also have very strong engineers to optimize the whole pipeline to make them work well and have good experiences,” Lu said. “And we build our models from scratch ourselves. From model design to data collection to the whole pipeline, rather than leveraging some open source stuff.”

He said the company checks for copyrights when training models in order to avoid using IP for which it doesn’t have rights.

I asked what Lu thinks about the worries people have about AI. He noted AI is getting “high attention” and his goal is to make AI work properly. The company puts watermarks into AI-generated content so it can’t be mistaken for human-made work. The company also has content moderation tools.

Read More »

The Download: the story of OpenAI, and making magnesium

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI: The power and the pride

OpenAI’s release of ChatGPT 3.5 set in motion an AI arms race that has changed the world. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it.

In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI.
Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others. Read the full review.—Mat Honan
This startup wants to make more climate-friendly metal in the US

The news: A California-based company called Magrathea just turned on a new electrolyzer that can make magnesium metal from seawater. The technology has the potential to produce the material, which is used in vehicles and defense applications, with net-zero greenhouse-gas emissions.

Why it matters: Today, China dominates production of magnesium, and the most common method generates a lot of the emissions that cause climate change. If Magrathea can scale up its process, it could help provide an alternative source of the metal and clean up industries that rely on it, including automotive manufacturing. Read the full story.

—Casey Crownhart

A new sodium metal fuel cell could help clean up transportation

A new type of fuel cell that runs on sodium metal could one day help clean up sectors where it’s difficult to replace fossil fuels, like rail, regional aviation, and short-distance shipping. The device represents a departure from technologies like lithium-based batteries and is more similar conceptually to hydrogen fuel cell systems.

The sodium-air fuel cell has a higher energy density than lithium-ion batteries and doesn’t require the super-cold temperatures or high pressures that hydrogen does, making it potentially more practical for transport. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US state department is considering vetting foreign students’ social media
After ordering US embassies to suspend international students’ visa appointments. (Politico)
+ Applicants’ posts, shares and comments could be assessed. (The Guardian)
+ The Trump administration also wants to cut off Harvard’s funding. (NYT $)

2 SpaceX’s rocket exploded during its test flight
It’s the third consecutive explosion the company has suffered this year. (CNBC)
+ It was the first significant attempt to reuse Starship hardware. (Space)
+ Elon Musk is fairly confident the problem with the engine bay has been resolved. (Ars Technica)

3 The age of AI layoffs is here
And it’s taking place in conference rooms, not on factory floors. (Quartz)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

4 Thousands of IVF embryos in Gaza were destroyed by Israeli strikes
An attack destroyed the fertility clinic where they were housed. (BBC)
+ Inside the strange limbo facing millions of IVF embryos. (MIT Technology Review)

5 China’s overall greenhouse gas emissions have fallen for the first time
Even as energy demand has risen. (Vox)
+ China’s complicated role in climate change. (MIT Technology Review)

6 The sun is damaging Starlink’s satellites
Its eruptions are reducing the satellites’ lifespans. (New Scientist $)
+ Apple’s satellite connectivity dreams are being thwarted by Musk. (The Information $)

7 European companies are struggling to do business in China
Even the ones that have operated there for decades. (NYT $)
+ The country’s economic slowdown is making things tough. (Bloomberg $)
8 US hospitals are embracing helpful robots
They’re delivering medications and supplies so nurses don’t have to. (FT $)
+ Will we ever trust robots? (MIT Technology Review)

9 Meet the people who write the text messages on your favorite show 💬
They try to make messages as realistic, and intriguing, as possible. (The Guardian)
10 Robot dogs are delivering parcels in Austin
Well, over 100-yard distances at least. (TechCrunch)

Quote of the day

“I wouldn’t say there’s hope. I wouldn’t bet on that.”

—Michael Roll, a partner at law firm Roll & Harris, explains to Wired why businesses shouldn’t get their hopes up over obtaining refunds for Donald Trump’s tariff price hikes.
One more thing

Is the digital dollar dead?

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries launched CBDC research projects, including the US.

How things change. The digital dollar—even though it doesn’t exist—has now become political red meat, as some politicians label it a dystopian tool for surveillance. So is the dream of the digital dollar dead? Read the full story.

—Mike Orcutt
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Recently returned from vacation? Here’s how to cope with coming back to reality.
+ Reconnecting with friends is one of life’s great joys.
+ A new Parisian cocktail bar has done away with ice entirely in a bid to be more sustainable.
+ Why being bored is good for you—no, really.

Read More »

This startup wants to make more climate-friendly metal in the US

A California-based company called Magrathea just turned on a new electrolyzer that can make magnesium metal from seawater. The technology has the potential to produce the material, which is used in vehicles and defense applications, with net-zero greenhouse-gas emissions. Magnesium is an incredibly light metal, and it’s used for parts in cars and planes, as well as in aluminum alloys like those in vehicles. The metal is also used in defense and industrial applications, including the production processes for steel and titanium. Today, China dominates production of magnesium, and the most common method generates a lot of the emissions that cause climate change. If Magrathea can scale up its process, it could help provide an alternative source of the metal and clean up industries that rely on it, including automotive manufacturing. The star of Magrathea’s process is an electrolyzer, a device that uses electricity to split a material into its constituent elements. Using an electrolyzer in magnesium production isn’t new, but Magrathea’s approach represents an update. “We really modernized it and brought it into the 21st century,” says Alex Grant, Magrathea’s cofounder and CEO.
The whole process starts with salty water. There are small amounts of magnesium in seawater, as well as in salt lakes and groundwater. (In seawater, the concentration is about 1,300 parts per million, so magnesium makes up about 0.1% of seawater by weight.) If you take that seawater or brine and clean it up, concentrate it, and dry it out, you get a solid magnesium chloride salt. Magrathea takes that salt (which it currently buys from Cargill) and puts it into the electrolyzer. The device reaches temperatures of about 700 °C (almost 1,300 °F) and runs electricity through the molten salt to split the magnesium from the chlorine, forming magnesium metal.
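Chemically, what happens inside the cell is standard molten-salt electrolysis of magnesium chloride (textbook chemistry rather than a company-specific disclosure): current through the melt reduces magnesium at the cathode and releases chlorine gas at the anode.

```latex
\mathrm{MgCl_2\,(molten)} \;\longrightarrow\; \mathrm{Mg\,(l)} \;+\; \mathrm{Cl_2\,(g)}
```

Magnesium melts at about 650 °C, which is why the cell runs near 700 °C and can tap the metal as a liquid.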
Typically, running an electrolyzer in this process would require a steady source of electricity. The temperature is generally kept just high enough to maintain the salt in a molten state. Allowing it to cool down too much would allow it to solidify, messing up the process and potentially damaging the equipment. Heating it up more than necessary would just waste energy.

Magrathea’s approach builds in flexibility. Basically, the company runs its electrolyzer about 100 °C higher than is necessary to keep the molten salt a liquid. It then uses the extra heat in inventive ways, including to dry out the magnesium salt that eventually goes into the reactor. This preparation can be done intermittently, so the company can take in electricity when it’s cheaper or when more renewables are available, cutting costs and emissions.

In addition, the process will make a co-product, called magnesium oxide, that can be used to trap carbon dioxide from the atmosphere, helping to cancel out the remaining carbon pollution. The result could be a production process with net-zero emissions, according to an independent life cycle assessment completed in January. While it likely won’t reach this bar at first, the potential is there for a much more climate-friendly process than what’s used in the industry today, Grant says.

Breaking into magnesium production won’t be simple, says Simon Jowitt, director of the Nevada Bureau of Mines and of the Center for Research in Economic Geology at the University of Nevada, Reno. China produces roughly 95% of the global supply as of 2024, according to data from the US Geological Survey. This dominant position means companies there can flood the market with cheap metal, making it difficult for others to compete. “The economics of all this is uncertain,” Jowitt says.

The US has some trade protections in place, including an anti-dumping duty, but newer players with alternative processes can still face obstacles. US Magnesium, a company based in Utah, was the only company making magnesium in the US in recent years, but it shut down production in 2022 after equipment failures and a history of environmental concerns.

Magrathea plans to start building a demonstration plant in Utah in late 2025 or early 2026, which will have a capacity of roughly 1,000 tons per year and should be running in 2027. In February the company announced that it signed an agreement with a major automaker, though it declined to share its name on the record. The automaker pre-purchased material from the demonstration plant and will incorporate it into existing products. After the demonstration plant is running, the next step would be to build a commercial plant with a larger capacity of around 50,000 tons annually.
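The flexible-operation idea described above reduces to a simple scheduling rule: hold the melt above its freezing point at all times, but shift the energy-hungry, interruptible salt-drying step into hours when electricity is cheap or renewables-rich. A minimal sketch, with made-up numbers purely for illustration (not Magrathea’s control logic):

```python
# Price-aware scheduling of the interruptible drying step (illustrative only).

MELT_HOLD_TEMP_C = 700   # roughly what keeps the chloride salt molten
OPERATING_TEMP_C = 800   # run ~100 C hotter, banking surplus heat for drying
CHEAP_PRICE = 40.0       # hypothetical $/MWh threshold for running the dryer

def plan_drying_hours(hourly_prices):
    """Return the hours of the day in which to run the salt dryer."""
    return [hour for hour, price in enumerate(hourly_prices) if price <= CHEAP_PRICE]

day_ahead = [55, 48, 35, 28, 22, 30, 52, 80, 95, 70, 41, 33]  # example $/MWh
print(plan_drying_hours(day_ahead))  # -> [2, 3, 4, 5, 11]
```

The electrolyzer itself keeps running steadily; only the preparation step chases cheap power, which is what lets the plant cut costs and emissions without risking a frozen melt.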

Read More »

OpenAI: The power and the pride

In April, Paul Graham, cofounder of the tech startup accelerator Y Combinator, sent a tweet in response to former YC president and current OpenAI CEO Sam Altman. Altman had just bid a public goodbye to GPT-4 on X, and Graham had a follow-up question.

“If you had [GPT-4’s model weights] etched on a piece of metal in the most compressed form,” Graham wrote, referring to the values that determine the model’s behavior, “how big would the piece of metal have to be? This is a mostly serious question. These models are history, and by default digital data evaporates.”

There is no question that OpenAI pulled off something historic with its release of ChatGPT 3.5 in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution.

In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen Hao of the Atlantic tells the story of the company’s rise to power and its far-reaching impact all over the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal’s Keach Hagey, homes in more on Altman’s personal life, from his childhood through the present day, in order to tell the story of OpenAI. Both paint complex pictures and show Altman in particular as a brilliantly effective yet deeply flawed creature of Silicon Valley—someone capable of always getting what he wants, but often by manipulating others.
Hao, who was formerly a reporter with MIT Technology Review, began reporting on OpenAI while at this publication and remains an occasional contributor. One chapter of her book grew directly out of that reporting. And in fact, as Hao says in the acknowledgments of Empire of AI, some of her reporting for MIT Technology Review, a series on AI colonialism, “laid the groundwork for the thesis and, ultimately, the title of this book.” So you can take this as a kind of disclaimer that we are predisposed to look favorably on Hao’s work.  With that said, Empire of AI is a powerful work, bristling not only with great reporting but also with big ideas. This comes across in service to two main themes. 
The first is simple: It is the story of ambition overriding ethics. The history of OpenAI as Hao tells it (and as Hagey does too) is very much a tale of a company that was founded on the idealistic desire to create a safety-focused artificial general intelligence but instead became more interested in winning. This is a story we’ve seen many times before in Big Tech. See Theranos, which was going to make diagnostics easier, or Uber, which was founded to break the cartel of “Big Taxi.” But the closest analogue might be Google, which went from “Don’t be evil” to (at least in the eyes of the courts) illegal monopolist. For that matter, consider how Google went from holding off on releasing its language model as a consumer product due to an abundance of caution, to rushing a chatbot out the door to catch up with and beat OpenAI. In Silicon Valley, no matter what one’s original intent, it always comes back to winning.

The second theme is more complex and forms the book’s thesis about what Hao calls AI colonialism. The idea is that the large AI companies act like traditional empires, siphoning wealth from the bottom rungs of society in the forms of labor, creative works, raw materials, and the like to fuel their ambition and enrich those at the top of the ladder. “I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires,” she writes. “During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment.” She goes on to chronicle her own growing disillusionment with the industry. “With increasing clarity,” she writes, “I realized that the very revolution promising to bring a better future was instead, for people on the margins of society, reviving the darkest remnants of the past.”

To document this, Hao steps away from her desk and goes out into the world to see the effects of this empire as it sprawls across the planet. She travels to Colombia to meet with data labelers tasked with teaching AI what various images show, one of whom she describes sprinting back to her apartment for the chance to make a few dollars. She documents how workers in Kenya who performed data-labeling content moderation for OpenAI came away traumatized by seeing so much disturbing material. In Chile she documents how the industry extracts precious resources—water, power, copper, lithium—to build out data centers.

She lands on the ways people are pushing back against the empire of AI across the world. Hao draws lessons from New Zealand, where Maori people are attempting to save their language using a small language model of their own making. Trained on volunteers’ voice recordings and running on just two graphics processing units, or GPUs, rather than the thousands employed by the likes of OpenAI, it’s meant to benefit the community, not exploit it.

Hao writes that she is not against AI. Rather: “What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed will ever emerge from—a vision of the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project … [The New Zealand model] shows us another way. It imagines how AI could be exactly the opposite.
Models can be small and task-specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers.”

Hagey’s book is more squarely focused on Altman’s ambition, which she traces back to his childhood. Yet interestingly, she also zeroes in on the OpenAI CEO’s attempt to create an empire. Indeed, “Altman’s departure from YC had not slowed his civilization-building ambitions,” Hagey writes. She goes on to chronicle how Altman, who had previously mulled a run for governor of California, set up experiments with income distribution via Tools for Humanity, the parent company of Worldcoin. Hagey quotes Altman saying of it, “I thought it would be interesting to see … just how far technology could accomplish some of the goals that used to be done by nation-states.”

Overall, The Optimist is the more straightforward business biography of the two. Hagey has packed it full with scoops and insights and behind-the-scenes intrigue. It is immensely readable as a result, especially in the second half, when OpenAI really takes over the story. Hagey also seems to have been given far more access to Altman and his inner circles, personal and professional, than Hao was, and that allows for a fuller telling of the CEO’s story in places. For example, both writers cover the tragic story of Altman’s sister Annie, her estrangement from the family, and her accusations in particular about suffering sexual abuse at the hands of Sam (something he and the rest of the Altman family vehemently deny). Hagey’s telling provides a more nuanced picture of the situation, with more insight into family dynamics.

Hagey concludes by describing Altman’s reckoning with his role in the long arc of human history and what it will mean to create a “superintelligence.” His place in that sweep is something that clearly has consumed the CEO’s thoughts. When Paul Graham asked about preserving GPT-4, for example, Altman had a response at the ready. He replied that the company had already considered this, and that the sheet of metal would need to be 100 meters square.

Read More »

Everyone’s looking to get in on vibe coding — and Google is no different with Stitch, its follow-up to Jules

Vibe coding is arguably one of the hottest trends in tech right now, as it reflects a wider adoption of AI and natural language prompts for basic code completion (challenging the conventional coding mindset that humans should complete downstream tasks). Google is releasing Stitch, a new experiment from Google Labs, to compete with Microsoft, AWS, and other existing end-to-end coding tools. Now in beta, the platform designs user interfaces (UIs) with one prompt—and some developers are already gushing. “Google dropped the most powerful UI designer in the world,” Brendan Jowett, owner of voice AI company Inflate AI, posted on X. The use of AI in programming and development certainly isn’t new, but “vibe coding” — a term coined by OpenAI cofounder Andrej Karpathy earlier this year — is a newer approach that uses generative AI to automate coding tasks typically done manually. This goes beyond existing AI assistants and drag-and-drop no-code and low-code tools: The focus is on the end result, not the journey there. “You finally give into the vibes, embrace exponentials and forget that code even exists,” Karpathy wrote on X. Top players in the integrated development environment (IDE) space include Windsurf (formerly Codeium), Cursor, Replit, Lovable, Bolt, Devin and Aider. Anthropic also recently launched its command-line AI agent Claude Code. Larger players in addition to Google are looking to stake their claim, as well: Amazon Web Services (AWS) is offering its Amazon Q Developer AI assistant as an add-on for developers to access directly at any point in their coding; Microsoft released GitHub Copilot agent mode; OpenAI is looking to extend its capabilities in vibe coding with its Codex update and intended $3 billion purchase of Windsurf; and

Read More »

Spott’s AI-native recruiting platform scores $3.2M to end hiring software chaos

Spott has raised $3.2 million in seed funding to build an AI-native platform that promises to transform how recruitment agencies operate. The San Francisco-based startup announced the funding round today, led by Base10 Partners with participation from Y Combinator, Fortino, True Equity, and several angel investors. The capital injection follows Spott’s recent completion of Y Combinator’s Winter 2025 accelerator program. “For too long, recruitment firms have relied on outdated software to manage daily operations,” said Lander Degreve, co-founder and CEO at Spott, in an exclusive interview with VentureBeat. “Spott solves the problem of outdated, passive, and fragmented recruitment software by offering an all-in-one AI-native platform that actively automates entire workflows, enabling recruiters to focus on what matters most and make more placements.” The recruitment technology space has seen a flood of point solutions in recent years, especially since generative AI tools began proliferating in 2022. However, most of these tools address single pain points rather than reimagining the entire recruitment workflow.

How Spott’s all-in-one AI platform challenges recruitment tech fragmentation

Unlike narrowly focused AI tools that create more tech fragmentation, Spott is building a comprehensive operating system that manages everything from candidate sourcing and screening to placement. The platform already claims to have generated over 1,000 candidate reports for its customers, which include Stanton Chase, a global executive search firm. “Existing recruitment software is largely a passive system of record, often requiring multiple integrations just to take notes, search & match candidates, run outbound campaigns or reformat CVs,” said Degreve. “As AI enters the space, the number of disconnected point solutions is only increasing. Spott brings all these capabilities, including cutting-edge AI, together in one end-to-end recruitment platform by default.” The company’s

Read More »

Egypt Considers Securing Another LNG Vessel as Import Needs Jump

Egypt is considering adding yet another LNG import vessel, according to people familiar with the plan, as the nation that was exporting gas just a year ago is now rushing to lock in supplies to cover domestic demand.

A new vessel would add to the Energos Power ship that arrived in the North African country’s Alexandria port earlier this week and the Hoegh Galleon operating in Ain Sokhna. Two others – the Energos Eskimo, arriving this summer, and another from Turkish company Botas – have also been lined up. Egypt’s oil ministry didn’t immediately reply to a request for comment on the additional vessel.

The country has moved quickly to lease import terminals, known as floating storage and regasification units (FSRUs), over the past 12 months as overseas purchases surged amid declining local gas output and rising demand. It is in talks with companies including Saudi Aramco, Trafigura Group and Vitol Group for LNG supplies until 2028, putting it on course to be a long-term importer and helping tighten global gas markets. Egypt is also expected to replace the Hoegh Galleon with the Hoegh Gandria in the fourth quarter of 2026.

The FSRUs that have been secured are expected to be installed at or near the existing LNG import facility in Ain Sokhna. Work is also underway for import infrastructure near Alexandria on the Mediterranean Sea, according to the people, who asked not to be identified discussing ongoing talks.

Exact timing and locations of the leased FSRUs could be subject to change, as could details on where a new import vessel could be added, the people said.

Read More »

OMV to Build Major Green Hydrogen Plant in Lower Austria

OMV AG has made a final investment decision to proceed with the construction of an electrolysis facility in Bruck an der Leitha, Lower Austria. The 140-MW electrolyzer – a facility that splits water molecules into hydrogen and oxygen using electricity – is planned to produce up to 23,000 metric tons a year of green hydrogen. Expected to start production in 2027, the project will use wind, solar and hydro power. It would be the biggest European electrolytic facility to produce renewable hydrogen, OMV said. Hydrogen produced through electrolysis that runs on renewable power is called green or renewable hydrogen.

On June 30 OMV announced the start of production at its first commercial-scale green hydrogen facility, built with a capacity of 1,500 metric tons per annum at its Schwechat refinery near Vienna. The plant uses a 10-MW PEM (polymer electrolyte membrane) electrolyzer powered by hydro, solar and wind energy. The process avoids up to 15,000 metric tons of carbon dioxide (CO2) emissions a year, equivalent to the annual CO2 footprint of 2,000 people based on a European Union average, according to OMV. Output from the newly inaugurated facility will be used to decarbonize the refinery and produce more sustainable fuels and chemicals, including sustainable aviation fuel and renewable diesel.

Martijn van Koten, OMV executive vice president for fuels and chemicals, said of the planned project, “With this project, we are re-inventing the production of everyday essential fuels and chemical products – a groundbreaking step that demonstrates how industrial innovation and sustainability can go hand in hand”.

“By using green hydrogen in the future, we are making the processes and production of fuels and chemical products more sustainable and are future-proofing our industry.

“Our planned 140 MW electrolysis plant in Bruck an der Leitha will meet a significant share of the hydrogen demand at the OMV
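The two plants also allow a rough sanity check on the announced figures. A back-of-the-envelope sketch, not an OMV calculation, assuming hydrogen output scales roughly linearly with electrolyzer power:

```python
# Rough consistency check using only the two figures in the article.
# Assumption: output scales roughly linearly with electrolyzer power
# (real plants differ with efficiency and uptime).
schwechat_mw = 10        # PEM electrolyzer at the Schwechat refinery
schwechat_tpy = 1_500    # metric tons of green hydrogen per year
bruck_mw = 140           # planned Bruck an der Leitha electrolyzer

implied_tpy = schwechat_tpy * bruck_mw / schwechat_mw
print(f"Linear scaling implies ~{implied_tpy:,.0f} t/y")  # ~21,000 t/y
```

Linear scaling gives about 21,000 metric tons a year; the announced "up to 23,000" would be consistent with a somewhat higher capacity factor or efficiency at the larger plant.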

Read More »

OKEA Discovers More Oil in Brage Field in Norwegian North Sea

OKEA ASA and its partners in production license 055 have made a discovery that is estimated to hold 300,000 to 2.8 million barrels of recoverable oil equivalent along the eastern flank of the already producing Brage field on Norway’s side of the North Sea. The discovery was made in the southern part of the Prince prospect in wildcat well 31/4-A-23 G. Well 31/4-A-23 F, in the northern part of the Prince prospect, turned up dry.

“The licensees will now assess the deposit as part of the further development of the Brage field”, the Norwegian Offshore Directorate said in an online statement. The stakeholders are OKEA with a 35.2 percent stake, Lime Petroleum AS with 33.84 percent, DNO Norge AS with 14.26 percent, Petrolia NOCO AS with 12.26 percent and M Vest Energy AS with 4.44 percent.

“The field has been in production for a long time, and work is under way to identify new methods to improve recovery”, the upstream regulator said. “New wells are being drilled, often combined with investigation of nearby prospects”.

Well A-23 F aimed to prove petroleum in Upper Jurassic reservoir rocks in the Sognefjord Formation, while A-23 G aimed to delineate a potential discovery in A-23 F and delineate the northern part of 31/4-A-13 E (Kim).

A-23 F, horizontally drilled, showed a sandstone layer in the Sognefjord Formation with a total measured thickness of 220 meters (721.78 feet) along the wellbore and 12 meters of vertical thickness with “good reservoir properties”, the Directorate reported. It was drilled to a measured depth of 6,285 meters and a vertical depth of 2,153 meters below sea level in the Sognefjord Formation.

A-23 G was drilled horizontally at a vertical depth of 2,120-2,171 meters along the eastern flank of the Brage field. It encountered a sandstone layer three to four meters thick
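The gap between the two depth figures for A-23 F is what marks it as a long horizontal well: measured depth (MD) is the distance drilled along the wellbore, while true vertical depth (TVD) is how far below sea level the bit actually sits. A minimal illustrative sketch, using only the two depths quoted above and a deliberately crude geometry (real trajectories curve):

```python
# Illustrative only: MD vs TVD for the horizontal well 31/4-A-23 F,
# using the two depths reported by the Norwegian Offshore Directorate.
md_m = 6_285    # measured depth: length drilled along the wellbore, meters
tvd_m = 2_153   # true vertical depth below sea level at total depth, meters

# Crude lower bound on the lateral section, treating the path as one
# vertical leg followed by one horizontal leg.
lateral_min_m = md_m - tvd_m
print(f"Horizontal/deviated section of at least ~{lateral_min_m:,} m")
# -> ~4,132 m drilled beyond what a straight vertical hole would need
```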

Read More »

Eni to Develop Three PV Plants for Marelli

Eni SpA’s renewables arm, Plenitude, has signed an agreement with Marelli Holdings to build three photovoltaic plants and an Energy Community. Eni said in a media release that the facilities will be located at Marelli’s production sites in Melfi (Potenza), Sulmona (L’Aquila), and Turin, with a total capacity of 5.4 megawatts-peak (MWp). The projects will be carried out under an EPC (Energy Performance Contract) model, allowing Marelli to obtain renewable energy at a fixed cost without any initial investment, Eni said.

At the Melfi site, Plenitude has designed an Energy Community for Marelli under the Individual Remote Self-Consumption (AID, after its Italian name) configuration. A photovoltaic park with a capacity of 999 kWp will be installed on Marelli’s land, allowing energy sharing with a neighboring company. The plant will benefit from 20-year state incentives allocated to support local social initiatives, Eni said. Plenitude is promoting Energy Communities to support the transition to a more sustainable and participatory energy system, allowing producers and consumers to share renewable energy.

“We are excited to announce our collaboration with Marelli, a global leader in the automotive sector, and to support them in the challenge of the energy transition with solutions based on a renewable energy-sharing model in which we firmly believe”, Vincenzo Viganò, Head of Retail for the Italian Market at Plenitude, said.

Eni said Plenitude will assist Marelli throughout every stage of the project, from the planning and building of the facilities to the application for incentives. It will also offer its technological platform, “Plenitude Comunità Energetiche”, which will facilitate the management and oversight of the AID configuration.

Meanwhile, at the production sites in Sulmona and Turin, the photovoltaic plants will have an installed capacity of 4 MWp and 400 kWp, respectively, contributing to potential energy cost savings for these sites, Eni said.
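The per-site figures line up with the announced total, a small check worth making explicit (a sketch, assuming the three plants are the only contributors to the 5.4 MWp):

```python
# Quick consistency check: do the three announced plants add up to the
# stated 5.4 MWp total? (Assumes no other capacity is involved.)
melfi_kwp = 999       # Energy Community PV park at Melfi
sulmona_kwp = 4_000   # 4 MWp plant at Sulmona
turin_kwp = 400       # 400 kWp plant at Turin

total_mwp = (melfi_kwp + sulmona_kwp + turin_kwp) / 1_000
print(f"Total: {total_mwp:.3f} MWp")  # 5.399 MWp, i.e. the announced 5.4 MWp
```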

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE