Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI:
Curated updates on artificial intelligence and what it means for the industries we cover.

Bitcoin:
Curated updates on Bitcoin and the infrastructure behind it.

Datacenter:
Curated updates on datacenter buildout and capacity.

Energy:
Curated updates on power markets and energy technology.
Featured Articles

The first human test of a rejuvenation method will begin “shortly”
When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.” Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.” “ER-100?” Musk asked. “Yes,” replied Sinclair.
ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off. “Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment. Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals. But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December. To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored. Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. 
“It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.” The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like those found in an early embryo that develop into the different specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells. But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.
That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is. In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.” Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to. “Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email. Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.” Life Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem. 
But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial. The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible. But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.” Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say. “I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio. And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state. Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve time reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now. “Are their factors the best version of rejuvenation? We don’t think they are. 
I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”

OpenAI’s latest product lets you vibe code science
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.” OpenAI claims that around 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.
Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots. “I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”
Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.”

By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google DeepMind. Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a markup language that scientists commonly use to format their papers. A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want: it can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs.

It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery? That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term. 
“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”
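Prism’s editor targets LaTeX, so the documents it manipulates are plain text with markup. For readers who haven’t used it, a minimal paper skeleton (purely illustrative, not specific to Prism) looks like this:

```latex
\documentclass{article}
\usepackage{amsmath}  % standard package for mathematical typesetting

\title{A Minimal Example}
\author{A. Researcher}

\begin{document}
\maketitle

\section{Introduction}
Inline math such as $E = mc^2$ sits in the running text, while displayed
equations get their own numbered environment:
\begin{equation}
  \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}.
\end{equation}
\end{document}
```

An assistant embedded in the editor works over exactly this kind of source, which is why chat-driven drafting and turning whiteboard scribbles into equations map naturally onto it.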

Mexico Shelves Planned Shipment of Oil to Cuba
Mexico’s state oil company backtracked on plans to send a much-needed shipment of crude oil to Cuba, a long-time ally of ousted Venezuelan leader Nicolas Maduro. Petroleos Mexicanos, which was expected to send a shipment this month, removed the cargo from its schedule, according to documents seen by Bloomberg. The shipment was set to load in mid-January and would have arrived in Cuba before the end of the month under the original schedule. Pemex and Mexico’s Energy Ministry didn’t immediately return a message seeking comment. While it’s unclear why the cargo was shelved, the removal comes as the administration of US President Donald Trump increases pressure on the Caribbean island. “THERE WILL BE NO MORE OIL OR MONEY GOING TO CUBA – ZERO! I strongly suggest they make a deal, BEFORE IT IS TOO LATE,” Trump said in a Truth Social post a week after Maduro’s capture by US forces. Before Trump’s comments on Cuba, President Claudia Sheinbaum had said Mexico planned to continue supplying oil to Cuba as part of humanitarian aid to the island, a country plagued by chronic power outages, food and fuel shortages. Mexico started sending oil to Cuba in 2023, when Venezuela reduced supplies amid its falling oil production. Pemex sent an average of one ship per month, or the equivalent of 20,000 barrels a day of crude oil last year, according to data compiled by Bloomberg. The canceled shipment was expected to load in mid-January on board the vessel Swift Galaxy, according to the document. It was removed from the schedule without an explanation.

Gauging the real impact of AI agents
The primary network issue for AI agents is dealing with implicit and creeping data. There’s one important difference between an AI agent component and an ordinary software component: software is explicit in its use of data (the programming includes data identification), while AI is implicit in its data use. The model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often true that when an agentic component is used, it’s determined that additional data resources are needed. Are all these resources in the same place? Probably not. The enterprises with the most experience with AI agents say it would be smart to expect some data center network upgrades to link agents to databases, and if the agents are distributed away from the data center, it may be necessary to improve the agent sites’ connection to the corporate VPN. As agents evolve into real-time applications, they must also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull at the source of hosting to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability.

It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. Right now, these tend to exist within a fairly small circle—a plant, a campus, perhaps a city or town—but over time, key enterprises say that their new-service interest could span a metro area. They point out that the real-time edge applications
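The placement trade-off described above can be made concrete with a toy latency budget: sum the network hops an agent transaction crosses and compare the total against the real-time target. All numbers and hop names below are illustrative assumptions, not measurements:

```python
# Hypothetical one-way latencies for the hops in an agent workflow, in ms.
hops_ms = {
    "user -> agent host": 20.0,
    "agent host -> database": 5.0,
    "agent host -> factory controller": 8.0,
}

budget_ms = 50.0  # assumed end-to-end target for a real-time control loop

# Total latency across all hops, compared against the budget.
total_ms = sum(hops_ms.values())
print(f"total hop latency: {total_ms:.0f} ms (budget: {budget_ms:.0f} ms)")

# The largest contributor is the hop that re-siting the agent would shorten.
worst = max(hops_ms, key=hops_ms.get)
print(f"largest contributor: {worst}")
```

This is the "pull" the paragraph describes: moving the agent host closer to one endpoint shrinks one entry in the table while growing another, so the network, not placement alone, has to close the gap.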

Baker Hughes Sees Record Year for Industrial, Energy Tech Bookings
Baker Hughes Co has reported record orders of $14.87 billion from its industrial and energy technology (IET) business for 2025, including $4.02 billion for the fourth quarter. “IET achieved a record backlog of $32.4 billion at year-end, and book-to-bill exceeded 1x”, chair and chief executive Lorenzo Simonelli said in an online statement. “For the second consecutive year, non-LNG equipment orders represented approximately 85 percent of total IET orders, which highlights the end-market diversity and versatility of our IET portfolio”. IET delivered $3.81 billion in revenue for October-December 2025, up 13 percent from the prior quarter and nine percent year-on-year. “The increase was driven by gas technology equipment, up $189 million, or 11 percent year-over-year, [and] gas technology services, up $86 million, or 11 percent year-over-year”, Baker Hughes said. Q4 2025 IET orders totaled $4.02 billion, down three percent against the prior three-month period but up seven percent compared to Q4 2024. “The [year-over-year] increase was driven by continued strength in climate technology solutions, industrial technology, and gas technology services”, the Houston, Texas-based company said. Segment EBITDA came at $761 million, up 20 percent sequentially and 19 percent year-on-year. “The year-over-year increase in EBITDA was driven by productivity, volume, price and FX [foreign exchange], partially offset by inflation”, Baker Hughes said. Its other segment, oilfield services and equipment (OFSE), logged $3.57 billion in revenue for Q4 2025, down two percent quarter-on-quarter and eight percent year-on-year. That was driven by declines in its main markets, North America and the Middle East/Asia, with both regions registering quarter-on-quarter and year-on-year drops in revenue. OFSE orders in Q4 2025 totaled $3.86 billion, down five percent quarter-on-quarter but up three percent year-on-year. 
OFSE EBITDA landed at $647 million, down four percent quarter-on-quarter and 14 percent year-on-year. IET “more than offset continued macro‑driven softness in OFSE, where margins remained resilient
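As a sanity check on the reported figures, book-to-bill is simply orders divided by revenue for the period, and the prior quarter can be backed out from the stated growth rate. A minimal sketch using only numbers from the article (function and variable names are mine):

```python
def pct_change(new: float, old: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100.0

# Q4 2025 IET figures from the article, in $ billions.
q4_orders = 4.02
q4_revenue = 3.81

# Book-to-bill: orders booked relative to revenue billed. A value above 1
# means the backlog keeps growing, consistent with "book-to-bill exceeded 1x".
book_to_bill = q4_orders / q4_revenue
print(f"IET Q4 book-to-bill: {book_to_bill:.2f}")  # -> 1.06

# Implied Q3 2025 IET revenue, backed out from the reported +13% sequential rise.
q3_revenue = q4_revenue / 1.13
print(f"implied Q3 IET revenue: ${q3_revenue:.2f} bn")  # -> $3.37 bn
print(f"check: {pct_change(q4_revenue, q3_revenue):.0f}% sequential rise")  # -> 13%
```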

Analysts Explain Tuesday’s USA NatGas Price Drop
In separate exclusive interviews with Rigzone on Tuesday, Phil Flynn, a senior market analyst at the PRICE Futures Group, and Art Hogan, Chief Market Strategist at B. Riley Wealth, explained today’s U.S. natural gas price drop. “Natural gas is pulling back after the worst of the cold has passed,” Flynn told Rigzone. “We’ve lifted some of the winter storm warnings, and this should allow some of the freeze-offs in the basins to get production back up,” he added. “We saw [a] significant drop in production because of the cold weather and now some of that will be coming back online,” he continued. In his interview with Rigzone, Flynn warned that the weather is still going to be “key”. “Some forecasters are predicting a warm-up, but then after that another blast of the cold,” he said. “If that’s the case … these huge moves in natural gas may be far from over”, Flynn told Rigzone. He added, however, that, “at least in the short term, [a] return to more moderate temperatures from what we had experienced should allow for the market to recover as far as production goes, and exports”. When he was asked to explain the U.S. natural gas price drop today, Hogan told Rigzone that “trees don’t grow to the sky”. “U.S. natural gas prices dipped today amid profit-taking by traders, after soaring by over 117 percent in the five days to Monday,” he said. “The benchmark jumped by 30 percent on Monday alone. Last week, gas prices went up by as much as 70 percent amid frigid weather that apparently took gas traders by surprise,” he added. “This surprise led to frantic short-covering and position exits at a hefty loss. Currently, natural gas is trading at over $6.60 per million British thermal units [MMBtu], which is the highest in
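The quoted moves fit together once you remember that percentage changes compound multiplicatively rather than adding. A small sketch with the article's round numbers (the exact daily path over the five sessions isn't given):

```python
def compound(*pct_moves: float) -> float:
    """Total percent change implied by a sequence of percent moves."""
    total = 1.0
    for p in pct_moves:
        total *= 1.0 + p / 100.0
    return (total - 1.0) * 100.0

# Last week's ~70% run-up followed by Monday's 30% jump:
five_day = compound(70, 30)
print(f"{five_day:.0f}%")  # -> 121%, in line with "over 117 percent" in five days
```

Note that 70% + 30% naively suggests 100%, but compounding pushes the total move past 120%, which is why the five-day figure exceeds the sum of the individual legs.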

The first human test of a rejuvenation method will begin “shortly”
When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.” Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.” “ER-100?” Musk asked. “Yes” replied Sinclair.
ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off. “Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment. Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals. But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December. To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored. Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. 
“It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.” The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like those found in an early embryo that develop into the different specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells. But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.
That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is. In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.” Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to. “Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email. Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.” Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it mattersLife Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem. 
But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial. The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible. But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.” Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say. “I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio. And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state. Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve time reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now. “Are their factors the best version of rejuvenation? We don’t think they are. 
I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”

OpenAI’s latest product let’s you vibe code science
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.” OpenAI claims that around 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.
Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots. “I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”
Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.” By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google DeepMind. Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a common coding language that scientists use for formatting scientific papers. A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want. It can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs. It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: Won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery? That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term. 
“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”

Mexico Shelves Planned Shipment of Oil to Cuba
Mexico’s state oil company backtracked on plans to send a much-needed shipment of crude oil to Cuba, a long-time ally of ousted Venezuelan leader Nicolas Maduro. Petroleos Mexicanos, which was expected to send a shipment this month, removed the cargo from its schedule, according to documents seen by Bloomberg. The shipment was set to load in mid-January and would have arrived in Cuba before the end of the month under the original schedule. Pemex and Mexico’s Energy Ministry didn’t immediately return a message seeking comment. While it’s unclear why the cargo was shelved, the removal comes as the administration of US President Donald Trump increases pressure on the Caribbean island. “THERE WILL BE NO MORE OIL OR MONEY GOING TO CUBA – ZERO! I strongly suggest they make a deal, BEFORE IT IS TOO LATE,” Trump said in a Truth Social post a week after Maduro’s capture by US forces. Before Trump’s comments on Cuba, President Claudia Sheinbaum had said Mexico planned to continue supplying oil to Cuba as part of humanitarian aid to the island, a country plagued by chronic power outages, food and fuel shortages. Mexico started sending oil to Cuba in 2023, when Venezuela reduced supplies amid its falling oil production. Pemex sent an average of one ship per month, or the equivalent of 20,000 barrels a day of crude oil last year, according to data compiled by Bloomberg. The canceled shipment was expected to load in mid-January on board the vessel Swift Galaxy, according to the document. It was removed from the schedule without an explanation. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Gauging the real impact of AI agents
That creates the primary network issue for AI agents: dealing with implicit and creeping data. There is one important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data; the programming includes data identification. AI is implicit in its data use: the model was trained on data, and there may well be API linkages to databases that aren’t obvious to the user of the model. It’s also common that when an agentic component runs, it determines that additional data resources are needed. Are all these resources in the same place? Probably not. The enterprises with the most experience with AI agents say it would be smart to expect some data center network upgrades to link agents to databases, and if the agents are distributed away from the data center, it may be necessary to improve the agent sites’ connection to the corporate VPN. As agents evolve into real-time applications, they also need to be proximate to the real-time systems they support (a factory or a warehouse), so the data center, the users, and any real-time process pieces all pull at the choice of hosting location to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. Right now, these tend to exist within a fairly small circle (a plant, a campus, perhaps a city or town), but over time, key enterprises say that their new-service interest could span a metro area. They point out that the real-time edge applications

Baker Hughes Sees Record Year for Industrial, Energy Tech Bookings
Baker Hughes Co has reported record orders of $14.87 billion from its industrial and energy technology (IET) business for 2025, including $4.02 billion for the fourth quarter. “IET achieved a record backlog of $32.4 billion at year-end, and book-to-bill exceeded 1x”, chair and chief executive Lorenzo Simonelli said in an online statement. “For the second consecutive year, non-LNG equipment orders represented approximately 85 percent of total IET orders, which highlights the end-market diversity and versatility of our IET portfolio”. IET delivered $3.81 billion in revenue for October-December 2025, up 13 percent from the prior quarter and 9 percent year-on-year. “The increase was driven by gas technology equipment, up $189 million, or 11 percent year-over-year, [and] gas technology services, up $86 million, or 11 percent year-over-year”, Baker Hughes said. Q4 2025 IET orders totaled $4.02 billion, down 3 percent against the prior three-month period but up 7 percent compared to Q4 2024. “The [year-over-year] increase was driven by continued strength in climate technology solutions, industrial technology, and gas technology services”, the Houston, Texas-based company said. Segment EBITDA came in at $761 million, up 20 percent sequentially and 19 percent year-on-year. “The year-over-year increase in EBITDA was driven by productivity, volume, price and FX [foreign exchange], partially offset by inflation”, Baker Hughes said. Its other segment, oilfield services and equipment (OFSE), logged $3.57 billion in revenue for Q4 2025, down 2 percent quarter-on-quarter and 8 percent year-on-year. That was driven by declines in its main markets, North America and the Middle East/Asia, with both regions registering quarter-on-quarter and year-on-year drops in revenue. OFSE orders in Q4 2025 totaled $3.86 billion, down 5 percent quarter-on-quarter but up 3 percent year-on-year. 
OFSE EBITDA landed at $647 million, down four percent quarter-on-quarter and 14 percent year-on-year. IET “more than offset continued macro‑driven softness in OFSE, where margins remained resilient
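As a side note on the “book-to-bill exceeded 1x” remark above: book-to-bill is simply orders received divided by revenue billed over the same period, with a ratio above 1x indicating a growing backlog. A minimal sketch using the Q4 2025 IET figures quoted here (the function name is ours, purely illustrative):

```python
# Book-to-bill: orders received divided by revenue billed in the same period.
# A ratio above 1x means the backlog is growing.
def book_to_bill(orders_bn: float, revenue_bn: float) -> float:
    return orders_bn / revenue_bn

# Q4 2025 IET figures from the article: $4.02B in orders, $3.81B in revenue.
ratio = book_to_bill(4.02, 3.81)
print(f"IET Q4 book-to-bill: {ratio:.2f}x")  # roughly 1.06x, i.e. above 1x
```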

Analysts Explain Tuesday’s USA NatGas Price Drop
In separate exclusive interviews with Rigzone on Tuesday, Phil Flynn, a senior market analyst at the PRICE Futures Group, and Art Hogan, Chief Market Strategist at B. Riley Wealth, explained today’s U.S. natural gas price drop. “Natural gas is pulling back after the worst of the cold has passed,” Flynn told Rigzone. “We’ve lifted some of the winter storm warnings, and this should allow some of the freeze-offs in the basins to get production back up,” he added. “We saw [a] significant drop in production because of the cold weather and now some of that will be coming back online,” he continued. In his interview with Rigzone, Flynn warned that the weather is still going to be “key”. “Some forecasters are predicting a warm-up, but then after that another blast of the cold,” he said. “If that’s the case … these huge moves in natural gas may be far from over”, Flynn told Rigzone. He added, however, that, “at least in the short term, [a] return to more moderate temperatures from what we had experienced should allow for the market to recover as far as production goes, and exports”. When he was asked to explain the U.S. natural gas price drop today, Hogan told Rigzone that “trees don’t grow to the sky”. “U.S. natural gas prices dipped today amid profit-taking by traders, after soaring by over 117 percent in the five days to Monday,” he said. “The benchmark jumped by 30 percent on Monday alone. Last week, gas prices went up by as much as 70 percent amid frigid weather that apparently took gas traders by surprise,” he added. “This surprise led to frantic short-covering and position exits at a hefty loss. Currently, natural gas is trading at over $6.60 per million British thermal units [MMBtu], which is the highest in
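The string of percentage moves quoted above compounds multiplicatively rather than additively. A small sketch (our own helper, not from any of the sources) showing how a roughly 70 percent weekly run-up followed by a 30 percent single-day jump lands near the cited five-day move:

```python
# Percentage moves compound multiplicatively, not additively.
def compound(moves):
    """Combine successive fractional moves into one overall fractional move."""
    total = 1.0
    for m in moves:
        total *= 1.0 + m
    return total - 1.0

# Illustrative: a ~70% weekly run-up followed by a 30% jump on Monday
# compounds to about a 121% gain, in line with the >117% five-day move cited.
print(f"{compound([0.70, 0.30]):.0%}")
```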

In 2026, virtual power plants must scale or risk being left behind
Rising demand and new technologies are forcing utilities to coordinate distributed energy resources on an unprecedented scale, a trend likely to continue in 2026, analysts and stakeholders say. But intimidating demand forecasts from power-hungry data centers, coupled with aggressive policy shifts away from renewables and efficiency standards, are turning power providers toward large-scale generation like nuclear, geothermal, gas and coal, possibly to the detriment of aggregation and demand response programs, they say. “Utilities are shifting away from DER to focus on [utility-scale] wind and solar in the near term and then new natural gas, [extending the life of] aging coal, and [restarting] shuttered nuclear plants,” said Sally Jacquemin, vice president of power and utilities at AspenTech Digital Grid Management, Emerson. Investment in distribution system modernization is also growing, but DER “is a lower priority,” she added. But grid advocates and utility leaders say distributed resources could provide crucial benefits at a time of rising prices and accelerate the interconnection of large loads, which is a priority of the Trump administration. To do that, virtual power plants must evolve and scale more rapidly, industry sources say, or skyrocketing electricity demand and costs will force attention back to traditional resources. The value of DER to the system will be determined by policies set by states, grid operators, federal regulators and officials in the Trump administration. Allison Wannop, vice president of regulatory affairs and wholesale markets for Sparkfund, predicted that demand growth and affordability challenges will drive innovation to make the most of distribution system resources. “20th century solutions will not build a 21st century grid,” she said.
‘Visibility will be key to VPP proliferation’
2025 was a good year for distributed energy

3D Energi Runs Out of Cash for Victoria Drill Campaign, Suspends Trading
3D Energi Ltd said Tuesday it has voluntarily halted trading on the Australian Securities Exchange (ASX), having defaulted on the payment of its share of costs in a ConocoPhillips-led exploration campaign in the Otway basin offshore Victoria. “Joint venture cash calls for the drilling program are higher than originally forecast and a balance of approximately $2.5 million remains outstanding by the company which it does not currently have”, Melbourne-based 3D Energi said in a stock filing. “A default notice has been issued by the joint venture operator to the company with a remedy period to 6th February. “Additional forecast company drilling program expenditure subject to cash calls due on 6th February is currently estimated at approximately $5.3 million, which if not paid by that date may well become the subject of an additional default notice and remedy period. “Consequently, the company is implementing a suspension of the trading of its shares on ASX while it addresses its funding position and the implications of payment default on the level of its ongoing interest in the permit”. 3D Energi plans to resume ASX trading in the first week of February. Earlier this month it announced the Charlemont-1 gas discovery, the joint venture’s second discovery under the VIC/P79 exploration permit after Essington-1. The newest well targeted the penultimate prospect in the Charlemont trend, which culminates with the La Bella discovery, according to 3D Energi. “Phase 1 of the Otway Exploration Drilling Program has identified important new natural gas resources close to existing offshore gas production and processing infrastructure in the Otway basin, supplying the Australian domestic gas market”, 3D Energi executive chair Noel Newell said in a statement January 14 announcing the second discovery. “This enhances the strategic significance of the discovery and supports future development optionality, subject to further technical and commercial evaluation,

EIA Sees NatGas Price Dropping in 2026 and Rising in 2027
The U.S. Energy Information Administration (EIA) projected that the U.S. natural gas Henry Hub spot price will drop this year and rise next year in its latest short-term energy outlook (STEO). In the January STEO, which was released on January 13 with forecasts completed on January 8, the EIA forecast that the commodity will average $3.46 per million British thermal units (MMBtu) in 2026 and $4.59 per MMBtu in 2027. The U.S. natural gas Henry Hub spot price averaged $3.53 per MMBtu in 2025, the EIA’s latest STEO showed. According to a quarterly breakdown included in its latest STEO, the EIA sees the U.S. natural gas Henry Hub spot price coming in at $3.38 per MMBtu in the first quarter of 2026, $2.75 per MMBtu in the second quarter, $3.42 per MMBtu in the third quarter, $4.28 per MMBtu in the fourth quarter, $4.78 per MMBtu in the first quarter of 2027, $4.30 per MMBtu in the second quarter, $4.43 per MMBtu in the third quarter, and $4.84 per MMBtu in the fourth quarter of next year. Last year, the Henry Hub spot price averaged $4.15 per MMBtu in the first quarter, $3.19 per MMBtu in the second quarter, $3.03 per MMBtu in the third quarter, and $3.75 per MMBtu in the fourth quarter, the EIA’s January STEO showed. “On an annual basis, U.S. natural gas prices are relatively flat in 2026 before rising in 2027 as market conditions tighten,” the EIA said in its latest STEO. “We expect the Henry Hub natural gas spot price will average just under $3.50 per million British thermal units (MMBtu) this year, a two percent decrease from 2025, and then rise by 33 percent in 2027 to an annual average of almost $4.60 per MMBtu,” it added. In its STEO,
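The annual figures above can be sanity-checked against the quarterly breakdown. A quick sketch using a simple unweighted mean of the four quarters (the EIA’s own averaging may weight quarters differently, so this is only illustrative):

```python
# Annual average as the simple mean of the four quarterly Henry Hub forecasts.
def annual_avg(quarters):
    return sum(quarters) / len(quarters)

q_2026 = [3.38, 2.75, 3.42, 4.28]   # $/MMBtu, Q1-Q4 2026 from the STEO
q_2027 = [4.78, 4.30, 4.43, 4.84]   # $/MMBtu, Q1-Q4 2027

avg_2026 = annual_avg(q_2026)                   # ~3.46, matching the forecast
avg_2027 = annual_avg(q_2027)                   # ~4.59
change_2027 = (avg_2027 / avg_2026 - 1) * 100   # ~33% rise, as stated
```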

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
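For context on how the figures quoted above relate to each other, a small arithmetic sketch (variable names are ours; the gap between the two 2025 numbers largely reflects the different fiscal and calendar reporting windows):

```python
# Rough comparison of the capex figures quoted in the article (illustrative).
fiscal_2025_plan_bn = 80.0    # Smith's figure, fiscal year to June 30, 2025
calendar_2025_est_bn = 62.4   # Bloomberg Intelligence calendar-year estimate
capex_2020_bn = 17.6          # Microsoft's 2020 capital expenditure

growth_multiple = fiscal_2025_plan_bn / capex_2020_bn   # ~4.5x 2020 levels
gap_bn = fiscal_2025_plan_bn - calendar_2025_est_bn     # ~$17.6B between the two 2025 figures
```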

John Deere unveils more autonomous farm machines to address skilled labor shortage
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas, a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
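The “LLM as a judge” idea mentioned above can be sketched as an ensemble vote: several cheaper models each grade an agent’s output, and a majority decides whether to accept it. Everything below is a hypothetical illustration; the judge functions are stubs standing in for real model API calls:

```python
from collections import Counter

# Sketch of the LLM-as-judge pattern: each judge returns True (pass) or
# False (fail) for an output, and a simple majority vote decides.
def majority_verdict(output: str, judges) -> bool:
    votes = Counter(judge(output) for judge in judges)
    return votes[True] > votes[False]

# Stub judges standing in for calls to three different cheap models.
length_judge = lambda text: len(text) > 0
url_judge = lambda text: "http://" not in text   # e.g. flag hallucinated URLs
format_judge = lambda text: text.strip() == text

print(majority_verdict("The report is attached.",
                       [length_judge, url_judge, format_judge]))  # True
```

Using three or more judges, as the article suggests, avoids ties and dilutes any single model’s blind spots.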

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024.
Trio of buildings snapped up
London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030.
Aberdeen big deals
The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.
Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound ever more realistic by adopting local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware and data exfiltration. Sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration among these groups under a sophisticated profit-sharing, Ransomware-as-a-Service model.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

The first human test of a rejuvenation method will begin “shortly”
When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.” Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.” “ER-100?” Musk asked. “Yes” replied Sinclair.
ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off. “Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment. Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals. But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December. To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored. Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. 
“It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.” The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like those found in an early embryo that develop into the different specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells. But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.
That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is. In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.”

Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to. “Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email.

Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.”

Life Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem.
But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial. The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible.

But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.”

Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say. “I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio.

And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state.

Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve time reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now. “Are their factors the best version of rejuvenation? We don’t think they are.
I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”

OpenAI’s latest product lets you vibe code science
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.” OpenAI claims that around 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.
Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots. “I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”
Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.”

By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google.

Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a markup language that scientists commonly use to format their papers. A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want. It can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs.

It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery?

That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term.
“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”
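For readers who have never seen the format Prism’s editor targets: a LaTeX paper is plain text with markup, which is what makes it a natural fit for an LLM to generate and edit. A minimal, purely illustrative document looks something like this:

```latex
\documentclass{article}
\usepackage{amsmath}  % standard package for equation environments

\title{A Minimal Example}
\author{A. Researcher}

\begin{document}
\maketitle

\section{Introduction}
Inline math such as $E = mc^2$ mixes with prose, while displayed
equations are numbered automatically:
\begin{equation}
  \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\end{equation}

\end{document}
```

Because everything, including figures, citations, and equations, is expressed as text like this, a chatbot sitting next to the source can rewrite a proof sketch or convert a whiteboard photo into an `equation` block without leaving the editor.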

Stratospheric internet could finally start taking off this year
Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery. Even with nearly 10,000 active Starlink satellites in orbit and the OneWeb constellation of 650 satellites, solid internet coverage is not a given across vast swathes of the planet. One of the most prominent efforts to plug the connectivity gap was Google X’s Loon project. Launched in 2011, it aimed to deliver access using high-altitude balloons stationed above predetermined spots on Earth. But the project faced literal headwinds—the Loons kept drifting away and new ones had to be released constantly, making the venture economically unfeasible. Although Google shuttered the high-profile Loon in 2021, work on other kinds of high-altitude platform stations (HAPS) has continued behind the scenes. Now, several companies claim they have solved Loon’s problems with different designs—in particular, steerable airships and fixed-wing UAVs (unmanned aerial vehicles)—and are getting ready to prove the tech’s internet beaming potential starting this year, in tests above Japan and Indonesia.
Regulators, too, seem to be thinking seriously about HAPS. In mid-December, for example, the US Federal Aviation Administration released a 50-page document outlining how large numbers of HAPS could be integrated into American airspace. According to the US Census Bureau’s 2024 American Community Survey (ACS) data, some 8 million US households (4.5% of the population) still live completely offline, and HAPS proponents think the technology might get them connected more cheaply than alternatives. Despite the optimism of the companies involved, though, some analysts remain cautious.
“The HAPS market has been really slow and challenging to develop,” says Dallas Kasaboski, a space industry analyst at the consultancy Analysys Mason. After all, Kasaboski says, the approach has struggled before: “A few companies were very interested in it, very ambitious about it, and then it just didn’t happen.”

Beaming down connections

Hovering in the thin air at altitudes above 12 miles, HAPS have a unique vantage point to beam down low-latency, high-speed connectivity directly to smartphone users in places too remote and too sparsely populated to justify the cost of laying fiber-optic cables or building ground-based cellular base stations. “Mobile network operators have some commitment to provide coverage, but they frequently prefer to pay a fine than cover these remote areas,” says Pierre-Antoine Aubourg, chief technology officer of Aalto HAPS, a spinoff from the European aerospace manufacturer Airbus. “With HAPS, we make this remote connectivity case profitable.”

Aalto HAPS has built a solar-powered UAV with a 25-meter wingspan that has conducted many long-duration test flights in recent years. In April 2025 the craft, called Zephyr, broke a HAPS record by staying afloat for 67 consecutive days. The first months of 2026 will be busy for the company, according to Aubourg; Zephyr will do a test run over southern Japan to trial connectivity delivery to residents of some of the country’s smallest and most poorly connected inhabited islands.

Because of its unique geography, Japan is a perfect test bed for HAPS. Many of the country’s roughly 430 inhabited islands are remote, mountainous, and sparsely populated, making them too costly to connect with terrestrial cell towers. Aalto HAPS is partnering with Japan’s largest mobile network operators, NTT DOCOMO and the telecom satellite operator Space Compass, which want to use Zephyr as part of next-generation telecommunication infrastructure.
“Non-terrestrial networks have the potential to transform Japan’s communications ecosystem, addressing access to connectivity in hard-to-reach areas while supporting our country’s response to emergencies,” Shigehiro Hori, co-CEO of Space Compass, said in a statement.

Zephyr, Aubourg explains, will function like another cell tower in the NTT DOCOMO network, only it will be located well above the planet instead of on its surface. It will beam high-speed 5G connectivity to smartphone users without the need for the specialized terminals that are usually required to receive satellite internet. “For the user on the ground, there is no difference when they switch from the terrestrial network to the HAPS network,” Aubourg says. “It’s exactly the same frequency and the same network.”

New Mexico–based Sceye, which has developed a solar-powered helium-filled airship, is also eyeing Japan for pre-commercial trials of its stratospheric connectivity service this year. The firm, which extensively tested its slick 65-meter-long vehicle in 2025, is working with the Japanese telecommunications giant SoftBank. Just like NTT DOCOMO, SoftBank is betting on HAPS to take its networks to another level.
Mikkel Frandsen, Sceye’s founder and CEO, says that his firm succeeded where Loon failed by betting on the advantages offered by the more controllable airship shape, intelligent avionics, and innovative batteries that can power an electric fan to keep the aircraft in place. “Google’s Loon was groundbreaking, but they used a balloon form factor, and despite advanced algorithms—and the ability to change altitude to find desired wind directions and wind speeds—Loon’s system relied on favorable winds to stay over a target area, resulting in unpredictable station-seeking performance,” Frandsen says. “This required a large amount of balloons in the air to have relative certainty that one would stay over the area of operation, which was financially unviable.”

He adds that Sceye’s airship can “point into the wind” and more effectively maintain its position. “We have significant surface area, providing enough physical space to lift 250-plus kilograms and host solar panels and batteries,” he says, “allowing Sceye to maintain power through day-night cycles, and therefore staying over an area of operation while maintaining altitude.”

The persistent digital divide

Satellite internet currently comes at a price tag that can be too high for people in developing countries, says Kasaboski. For example, Starlink subscriptions start at $10 per month in Africa, but millions of people in these regions are surviving on a mere $2 a day. Frandsen and Aubourg both claim that HAPS can connect the world’s unconnected more cheaply. Because satellites in low Earth orbit circle the planet at very high speeds, they quickly disappear from a ground terminal’s view, meaning large quantities of those satellites are needed to provide continuous coverage. HAPS can hover, affording a constant view of a region, and more HAPS can be launched to meet higher demand.
“If you want to deliver connectivity with a low-Earth-orbit constellation into one place, you still need a complete constellation,” says Aubourg. “We can deliver connectivity with one aircraft to one location. And then we can tailor much more the size of the fleet according to the market coverage that we need.” Starlink gets a lot of attention, but satellite internet has some major drawbacks, says Frandsen. A big one is that its bandwidth gets diluted once the number of users in an area grows.
In a recent interview, Elon Musk, whose company SpaceX operates Starlink, compared the Starlink beams to a flashlight. Given the distance at which those satellites orbit the planet, the cone is wide, covering a large area. That’s okay when users are few and far between, but it can become a problem with higher densities of users. For example, Ukrainian defense technologists have said that Starlink bandwidth can drop on the front line to a mere 10 megabits per second, compared with the peak offering of 220 Mbps, when drones and ground robots are in heavy use. Users in Indonesia, which like Japan is an island nation, also began reporting problems with Starlink shortly after the service was introduced in the country in 2024. Again, bandwidth declined as the number of subscribers grew.
In fact, Frandsen says, Starlink’s performance is less than optimal once the number of users exceeds one person per square kilometer. And that can happen almost anywhere—even relatively isolated island communities can have hundreds or thousands of residents in a small area. “There is a relationship between the altitude and the population you can serve,” Frandsen says. “You can’t bring space closer to the surface of the planet. So the telco companies want to use the stratosphere so that they can get out to more rural populations than they could otherwise serve.” Starlink did not respond to our queries about these challenges.

Cheaper and faster

Sceye and Aalto HAPS see their stratospheric vehicles as part of integrated telecom networks that include both terrestrial cell towers and satellites. But they’re far from the only game in town. World Mobile, a telecommunications company headquartered in London, thinks its hydrogen-powered high-altitude UAV can compete directly with satellite mega-constellations. The company acquired the HAPS developer Stratospheric Platforms last year. This year, it plans to flight-test an innovative phased array antenna, which it claims will be able to deliver bandwidth of 200 megabits per second (enough to enable ultra-HD video streaming to 500,000 users at the same time over an area of 15,000 square kilometers—equivalent to the coverage of more than 500 terrestrial cell towers, the company says). Last year, World Mobile also signed a partnership with the Indonesian telecom operator Protelindo to build a prototype Stratomast aircraft, with tests scheduled to begin in late 2027.

Richard Deakin, CEO of World Mobile’s HAPS division World Mobile Stratospheric, says that just nine Stratomasts could supply Scotland’s 5.5 million residents with high-speed internet connectivity at a cost of £40 million ($54 million) per year. That’s equivalent to about 60 pence (80 cents) per person per month, he says.
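Deakin’s cost figure and the bandwidth-dilution complaint are both simple arithmetic, and they hold up as back-of-the-envelope estimates. The sketch below is illustrative only; the equal-share bandwidth model is a simplification, not operator data:

```python
# Back-of-the-envelope checks for the numbers quoted above.

# World Mobile claim: nine Stratomasts could serve Scotland's
# 5.5 million residents for £40 million per year.
annual_cost_gbp = 40_000_000
residents = 5_500_000
monthly_cost_per_person = annual_cost_gbp / residents / 12
print(f"£{monthly_cost_per_person:.2f} per person per month")  # ≈ £0.61

# Bandwidth dilution: if the active users in one coverage cell share
# its capacity roughly equally, per-user throughput falls as the
# number of simultaneous users grows.
cell_capacity_mbps = 220  # Starlink's quoted peak offering
for active_users in (1, 5, 22):
    per_user = cell_capacity_mbps / active_users
    print(f"{active_users} simultaneous users -> {per_user:.0f} Mbps each")
# Just 22 simultaneous heavy users already dilute per-user throughput
# to the ~10 Mbps reported from the Ukrainian front line.
```

The first figure matches Deakin’s “about 60 pence” claim; the second shows why even modest user densities matter when one wide satellite beam must serve everyone beneath it.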
Starlink subscriptions in the UK, of which Scotland is a part, cost £75 ($100) per month.

A troubled past

Companies working on HAPS also extol the convenience of prompt deployments in areas struck by war or natural disasters like Hurricane Maria in Puerto Rico, after which Loon played an important role. And they say that HAPS could make it possible for smaller nations to obtain complete control over their celestial internet-beaming infrastructure rather than relying on mega-constellations controlled by larger nations, a major boon at a time of rising geopolitical tensions and crumbling political alliances.
Analysts, however, remain cautious, projecting a HAPS market totaling a modest $1.9 billion by 2033. The satellite internet industry, on the other hand, is expected to be worth $33.44 billion by 2030, according to some estimates. The use of HAPS for internet delivery to remote locations has been explored since the 1990s, about as long as the concept of low-Earth-orbit mega-constellations. The seemingly more cost-effective stratospheric technology, however, lost to the space fleets thanks to the falling cost of space launches and ambitious investment by Musk’s SpaceX. Google wasn’t the only tech giant to explore the HAPS idea. Facebook also had a project, called Aquila, that was discontinued after it too faced technical difficulties. Although the current cohort of HAPS makers claim they have solved the challenges that killed their predecessors, Kasaboski warns that they’re playing a different game: catching up with now-established internet-beaming mega constellations. By the end of this year, it’ll be much clearer whether they stand a good chance of doing so.

The Download: OpenAI’s plans for science, and chatbot age verification
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside OpenAI’s big play for science

—Will Douglas Heaven

In the three years since ChatGPT’s explosive debut, OpenAI’s technology has upended a remarkable range of everyday activities at home, at work, and in schools. Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them.
So why now? How does a push into science fit with OpenAI’s wider mission? And what exactly is the firm hoping to achieve? I put these questions to Kevin Weil, a vice president at OpenAI who leads the new OpenAI for Science team, in an exclusive interview. Read the full story.
Why chatbots are starting to check your age

How do tech companies check if their users are kids? This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years Big Tech asked for birthdays (that one could make up) to avoid violating child privacy laws, but they weren’t required to moderate content accordingly. Now, two developments over the last week show how quickly things are changing in the US and how this issue is becoming a new battleground, even among parents and child-safety advocates. Read the full story.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

TR10: Commercial space stations

Humans have long dreamed of living among the stars, and for two decades hundreds of us have done so aboard the International Space Station (ISS). But a new era is about to begin in which private companies operate orbital outposts—with the promise of much greater access to space than before. The ISS is aging and is expected to be brought down from orbit into the ocean in 2031. To replace it, NASA has awarded more than $500 million to several companies to develop private space stations, while others have built versions on their own. Read why we made them one of our 10 Breakthrough Technologies this year, and check out the rest of the list.
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Tech workers are pressuring their bosses to condemn ICE
The biggest companies and their leaders have remained largely silent so far. (Axios)
+ Hundreds of employees have signed an anti-ICE letter. (NYT $)
+ Formerly politically-neutral online spaces have become battlegrounds. (WP $)

2 The US Department of Transportation plans to use AI to write new safety rules
Please don’t do this. (ProPublica)
+ Failure to catch any errors could lead to civilian deaths. (Ars Technica)

3 The FBI is investigating Minnesota Signal chats tracking federal agents
But free speech advocates claim the information is legally obtained. (NBC News)
+ A judge has ordered a briefing on whether Minnesota is being illegally punished. (Wired $)

4 TikTok users claim they’re unable to send “Epstein” in direct messages
But the company says it doesn’t know why. (NPR)
+ Users are also experiencing difficulty uploading anti-ICE videos. (CNN)
+ TikTok’s first weekend under US ownership hasn’t gone well. (The Verge)
+ Gavin Newsom wants to probe whether TikTok is censoring Trump-critical content. (Politico)

5 Grok is not safe for children or teens
That’s the finding of a new report digging into the chatbot’s safety measures. (TechCrunch)
+ The EU is investigating whether it disseminates illegal content, too. (Reuters)

6 The US is on the verge of losing its measles-free status
Following a year of extensive outbreaks. (Undark)
+ Measles is surging in the US. Wastewater tracking could help. (MIT Technology Review)

7 Georgia has become the latest US state to consider banning data centers
Joining Maryland and Oklahoma. (The Guardian)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)
8 The future of Saudi Arabia’s futuristic city is in peril
The Line was supposed to house 9 million people. Instead, it could become a data center hub. (FT $)
+ We got an exclusive first look at it back in 2022. (MIT Technology Review)

9 Where do Earth’s lighter elements go? 🌍
New research suggests they might be hiding deep inside its core. (Knowable Magazine)

10 AI-generated influencers are getting increasingly surreal
Featuring virtual conjoined twins, and triple-breasted women. (404 Media)
+ Why ‘nudifying’ tech is getting steadily more dangerous. (Wired $)
Quote of the day

“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”

—Anthropic CEO Dario Amodei sounds the alarm about what he sees as the imminent dangers of AI superintelligence in a new 38-page essay, Axios reports.

One more thing
Why one developer won’t quit fighting to connect the US’s grids

Michael Skelly hasn’t learned to take no for an answer. For much of the last 15 years, the energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest. But so far, he has little to show for the effort. Skelly has long argued that building such lines and linking together the nation’s grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business shut down in 2019, after halting two of its projects and selling off interests in three more. Skelly contends he was early, not wrong. And he has a point: markets and policymakers are increasingly coming around to his perspective. Read the full story.

—James Temple

Inside OpenAI’s big play for science
In the three years since ChatGPT’s explosive debut, OpenAI’s technology has upended a remarkable range of everyday activities at home, at work, in schools—anywhere people have a browser open or a phone out, which is everywhere. Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them. The last couple of months have seen a slew of social media posts and academic publications in which mathematicians, physicists, biologists, and others have described how LLMs (and OpenAI’s GPT-5 in particular) have helped them make a discovery or nudged them toward a solution they might otherwise have missed. In part, OpenAI for Science was set up to engage with this community. And yet OpenAI is also late to the party. Google DeepMind, the rival firm behind groundbreaking scientific models such as AlphaFold and AlphaEvolve, has had an AI-for-science team for years. (When I spoke to Google DeepMind’s CEO and cofounder Demis Hassabis in 2023 about that team, he told me: “This is the reason I started DeepMind … In fact, it’s why I’ve worked my whole career in AI.”)
So why now? How does a push into science fit with OpenAI’s wider mission? And what exactly is the firm hoping to achieve? I put these questions to Kevin Weil, a vice president at OpenAI who leads the new OpenAI for Science team, in an exclusive interview last week.
On mission

Weil is a product guy. He joined OpenAI a couple of years ago as chief product officer after being head of product at Twitter and Instagram. But he started out as a scientist. He got two-thirds of the way through a PhD in particle physics at Stanford University before ditching academia for the Silicon Valley dream. Weil is keen to highlight his pedigree: “I thought I was going to be a physics professor for the rest of my life,” he says. “I still read math books on vacation.”

Asked how OpenAI for Science fits with the firm’s existing lineup of white-collar productivity tools or the viral video app Sora, Weil recites the company mantra: “The mission of OpenAI is to try and build artificial general intelligence and, you know, make it beneficial for all of humanity.” Just imagine the future impact this technology could have on science, he says: new medicines, new materials, new devices. “Think about it helping us understand the nature of reality, helping us think through open problems. Maybe the biggest, most positive impact we’re going to see from AGI will actually be from its ability to accelerate science.” He adds: “With GPT-5, we saw that becoming possible.”

As Weil tells it, LLMs are now good enough to be useful scientific collaborators. They can spitball ideas, suggest novel directions to explore, and find fruitful parallels between new problems and old solutions published in obscure journals decades ago or in foreign languages.

That wasn’t the case a year or so ago. Since it announced its first so-called reasoning model—a type of LLM that can break down problems into multiple steps and work through them one by one—in December 2024, OpenAI has been pushing the envelope of what the technology can do.
Reasoning models have made LLMs far better at solving math and logic problems than they used to be. “You go back a few years and we were all collectively mind-blown that the models could get an 800 on the SAT,” says Weil. But soon LLMs were acing math competitions and solving graduate-level physics problems. Last year, OpenAI and Google DeepMind both announced that their LLMs had achieved gold-medal-level performance at the International Mathematical Olympiad, one of the toughest math contests in the world. “These models are no longer just better than 90% of grad students,” says Weil. “They’re really at the frontier of human abilities.”

That’s a huge claim, and it comes with caveats. Still, there’s no doubt that GPT-5, which includes a reasoning model, is a big improvement on GPT-4 when it comes to complicated problem-solving. Measured against an industry benchmark known as GPQA, which includes more than 400 multiple-choice questions that test PhD-level knowledge in biology, physics, and chemistry, GPT-4 scores 39%, well below the human-expert baseline of around 70%. According to OpenAI, GPT-5.2 (the latest update to the model, released in December) scores 92%.
Overhyped

The excitement is evident—and perhaps excessive. In October, senior figures at OpenAI, including Weil, boasted on X that GPT-5 had found solutions to several unsolved math problems. Mathematicians were quick to point out that in fact what GPT-5 appeared to have done was dig up existing solutions in old research papers, including at least one written in German. That was still useful, but it wasn’t the achievement OpenAI seemed to have claimed. Weil and his colleagues deleted their posts.

Now Weil is more careful. It is often enough to find answers that exist but have been forgotten, he says: “We collectively stand on the shoulders of giants, and if LLMs can kind of accumulate that knowledge so that we don’t spend time struggling on a problem that is already solved, that’s an acceleration all of its own.”

He plays down the idea that LLMs are about to come up with a game-changing new discovery. “I don’t think models are there yet,” he says. “Maybe they’ll get there. I’m optimistic that they will.” But, he insists, that’s not the mission: “Our mission is to accelerate science. And I don’t think the bar for the acceleration of science is, like, Einstein-level reimagining of an entire field.”

For Weil, the question is this: “Does science actually happen faster because scientists plus models can do much more, and do it more quickly, than scientists alone? I think we’re already seeing that.”

In November, OpenAI published a series of anecdotal case studies contributed by scientists, both inside and outside the company, that illustrated how they had used GPT-5 and how it had helped. “Most of the cases were scientists that were already using GPT-5 directly in their research and had come to us one way or another saying, ‘Look at what I’m able to do with these tools,’” says Weil.
The key things that GPT-5 seems to be good at are finding references and connections to existing work that scientists were not aware of, which sometimes sparks new ideas; helping scientists sketch mathematical proofs; and suggesting ways for scientists to test hypotheses in the lab. “GPT-5.2 has read substantially every paper written in the last 30 years,” says Weil. “And it understands not just the field that a particular scientist is working in; it can bring together analogies from other, unrelated fields.”
“That’s incredibly powerful,” he continues. “You can always find a human collaborator in an adjacent field, but it’s difficult to find, you know, a thousand collaborators in all thousand adjacent fields that might matter. And in addition to that, I can work with the model late at night—it doesn’t sleep—and I can ask it 10 things in parallel, which is kind of awkward to do to a human.”

Solving problems

Most of the scientists OpenAI reached out to back up Weil’s position.
Robert Scherrer, a professor of physics and astronomy at Vanderbilt University, only played around with ChatGPT for fun (“I used it to rewrite the theme song for Gilligan’s Island in the style of Beowulf, which it did very well,” he tells me) until his Vanderbilt colleague Alex Lupsasca, a fellow physicist who now works at OpenAI, told him that GPT-5 had helped solve a problem he’d been working on. Lupsasca gave Scherrer access to GPT-5 Pro, OpenAI’s $200-a-month premium subscription. “It managed to solve a problem that I and my graduate student could not solve despite working on it for several months,” says Scherrer. It’s not perfect, he says: “GPT-5 still makes dumb mistakes. Of course, I do too, but the mistakes GPT-5 makes are even dumber.” And yet it keeps getting better, he says: “If current trends continue—and that’s a big if—I suspect that all scientists will be using LLMs soon.”

Derya Unutmaz, a professor of biology at the Jackson Laboratory, a nonprofit research institute, uses GPT-5 to brainstorm ideas, summarize papers, and plan experiments in his work studying the immune system. In the case study he shared with OpenAI, Unutmaz used GPT-5 to analyze an old data set that his team had previously looked at. The model came up with fresh insights and interpretations. “LLMs are already essential for scientists,” he says. “When you can complete analysis of data sets that used to take months, not using them is not an option anymore.”

Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says he has been using LLMs in his research since the first version of ChatGPT came out.
Like Scherrer, he finds LLMs most useful when they highlight unexpected connections between his own work and existing results he did not know about. “I believe that LLMs are becoming an essential technical tool for scientists, much like computers and the internet did before,” he says. “I expect a long-term disadvantage for those who do not use them.” But he does not expect LLMs to make novel discoveries anytime soon. “I have seen very few genuinely fresh ideas or arguments that would be worth a publication on their own,” he says. “So far, they seem to mainly combine existing results, sometimes incorrectly, rather than produce genuinely new approaches.” I also contacted a handful of scientists who are not connected to OpenAI. Andy Cooper, a professor of chemistry at the University of Liverpool and director of the Leverhulme Research Centre for Functional Materials Design, is less enthusiastic. “We have not found, yet, that LLMs are fundamentally changing the way that science is done,” he says. “But our recent results suggest that they do have a place.”
Cooper is leading a project to develop a so-called AI scientist that can fully automate parts of the scientific workflow. He says that his team doesn’t use LLMs to come up with ideas. But the tech is starting to prove useful as part of a wider automated system where an LLM can help direct robots, for example. “My guess is that LLMs might stick more in robotic workflows, at least initially, because I’m not sure that people are ready to be told what to do by an LLM,” says Cooper. “I’m certainly not.”

Making errors

LLMs may be becoming more and more useful, but caution is still key. In December, Jonathan Oppenheim, a scientist who works on quantum mechanics, called out a mistake that had made its way into a scientific journal. “OpenAI leadership are promoting a paper in Physics Letters B where GPT-5 proposed the main idea—possibly the first peer-reviewed paper where an LLM generated the core contribution,” Oppenheim posted on X. “One small problem: GPT-5’s idea tests the wrong thing.” He continued: “GPT-5 was asked for a test that detects nonlinear theories. It provided a test that detects nonlocal ones. Related-sounding, but different. It’s like asking for a COVID test, and the LLM cheerfully hands you a test for chickenpox.” It is clear that a lot of scientists are finding innovative and intuitive ways to engage with LLMs. It is also clear that the technology makes mistakes that can be so subtle even experts miss them. Part of the problem is the way ChatGPT can flatter you into letting down your guard. As Oppenheim put it: “A core issue is that LLMs are being trained to validate the user, while science needs tools that challenge us.” In an extreme case, one individual (who was not a scientist) was persuaded by ChatGPT into thinking for months that he’d invented a new branch of mathematics. Of course, Weil is well aware of the problem of hallucination. But he insists that newer models are hallucinating less and less. 
Even so, focusing on hallucination might be missing the point, he says. “One of my teammates here, an ex math professor, said something that stuck with me,” says Weil. “He said: ‘When I’m doing research, if I’m bouncing ideas off a colleague, I’m wrong 90% of the time and that’s kind of the point. We’re both spitballing ideas and trying to find something that works.’” “That’s actually a desirable place to be,” says Weil. “If you say enough wrong things and then somebody stumbles on a grain of truth and then the other person seizes on it and says, ‘Oh, yeah, that’s not quite right, but what if we—’ You gradually kind of find your trail through the woods.” This is Weil’s core vision for OpenAI for Science. GPT-5 is good, but it is not an oracle. The value of this technology is in pointing people in new directions, not coming up with definitive answers, he says. In fact, one of the things OpenAI is now looking at is making GPT-5 dial down its confidence when it delivers a response. Instead of saying Here’s the answer, it might tell scientists: Here’s something to consider. “That’s actually something that we are spending a bunch of time on,” says Weil. “Trying to make sure that the model has some sort of epistemological humility.”

Watching the watchers

Another thing OpenAI is looking at is how to use GPT-5 to fact-check GPT-5. It’s often the case that if you feed one of GPT-5’s answers back into the model, it will pick it apart and highlight mistakes. “You can kind of hook the model up as its own critic,” says Weil. “Then you can get a workflow where the model is thinking and then it goes to another model, and if that model finds things that it could improve, then it passes it back to the original model and says, ‘Hey, wait a minute—this part wasn’t right, but this part was interesting. 
Keep it.’ It’s almost like a couple of agents working together and you only see the output once it passes the critic.” What Weil is describing also sounds a lot like what Google DeepMind did with AlphaEvolve, a tool that wrapped the firm’s LLM, Gemini, inside a wider system that filtered the good responses from the bad and fed them back in again to be improved on. Google DeepMind has used AlphaEvolve to solve several real-world problems. OpenAI faces stiff competition from rival firms, whose own LLMs can do most, if not all, of the things it claims for its own models. If that’s the case, why should scientists use GPT-5 instead of Gemini or Anthropic’s Claude, families of models that are themselves improving every year? Ultimately, OpenAI for Science may be as much an effort to plant a flag in new territory as anything else. The real innovations are still to come. “I think 2026 will be for science what 2025 was for software engineering,” says Weil. “At the beginning of 2025, if you were using AI to write most of your code, you were an early adopter. Whereas 12 months later, if you’re not using AI to write most of your code, you’re probably falling behind. We’re now seeing those same early flashes for science as we did for code.” He continues: “I think that in a year, if you’re a scientist and you’re not heavily using AI, you’ll be missing an opportunity to increase the quality and pace of your thinking.”
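The generator-critic workflow Weil describes can be sketched as a simple loop. Nothing below comes from OpenAI's actual system: the `generate` and `critique` functions are plain Python stand-ins for LLM calls, invented purely to illustrate the shape of the pattern, with a trivially checkable task (claimed squares of numbers) standing in for a scientific draft.

```python
# Toy sketch of a generator-critic loop: one "model" proposes a draft,
# a second pass critiques it, and the draft is revised until the critic
# finds nothing left to fix. Plain functions stand in for LLM calls.

def generate(task, feedback=None):
    """Stand-in for the generator model: propose or revise an answer."""
    draft = dict(task)  # start from the current draft
    if feedback:
        for key, corrected in feedback.items():
            draft[key] = corrected  # apply the critic's corrections
    return draft

def critique(draft):
    """Stand-in for the critic model: flag entries whose square is wrong."""
    return {n: n * n for n, claimed in draft.items() if n * n != claimed}

def critic_loop(task, max_rounds=5):
    """Run generate -> critique -> revise until the critic is satisfied."""
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:  # critic found nothing: release the output
            return draft
        draft = generate(draft, feedback)  # send it back for revision
    return draft

# A draft with two wrong "squares"; the loop repairs both before emitting.
result = critic_loop({2: 4, 3: 10, 5: 26})
print(result)  # {2: 4, 3: 9, 5: 25}
```

The design point the sketch captures is the one Weil makes: the user only sees output that has already passed the critic, and the rounds of wrong answers in between stay internal to the loop.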

Why chatbots are starting to check your age
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. How do tech companies check if their users are kids? This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years Big Tech asked for birthdays (which users could simply make up) to avoid violating child privacy laws, but companies weren’t required to moderate content accordingly. Two developments over the last week show how quickly things are changing in the US and how this issue is becoming a new battleground, even among parents and child-safety advocates. In one corner is the Republican Party, which has supported laws passed in several states that require sites with adult content to verify users’ ages. Critics say this provides cover to block anything deemed “harmful to minors,” which could include sex education. Other states, like California, are coming after AI companies with laws to protect kids who talk to chatbots (by requiring them to verify who’s a kid). Meanwhile, President Trump is attempting to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.
So what might happen? The debate is quickly moving away from whether age verification is necessary and toward who will be responsible for it. This responsibility is a hot potato that no company wants to hold. In a blog post last Tuesday, OpenAI revealed that it plans to roll out automatic age prediction. In short, the company will apply a model that uses factors like the time of day, among others, to predict whether a person chatting is under 18. For those identified as teens or children, ChatGPT will apply filters to “reduce exposure” to content like graphic violence or sexual role-play. YouTube launched something similar last year.
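OpenAI has not published how its age-prediction model works; the story only mentions that it combines factors like time of day. The toy below is invented for illustration only: every signal, weight, and threshold is an assumption, meant to show the general idea of combining weak behavioral signals into a single score.

```python
# Toy illustration of signal-based age prediction. OpenAI's real model is
# not public; the signals and weights here are made up for illustration.

def minor_likelihood(signals):
    """Combine weak behavioral signals into a score in [0, 1]."""
    score = 0.0
    # Activity concentrated in after-school hours nudges the score up.
    if 15 <= signals["typical_hour"] <= 18:
        score += 0.3
    # A self-reported birthday implying age under 18 is a strong signal.
    if signals["stated_age"] < 18:
        score += 0.5
    # A real system would fold in many more signals (account age, topics).
    if signals["mentions_school"]:
        score += 0.2
    return min(score, 1.0)

def classify(signals, threshold=0.5):
    """Label a user; borderline cases are treated as minors by default."""
    return "minor" if minor_likelihood(signals) >= threshold else "adult"

print(classify({"typical_hour": 16, "stated_age": 15, "mentions_school": True}))
# minor
```

Any classifier like this will misfire on some users, which is why the appeal path (selfie or government ID) matters as much as the model itself.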
If you support age verification but are concerned about privacy, this might sound like a win. But there’s a catch. The system is not perfect, of course, so it could classify a child as an adult or vice versa. People who are wrongly labeled under 18 can verify their identity by submitting a selfie or government ID to a company called Persona. Selfie verifications have issues: They fail more often for people of color and those with certain disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says the fact that Persona will need to hold millions of government IDs and masses of biometric data is another weak point. “When those get breached, we’ve exposed massive populations all at once,” he says. Hinduja instead advocates for device-level verification, where a parent specifies a child’s age when setting up the child’s phone for the first time. This information is then kept on the device and shared securely with apps and websites. That’s more or less what Tim Cook, the CEO of Apple, recently lobbied US lawmakers to call for. Cook was fighting lawmakers who wanted to require app stores to verify ages, which would saddle Apple with lots of liability. More signals of where this is all headed will come on Wednesday, when the Federal Trade Commission—the agency that would be responsible for enforcing these new laws—is holding an all-day workshop on age verification. Apple’s head of government affairs, Nick Rossi, will be there. He’ll be joined by higher-ups in child safety at Google and Meta, as well as a company that specializes in marketing to children. The FTC has become increasingly politicized under President Trump (his firing of the sole Democratic commissioner was struck down by a federal court, a decision that is now pending review by the US Supreme Court). In July, I wrote about signals that the agency is softening its stance toward AI companies. 
Indeed, in December, the FTC overturned a Biden-era ruling against an AI company that allowed people to flood the internet with fake product reviews, writing that it clashed with President Trump’s AI Action Plan. Wednesday’s workshop may shed light on how partisan the FTC’s approach to age verification will be. Red states favor laws that require porn websites to verify ages (but critics warn this could be used to block a much wider range of content). Bethany Soye, a Republican state representative who is leading an effort to pass such a bill in her state of South Dakota, is scheduled to speak at the FTC meeting. The ACLU generally opposes laws requiring IDs to visit websites and has instead advocated for an expansion of existing parental controls. While all this gets debated, though, AI has set the world of child safety on fire. We’re dealing with increased generation of child sexual abuse material, concerns (and lawsuits) about suicides and self-harm following chatbot conversations, and troubling evidence of kids’ forming attachments to AI companions. Colliding stances on privacy, politics, free expression, and surveillance will complicate any effort to find a solution. Write to me with your thoughts.

The first human test of a rejuvenation method will begin “shortly”
When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.” Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.” “ER-100?” Musk asked. “Yes” replied Sinclair.
ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off. “Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment. Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals. But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December. To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored. Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. 
“It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.” The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like those found in an early embryo that develop into the different specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells. But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.
That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is. In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.” Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to. “Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email. Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.” Life Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem. 
But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial. The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible. But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.” Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say. “I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio. And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state. Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve time reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now. “Are their factors the best version of rejuvenation? We don’t think they are. 
I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”

OpenAI’s latest product lets you vibe code science
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.” OpenAI claims that around 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.
Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots. “I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”
Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.” By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google DeepMind. Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a common coding language that scientists use for formatting scientific papers. A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want. It can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs. It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: Won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery? That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term. 
“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”

Mexico Shelves Planned Shipment of Oil to Cuba
Mexico’s state oil company backtracked on plans to send a much-needed shipment of crude oil to Cuba, a long-time ally of ousted Venezuelan leader Nicolas Maduro. Petroleos Mexicanos, which was expected to send a shipment this month, removed the cargo from its schedule, according to documents seen by Bloomberg. The shipment was set to load in mid-January and would have arrived in Cuba before the end of the month under the original schedule. Pemex and Mexico’s Energy Ministry didn’t immediately return a message seeking comment. While it’s unclear why the cargo was shelved, the removal comes as the administration of US President Donald Trump increases pressure on the Caribbean island. “THERE WILL BE NO MORE OIL OR MONEY GOING TO CUBA – ZERO! I strongly suggest they make a deal, BEFORE IT IS TOO LATE,” Trump said in a Truth Social post a week after Maduro’s capture by US forces. Before Trump’s comments on Cuba, President Claudia Sheinbaum had said Mexico planned to continue supplying oil to Cuba as part of humanitarian aid to the island, a country plagued by chronic power outages, food and fuel shortages. Mexico started sending oil to Cuba in 2023, when Venezuela reduced supplies amid its falling oil production. Pemex sent an average of one ship per month, or the equivalent of 20,000 barrels a day of crude oil last year, according to data compiled by Bloomberg. The canceled shipment was expected to load in mid-January on board the vessel Swift Galaxy, according to the document. It was removed from the schedule without an explanation.

Gauging the real impact of AI agents
That creates the primary network issue for AI agents, which is dealing with implicit and creeping data. There’s one important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data. The programming includes data identification. AI is implicit in its data use; the model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often true that when an agentic component is used, it turns out that additional data resources are needed. Are all these resources in the same place? Probably not. The enterprises with the most experience with AI agents say it would be smart to expect some data center network upgrades to link agents to databases, and if the agents are distributed away from the data center, it may be necessary to improve the agent sites’ connection to the corporate VPN. As agents evolve into real-time applications, this requires that they also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull at the source of hosting to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. Right now, these tend to exist within a fairly small circle—a plant, a campus, perhaps a city or town—but over time, key enterprises say that their new-service interest could span a metro area. They point out that the real-time edge applications

Baker Hughes Sees Record Year for Industrial, Energy Tech Bookings
Baker Hughes Co has reported record orders of $14.87 billion from its industrial and energy technology (IET) business for 2025, including $4.02 billion for the fourth quarter. “IET achieved a record backlog of $32.4 billion at year-end, and book-to-bill exceeded 1x”, chair and chief executive Lorenzo Simonelli said in an online statement. “For the second consecutive year, non-LNG equipment orders represented approximately 85 percent of total IET orders, which highlights the end-market diversity and versatility of our IET portfolio”. IET delivered $3.81 billion in revenue for October-December 2025, up 13 percent from the prior quarter and nine percent year-on-year. “The increase was driven by gas technology equipment, up $189 million, or 11 percent year-over-year, [and] gas technology services, up $86 million, or 11 percent year-over-year”, Baker Hughes said. Q4 2025 IET orders totaled $4.02 billion, down three percent against the prior three-month period but up seven percent compared to Q4 2024. “The [year-over-year] increase was driven by continued strength in climate technology solutions, industrial technology, and gas technology services”, the Houston, Texas-based company said. Segment EBITDA came at $761 million, up 20 percent sequentially and 19 percent year-on-year. “The year-over-year increase in EBITDA was driven by productivity, volume, price and FX [foreign exchange], partially offset by inflation”, Baker Hughes said. Its other segment, oilfield services and equipment (OFSE), logged $3.57 billion in revenue for Q4 2025, down two percent quarter-on-quarter and eight percent year-on-year. That was driven by declines in its main markets, North America and the Middle East/Asia, with both regions registering quarter-on-quarter and year-on-year drops in revenue. OFSE orders in Q4 2025 totaled $3.86 billion, down five percent quarter-on-quarter but up three percent year-on-year. 
OFSE EBITDA landed at $647 million, down four percent quarter-on-quarter and 14 percent year-on-year. IET “more than offset continued macro‑driven softness in OFSE, where margins remained resilient

Analysts Explain Tuesday’s USA NatGas Price Drop
In separate exclusive interviews with Rigzone on Tuesday, Phil Flynn, a senior market analyst at the PRICE Futures Group, and Art Hogan, Chief Market Strategist at B. Riley Wealth, explained today’s U.S. natural gas price drop. “Natural gas is pulling back after the worst of the cold has passed,” Flynn told Rigzone. “We’ve lifted some of the winter storm warnings, and this should allow some of the freeze-offs in the basins to get production back up,” he added. “We saw [a] significant drop in production because of the cold weather and now some of that will be coming back online,” he continued. In his interview with Rigzone, Flynn warned that the weather is still going to be “key”. “Some forecasters are predicting a warm-up, but then after that another blast of the cold,” he said. “If that’s the case … these huge moves in natural gas may be far from over”, Flynn told Rigzone. He added, however, that, “at least in the short term, [a] return to more moderate temperatures from what we had experienced should allow for the market to recover as far as production goes, and exports”. When he was asked to explain the U.S. natural gas price drop today, Hogan told Rigzone that “trees don’t grow to the sky”. “U.S. natural gas prices dipped today amid profit-taking by traders, after soaring by over 117 percent in the five days to Monday,” he said. “The benchmark jumped by 30 percent on Monday alone. Last week, gas prices went up by as much as 70 percent amid frigid weather that apparently took gas traders by surprise,” he added. “This surprise led to frantic short-covering and position exits at a hefty loss. Currently, natural gas is trading at over $6.60 per million British thermal units [MMBtu], which is the highest in
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.