Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Shape
Discover What Matter Most to You

Featured Articles

Cisco fixes critical IMC auth bypass present in many products

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk. The Cisco IMC is a baseboard management controller (BMC), a dedicated controller embedded into server motherboards with its own RAM and network interface that gives administrators monitoring and management capabilities as if they were physically connected to the server with a keyboard, monitor, and mouse (KVM). Because BMCs run their own firmware independently of the OS, they can be used to perform operations even when the OS is shut down, including reinstalling it.

Read More »

Kyndryl service targets AI agent automation, security

Understand agents, serving as a single source of truth to help mitigate the risks associated with shadow AI. Validate each agent before launch by testing for security, resilience, and policy compliance to ensure they meet your standards before going live. Maintain control with real-time guardrails that keep agents operating within approved boundaries. Security testing, validation, and threat modeling should be incorporated into development pipelines, Kyndryl stated. “Additionally, runtime protections such as anomaly detection, guardian agents, and rapid isolation capabilities can help contain incidents before they escalate. By making security and governance foundational rather than treating them as afterthoughts, organizations can confidently scale agentic AI, knowing that risks are proactively managed, and trust is maintained with customers, partners, and regulators,” Kyndryl stated. The new service is just one of the platforms the vendor offers to manage AI agents. Last year Kyndral introduced its Agentic AI Framework. That package offers an orchestration system built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. Specialized agents are deployed to gather IT information, such as data analysis, compliance checks, incident response or service desk ticket resolution. Over time, agents learn from data and outcomes to improve decision-making and adapt workflows autonomously, and an orchestration engine parses that data to let enterprise systems adjust to changing conditions in real time, Kyndryl stated. The platform defines what actions agents can and cannot do, basically setting policy across the enterprise.

Read More »

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, but doubts emerged quickly about its efficacy. Developers found DeepSeek’s efficiency gains required deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning. You just drop it straight into existing inference pipelines, at least in theory. If it works in production systems with no retrofitting, then data center operators will get tremendous performance gains on existing hardware. Data center operators won’t have to throw hardware at the performance problem. However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Also, Dell’Oro notes that efficiency gains in AI compute tend to get consumed by more demand, known as the Jevons paradox. “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.” Jim Handy, president of Objective Analysis, agrees on that second part. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.” Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro running from April 23 through April 27.

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export up to an additional 28.25 (Bcf/yr) to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 billion cubic feet per year (Bcf/yr) of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately.  “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office, Kyle Haustveit. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.”  The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Why can’t we have nice routers anymore?

In the Volt Typhoon and Flax Typhoon attacks, the routers themselves weren’t compromised because they were foreign-made routers. Far from it! They were compromised because they were unpatched, Internet-exposed, and end-of-life. The router manufacturers were no more guilty of opening the doors to these attacks than Microsoft is for your company’s Windows 7 PCs being hacked in 2026. Only the Salt Typhoon assault on Cisco IOS XE software, which was running on enterprise-grade routers—specifically, ASR 1000 Series, ISR 4000 Series, and Catalyst 8000 Series edge platforms—can be linked directly to Chinese-made routers. Guess what, though? You can still buy, use, and deploy this Cisco hardware, which is used as core routers by top American telecoms such as AT&T, Verizon, and T-Mobile. Uncle Joe wants to replace his router with a brand-new Wi-Fi 7 model router? Nope, he can’t do it. Multi-billion-dollar companies decide to replace vital infrastructure routers that carry billions of messages every day? Sure, go for it! You know, if it were me, I’d be taking a long, hard look at the actual modern enterprise networking gear that we know has been breached. Why isn’t the FCC doing this? Darned if I know. Even the FCC acknowledges that some of Cisco’s problems have nothing to do with who made the hardware and where it was built. For example, the truly awful CVE-2023-20198 vulnerability, with its CVSS score of 10, was all about a boneheaded security hole in Cisco IOS XE Web UI, not the firmware or hardware. The FCC argues, however, that consumer routers pose unique risks because they’re deployed in millions of homes with minimal security oversight, thus making them ideal for botnet infrastructure. I can’t argue with that. But that has nothing to do with who made these devices and where.

Read More »

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

Amazon was contacted for comment on the latest Bahrain drone incident, but said it had nothing to add beyond the statement in its current advisory. Denial of infrastructure Doing the damage is the Shaheed 136, a small and unsophisticated drone designed to overwhelm defenders with numbers. If only one in twenty reaches its target, the price-performance still exceeds that of more expensive systems. When aimed at critical infrastructure such as datacenters, the effect is also psychological; the threat of an attack on its own can be enough to make it difficult for organizations to continue using an at-risk facility.  Iran’s targeting of the Bahrain datacenter is unlikely to be random. Amazon opened its ME-SOUTH-1 AWS presence in 2019, and it is still believed to be the company’s largest site in the Middle East. Earlier this week, the Islamic Revolutionary Guard Corps (IRGC) Telegram channel explicitly threatened to target at least 18 US companies operating in the region, including Microsoft, Google, Nvidia, and Apple. This follows similar threats to an even longer list of US companies made on the IRGC-affiliated Tasnim News Agency in recent weeks. That strategy doesn’t bode well for US companies that have made large investments in Middle Eastern datacenter infrastructure in recent years, drawn by the growing wealth and influence of countries in the region. This includes Amazon, which has announced plans to build a $5.3 billion datacenter in Saudi Arabia, due to become available in 2026. If this is now under threat, whether by warfare or the hypothetical possibility of attack, that will create uncertainty.

Read More »

Cisco fixes critical IMC auth bypass present in many products

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk. The Cisco IMC is a baseboard management controller (BMC), a dedicated controller embedded into server motherboards with its own RAM and network interface that gives administrators monitoring and management capabilities as if they were physically connected to the server with a keyboard, monitor, and mouse (KVM). Because BMCs run their own firmware independently of the OS, they can be used to perform operations even when the OS is shut down, including reinstalling it.

Read More »

Kyndryl service targets AI agent automation, security

Understand agents, serving as a single source of truth to help mitigate the risks associated with shadow AI. Validate each agent before launch by testing for security, resilience, and policy compliance to ensure they meet your standards before going live. Maintain control with real-time guardrails that keep agents operating within approved boundaries. Security testing, validation, and threat modeling should be incorporated into development pipelines, Kyndryl stated. “Additionally, runtime protections such as anomaly detection, guardian agents, and rapid isolation capabilities can help contain incidents before they escalate. By making security and governance foundational rather than treating them as afterthoughts, organizations can confidently scale agentic AI, knowing that risks are proactively managed, and trust is maintained with customers, partners, and regulators,” Kyndryl stated. The new service is just one of the platforms the vendor offers to manage AI agents. Last year Kyndral introduced its Agentic AI Framework. That package offers an orchestration system built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. Specialized agents are deployed to gather IT information, such as data analysis, compliance checks, incident response or service desk ticket resolution. Over time, agents learn from data and outcomes to improve decision-making and adapt workflows autonomously, and an orchestration engine parses that data to let enterprise systems adjust to changing conditions in real time, Kyndryl stated. The platform defines what actions agents can and cannot do, basically setting policy across the enterprise.

Read More »

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, but doubts emerged quickly about its efficacy. Developers found DeepSeek’s efficiency gains required deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning. You just drop it straight into existing inference pipelines, at least in theory. If it works in production systems with no retrofitting, then data center operators will get tremendous performance gains on existing hardware. Data center operators won’t have to throw hardware at the performance problem. However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Also, Dell’Oro notes that efficiency gains in AI compute tend to get consumed by more demand, known as the Jevons paradox. “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.” Jim Handy, president of Objective Analysis, agrees on that second part. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.” Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro running from April 23 through April 27.

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export up to an additional 28.25 (Bcf/yr) to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 billion cubic feet per year (Bcf/yr) of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately.  “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office, Kyle Haustveit. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.”  The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Why can’t we have nice routers anymore?

In the Volt Typhoon and Flax Typhoon attacks, the routers themselves weren’t compromised because they were foreign-made routers. Far from it! They were compromised because they were unpatched, Internet-exposed, and end-of-life. The router manufacturers were no more guilty of opening the doors to these attacks than Microsoft is for your company’s Windows 7 PCs being hacked in 2026. Only the Salt Typhoon assault on Cisco IOS XE software, which was running on enterprise-grade routers—specifically, ASR 1000 Series, ISR 4000 Series, and Catalyst 8000 Series edge platforms—can be linked directly to Chinese-made routers. Guess what, though? You can still buy, use, and deploy this Cisco hardware, which is used as core routers by top American telecoms such as AT&T, Verizon, and T-Mobile. Uncle Joe wants to replace his router with a brand-new Wi-Fi 7 model router? Nope, he can’t do it. Multi-billion-dollar companies decide to replace vital infrastructure routers that carry billions of messages every day? Sure, go for it! You know, if it were me, I’d be taking a long, hard look at the actual modern enterprise networking gear that we know has been breached. Why isn’t the FCC doing this? Darned if I know. Even the FCC acknowledges that some of Cisco’s problems have nothing to do with who made the hardware and where it was built. For example, the truly awful CVE-2023-20198 vulnerability, with its CVSS score of 10, was all about a boneheaded security hole in Cisco IOS XE Web UI, not the firmware or hardware. The FCC argues, however, that consumer routers pose unique risks because they’re deployed in millions of homes with minimal security oversight, thus making them ideal for botnet infrastructure. I can’t argue with that. But that has nothing to do with who made these devices and where.

Read More »

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

Amazon was contacted for comment on the latest Bahrain drone incident, but said it had nothing to add beyond the statement in its current advisory. Denial of infrastructure Doing the damage is the Shaheed 136, a small and unsophisticated drone designed to overwhelm defenders with numbers. If only one in twenty reaches its target, the price-performance still exceeds that of more expensive systems. When aimed at critical infrastructure such as datacenters, the effect is also psychological; the threat of an attack on its own can be enough to make it difficult for organizations to continue using an at-risk facility.  Iran’s targeting of the Bahrain datacenter is unlikely to be random. Amazon opened its ME-SOUTH-1 AWS presence in 2019, and it is still believed to be the company’s largest site in the Middle East. Earlier this week, the Islamic Revolutionary Guard Corps (IRGC) Telegram channel explicitly threatened to target at least 18 US companies operating in the region, including Microsoft, Google, Nvidia, and Apple. This follows similar threats to an even longer list of US companies made on the IRGC-affiliated Tasnim News Agency in recent weeks. That strategy doesn’t bode well for US companies that have made large investments in Middle Eastern datacenter infrastructure in recent years, drawn by the growing wealth and influence of countries in the region. This includes Amazon, which has announced plans to build a $5.3 billion datacenter in Saudi Arabia, due to become available in 2026. If this is now under threat, whether by warfare or the hypothetical possibility of attack, that will create uncertainty.

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export up to an additional 28.25 (Bcf/yr) to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 billion cubic feet per year (Bcf/yr) of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately.  “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office, Kyle Haustveit. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.”  The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Energy Department Initiates Additional Strategic Petroleum Reserve Emergency Exchange to Stabilize Global Oil Supply

WASHINGTON—The U.S. Department of Energy (DOE) issued a Request for Proposal (RFP) today for an emergency exchange of 10-million-barrels from the Strategic Petroleum Reserve (SPR). This action is part of the coordinated release of 400-million-barrels from IEA member nations’ strategic reserves President Trump previously announced. The United States continues to deliver on its 172-million-barrel release commitment.  The crude oil will originate from the Strategic Petroleum Reserve’s (SPR) Bryan Mound site. Today’s action builds on the initial phase of the Emergency Exchange, which moved quickly to award 45.2 million barrels from the Bayou Choctaw, Bryan Mound, and West Hackberry SPR sites. The 10-million-barrel exchange leverages the full capabilities of the SPR, alongside the President’s limited Jones Act waiver, to accelerate critical near-term oil flows into the market.  “Today’s action furthers the United States’ efforts to move oil quickly to the market and mitigate short-term supply disruptions,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office Kyle Haustveit. “Thanks to President Trump, America is managing our national security assets responsibly again. Through this exchange, we will continue to refill the Strategic Petroleum Reserve by bringing additional barrels back at a later date through this pragmatic exchange structure, strengthening its long-term readiness and all at no cost to the American taxpayer.”  Under DOE’s exchange authority, participating companies will return the borrowed 10 million barrels with additional premium barrels by next year. This exchange delivers immediate crude to refiners and the market while generating additional barrels for the American people at no cost to taxpayers.   Bids for the solicitation are due no later than 11:00 A.M. CT on Monday, April 6, 2026.    For more information on the SPR, please visit DOE’s website.   

Read More »

Trump Administration Keeps Colorado Coal Plant Open to Ensure Affordable, Reliable and Secure Power in Colorado

WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to keep a Colorado coal plant operational to ensure Americans maintain access to affordable, reliable and secure electricity. The order directs Tri-State Generation and Transmission Association (Tri-State), Platte River Power Authority, Salt River Project, PacifiCorp, and Public Service Company of Colorado (Xcel Energy), in coordination with the Western Area Power Administration (WAPA) Rocky Mountain Region and Southwest Power Pool (SPP), to take all measures necessary to ensure that Unit 1 at the Craig Station in Craig, Colorado is available to operate. Unit One of the coal plant was scheduled to shut down at the end of 2025 but on December 30, 2025, Secretary Wright issued an emergency order directing Tri-State and the co-owners to ensure that Unit 1 at the Craig Station remains available to operate. “The last administration’s energy subtraction policies threatened America’s energy security and positioned our nation to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. In 2025, more than 17 gigawatts (GW) of coal-power electricity generation were saved. On April 1, once Tri-State and the WAPA Rocky Mountain Region join the SPP RTO West expansion, SPP is directed to take every step to employ economic dispatch to minimize costs to ratepayers. According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable

Read More »

NextDecade contractor Bechtel awards ABB more Rio Grande LNG automation work

NextDecade Corp. contractor Bechtel Corp. has awarded ABB Ltd. additional integrated automation and electrical solution orders, extending its scope to Trains 4 and 5 of NextDecade’s 30-million tonne/year (tpy)  Rio Grande LNG (RGLNG) plant in Brownsville, Tex. The orders were booked in third- and fourth-quarters 2025 and build on ABB’s Phase 1 work with Trains 1-3, totaling 17 million tpy.  The scope for RGLNG Trains 4 and 5 includes deployment of an integrated control and safety system consisting of a distributed control system, emergency shutdown, and fire and gas systems. An electrical controls and monitoring system will provide unified visibility of the plant’s electrical infrastructure. These two overarching solutions will provide a common automation platform. ABB will also supply medium-voltage drives, synchronous motors, transformers, motor controllers and switchgear.  The orders also include local equipment buildings—two for Train 4 and one for Train 5— housing critical control and electrical systems in prefabricated modules to streamline installation and commissioning on site. The solutions being delivered to Bechtel use ABB adaptive execution, a methodology for capital projects designed to optimize engineering work and reduce delivery timelines. Phase 1 of RGLNG is under construction and expected to begin operations in 2027. Operations at Train 4 are expected in 2030 and Train 5 in 2031. ABB’s senior vice-president for the Americas, Scott McCay, confirmed to Oil & Gas Journal at CERAWeek by S&P Global in Houston that the company is doing similar work through Tecnimont for Argent LNG’s planned 25-million tpy plant in Port Fourchon, La.; 10-million tpy Phase 1 and 15-million tpy Phase 2. Argent is targeting 2030 completion for its plant.

Read More »

Persistent oil flow imbalances drive Enverus to increase crude price forecast

Citing impacts from the Iran war, near-zero flows through the Strait of Hormuz, accelerating global stock draws, and expectations for a muted US production response despite higher prices, Enverus Intelligence Research (EIR) raised its Brent crude oil price forecast. EIR now expects Brent to average $95/bbl for the remainder of 2026 and $100/bbl in 2027, reflecting what it described as a persistent global oil flow imbalance that continues to draw down inventories. “The world has an oil flow problem that is draining stocks,” said Al Salazar, director of research at EIR. “Whenever that oil flow problem is resolved, the world is left with low stocks. That’s what drives our oil price outlook higher for longer.” The outlook assumes the Strait of Hormuz remains largely closed for 3 months. EIR estimates that each month of constrained flows shifts the price outlook by about $10–15/bbl, underscoring the scale of the disruption and uncertainty around its duration. Despite West Texas Intermediate (WTI) prices of $90–100/bbl, EIR does not expect US producers to materially increase output. The firm forecasts US liquids production growth of 370,000 b/d by end-2026 and 580,000 b/d by end-2027, citing drilling-to-production lags, industry consolidation, and continued capital discipline. Global oil demand growth for 2026 has been reduced to about 500,000 b/d from 1.0 million b/d as higher energy prices and anticipated supply disruptions weigh on economic activity. Cumulative global oil stock draws are estimated at roughly 1 billion bbl through 2027, with non-OECD inventories—particularly in Asia—absorbing nearly half of the impact. A 60-day Jones Act waiver may provide limited short-term US shipping flexibility, but EIR said the measure is unlikely to materially affect global oil prices given broader market forces.

Read More »

Equinor begins drilling $9-billion natural gas development project offshore Brazil

Equinor has started drilling the Raia natural gas project in the Campos basin presalt offshore Brazil. The $9-billion project is Equinor’s largest international investment, its largest project under execution, and marks the deepest water depth operation in its portfolio. The drilling campaign, which began Mar. 24 with the Valaris DS‑17 drillship, includes six wells in the Raia area 200 km offshore in water depths of around 2,900 m. The area is expected to hold recoverable natural gas and condensate reserves of over 1 billion boe. Raia’s development concept is based on production through wells connected to a 126,000-b/d floating production, storage and offloading unit (FPSO), which will treat produced oil/condensate and gas. Natural gas will be transported through a 200‑km pipeline from the FPSO to Cabiúnas, in the city of Macaé, Rio de Janeiro state. Once in operation, expected in 2028, the project will have the capacity to export up to 16 million cu m/day of natural gas, which could represent 15% of Brazil’s natural gas demand, the company said in a release Mar. 24. “While drilling takes place, integration and commissioning activities on the FPSO are progressing well putting us on track towards a safe start of operations in 2028,” said Geir Tungesvik, executive vice-president, projects, drilling and procurement, Equinor. The Raia project is operated by Equinor (35%), in partnership with Repsol Sinopec Brasil (35%) and Petrobras (30%).

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBREThe Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029 blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © ShutterstockAberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices; which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks; preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025 ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups, specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. ● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

Gemma 4: Byte for byte, the most capable open models

At the edge, our E2B and E4B models redefine on-device utility, prioritizing multimodal capabilities, low-latency processing and seamless ecosystem integration over raw parameter count.Powerful, accessible, openTo power the next generation of pioneering research and products, we’ve sized the Gemma 4 models specifically to run and fine-tune efficiently on hardware — from billions of Android devices worldwide, to laptop GPUs, all the way up to developer workstations and accelerators.By using these highly optimized models, you can fine-tune Gemma 4 to achieve state-of-the-art performance on your specific tasks. We’ve already seen incredible success with this approach; for instance, INSAIT created a pioneering Bulgarian-first language model (BgGPT), and we worked with Yale University on Cell2Sentence-Scale to discover new pathways for cancer therapy, among many others.Here is what makes Gemma 4 our most capable open model family yet:Advanced reasoning: Capable of multi-step planning and deep logic, Gemma 4 demonstrates significant improvements in math and instruction-following benchmarks that require it.Agentic workflows: Native support for function-calling, structured JSON output, and native system instructions enables you to build autonomous agents that can interact with different tools and APIs and execute workflows reliably.Code generation: Gemma 4 supports high-quality offline code, turning your workstation into a local-first AI code assistant.Vision and audio: All models natively process video and images, supporting variable resolutions, and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.Longer context: Process long-form content seamlessly. The edge models feature a 128K context window, while the larger models offer up to 256K, allowing you to pass repositories or long documents in a single prompt.140+ languages: Natively trained on over 140 languages, Gemma 4 helps developers build inclusive, high-performance applications for a global audience.Versatile models for diverse hardwareWe are releasing the Gemma 4 model weights in sizes tailored for specific hardware and use cases, ensuring you get frontier-class reasoning wherever you need it:26B and 31B models: Frontier intelligence, offline on your personal computersOptimized to provide researchers and developers with state-of-the-art reasoning on accessible hardware, our unquantized bfloat16 weights fit efficiently on a single 80GB NVIDIA H100 GPU. For local setups, quantized versions run natively on consumer GPUs to power your IDEs, coding assistants and agentic workflows. Our 26B Mixture of Experts (MoE) focus on latency, activating only 3.8 billion of its total parameters during inference to deliver exceptionally fast tokens-per-second, while our 31B Dense is maximizing raw quality and provides a powerful foundation for fine-tuning.

Read More »

The Download: plastic’s problem with fuel prices, and SpaceX’s blockbuster IPO

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Fuel prices are soaring. Plastic could be next.  As the war in Iran continues, one of the most visible global economic ripple effects has been fossil-fuel prices. But looking ahead, further consequences could be looming for plastics.  Plastics are made from petrochemicals, and the supply chain impacts from the conflict are starting to build up. Americans will likely feel the ripples.   Read the full story to grasp the unpredictable impacts. 
—Casey Crownhart  This story is from The Spark, our weekly climate newsletter. Sign up to get it in your inbox every Wednesday. 
The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 SpaceX has filed for an IPO It’s set to be the largest ever, targeting a $1.75 trillion valuation. (NYT $)  + Which would make Elon Musk the world’s first trillionaire. (Al Jazeera) + But the IPO could hinge on the success of Moon missions. (LA Times $) + And the conflicts of interest are staggering. (The Next Web) + Meanwhile, rivals are rising to challenge SpaceX. (MIT Technology Review)   2 Artemis II is on its way to the Moon NASA successfully launched the four astronauts on its rocket yesterday. (Axios) + The lunar plans could violate international law. (The Verge) + But the potential scientific advances are tremendous. (Nature)  + Check out our roundtable on the next era of space exploration. (MIT Technology Review)   3 Iran has struck Amazon’s cloud business in Bahrain again It promised to hit US companies only yesterday. (FT $) + Other targets include Google, Microsoft, Apple, and Nvidia. (CNBC) + AWS data centers in Bahrain were also hit last month. (Reuters $)  4 OpenAI was secretly behind a child safety campaign group It pushed for age verification requirements for AI. (The San Francisco Standard $) + OpenAI had backed the legislation as a compromise measure. (WSJ $) + Coincidentally, Sam Altman heads a company providing age verification. (Engadget)  5 Anthropic is scrambling to limit the Claude Code leak It’s trying to remove 8,000 copies of the exposed code from GitHub. (Gizmodo) + An executive blamed the leak on “process errors.” (Bloomberg $) + Here’s what it reveals about Anthropic’s plans. (Ars Technica) + AI is making online crimes easier—and it could get much worse. (MIT Technology Review)  6 A new Russian “super-app” aims to emulate China’s WeChat And give the Kremlin new surveillance powers. (WSJ $) 

7 America’s AI boom is leaving the rest of the world behind  And it’s concentrating power and wealth in a handful of companies. (Rest of World)  8 Chinese chipmakers have claimed nearly half the country’s market Nvidia’s lead is shrinking rapidly. (Reuters $)  9 The first quantum computer to break encryption is imminent  New research reveals how it could happen. (New Scientist)  10 The world’s oldest tortoise has been embroiled in a crypto scam Reports that Jonathan died at just 194 years old are thankfully false. (Guardian)  Quote of the day  “Starlink is the only reason this valuation is defensible.”  —Shay Boloor, chief market strategist at Futurum Equities, tells Reuters why SpaceX has such high hopes for its IPO.  One More Thing  These companies are creating food out of thin air  Dried cells—it’s what’s for dinner. At least that’s what a new crop of biotech startups, armed with carbon-guzzling bacteria and plenty of capital, are hoping to convince us.  
Their claims sound too good to be true: they say they can make food out of thin air. But that’s exactly how certain soil-dwelling bacteria work.  Startups are replicating the process to turn abundant carbon dioxide into nutritious “air protein.” They believe it could dramatically lower farming emissions—and even disrupt agriculture altogether. Read the full story. 
—Claire L. Evans  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + Need more Artemis II in your life? This site takes you inside the flight. + Here’s a fascinating look at the recording errors that improved songs. + Good news: the elusive Nightjar bird is making a comeback. + Finally, a master chef has baked clam chowder donuts. 

Read More »

Fuel prices are soaring. Plastic could be next.

As the war in Iran continues to engulf the Middle East and the Strait of Hormuz stays closed, one of the most visible global economic ripple effects has been fossil-fuel prices. In particular, you can’t get away from news about the price of gasoline, which just topped an average of $4 a gallon in the US, its highest level since 2022. But looking ahead, further consequences for the global economy could be looming in plastics. Plastics are made using petrochemicals, and the supply chain impacts of the oil bottleneck near Iran are starting to build up.  Plastic production accounts for roughly 5% of global carbon dioxide emissions today. And our current moment shows just how embedded oil and gas products are in our lives. It goes far beyond their use for energy.  As I write this, I’m wearing clothes that contain plastic fibers, typing on a plastic keyboard, and looking through the plastic lenses of my glasses. It’s hard to imagine what our world looks like without plastic. And in some ways, moving away from fossil-derived plastic could prove even more complicated than decarbonizing our energy system. 
Crude oil prices have been on a roller-coaster in recent weeks, and prices have recently topped $100 a barrel. Crude oil contains a huge range of hydrocarbons, and it’s typically refined by putting it through a distillation unit that separates the raw material into different fractions according to their boiling point. Those fractions then go on to be further processed into everything from jet fuel to asphalt binder. We’ve already seen the price spikes for some materials pulled out of crude oil, like gasoline and jet fuel.
Let’s zoom in on another component, naphtha. It can be added to gasoline and jet fuel to improve performance. It can also be used as a solvent or as a raw material to make plastics. The Middle East currently accounts for about 20% of global naphtha production­ and supplies about 40% of the market in Asia, where prices are already up by 50% over the last month. We’re starting to see these effects trickle down already. The price of polypropylene (which is made from naphtha and used for food containers, bottle caps, and even automotive parts) is climbing, especially in Asia.   Typically, manufacturers have a bit of stock built up, but that’ll be exhausted soon, likely in the coming weeks. The largest supplier of water bottles in India recently announced that it would raise prices by 11% after its packaging costs went up by over 70%, according to reporting from Reuters. Toys could be more expensive this holiday season as manufacturers grapple with supply chain concerns. Americans will likely feel these ripples especially hard if disruptions continue. The average US resident used over 250 kilograms of new plastics in 2019, according to a 2022 report from the Organization for Economic Cooperation and Development. That’s an absolutely massive number—the global average is just 60 kilograms. The effects of higher prices for both fuels and feedstocks could compound and multiply, and alternatives aren’t widely available. Bio-based plastics made with materials like plant sugars exist, but they still make up a vanishingly tiny portion of the market. As of 2025, global plastics production totaled over 431 million metric tons per year. Bio-based and bio-degradable plastics made up about 0.5% of that, a share that could reach 1% by 2030. Bio-based plastics are much more expensive than their fossil-derived counterparts. And many are made using agricultural raw materials, so scaling them up too much could be harmful for the environment and might compete with other industries like food production. Recycling isn’t the easy answer either. Mechanical recycling is the current standard method used for materials like the plastics that make up water bottles and disposable coffee cups. But that degrades the materials over time, so they can’t be used infinitely. Chemical recycling has its own host of issues—the facilities that do it can be highly polluting, and today plastics that go into advanced recycling plants largely don’t actually go into new plastics.

There’s been a lot of talk in recent weeks about how this energy crisis is going to push the world more toward renewable energy. Solar panels, electric vehicles, and batteries could suddenly become more attractive as we face the drastic consequences of a disruption in the global fossil-fuel supply. But when it comes to plastic, the future looks far more complicated. Even though the plastics industry is facing much the same disruptions as the energy sector, there aren’t the same obvious alternatives available for a transition. Our lives are tied up in plastic, with uses ranging from the essential (like medical equipment) to the mundane (my to-go coffee cup). Soon, our economy could feel the effects of just how much we rely on fossil-derived plastics, and how hard it’s going to be to replace them.  This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. 

Read More »

The Download: gig workers training humanoids, and better AI benchmarks

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The gig workers who are training humanoid robots at home  When Zeus, a medical student in Nigeria, returns to his apartment from a long day at the hospital, he straps his iPhone to his forehead and records himself doing chores.  Zeus is a data recorder for Micro1, which sells the data he collects to robotics firms. As these companies race to build humanoids, videos from workers like Zeus have become the hottest new way to train them.   Micro1 has hired thousands of them in more than 50 countries, including India, Nigeria, and Argentina. The jobs pay well locally, but raise thorny questions around privacy and informed consent. The work can be challenging—and weird. Read the full story. 
—Michelle Kim  Our readers recently voted humanoid robots the “11th breakthrough” to add to our 2026 list of 10 Breakthrough Technologies. Check out what else officially made the cut. 
AI benchmarks are broken. Here’s what we need instead.  For decades, AI has been evaluated based on whether it can outperform humans on isolated problems. But it’s seldom used this way in the real world.  While AI is assessed in a vacuum, it operates in messy, complex, multi-person environments over time. This misalignment leads us to misunderstand its capabilities, risks, and impacts.  We need new benchmarks that assess AI’s performance over longer horizons within human teams, workflows, and organizations. Here’s a proposal for one such approach: Human–AI, Context-Specific Evaluation.   —Angela Aristidou, professor at University College London and faculty fellow at the Stanford Digital Economy Lab and the Stanford Human-Centered AI Institute.  MIT Technology Review Narrated: can quantum computers now solve health care problems? We’ll soon find out.  In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful—and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition.   The prize will go to the quantum computer that can solve real health care problems that “classical” computers cannot. But there can be only one big winner—if there is a winner at all.  —Michael Brooks  This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released. 

The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI just closed the biggest funding round in Silicon Valley history It raised $122 billion ahead of its blockbuster IPO, which is expected later this year. (WSJ $) + It’s also prepping a push to “rethink the social contract.” (Vanity Fair $) + Campaigners are urging people to quit ChatGPT. (MIT Technology Review)    2 Iran has threatened to attack 18 US tech companies  It’s eyeing their operations in the Middle East. (Politico) + Targets include Nvidia, Apple, Microsoft, and Google. (Engadget) + Iran struck AWS data centers earlier this month. (Reuters $)  3 Artemis II is about to fly humans to the Moon. Here’s the science they’ll do Their experiments will set the stage for future explorers. (Nature) + You can watch the launch attempt today. (Engadget)   4 Putin is trying to take full control of Russia’s internet New outages and blockages are cutting the country off from the world. (NYT $) + Can we repair the internet? (MIT Technology Review)  5 A robotaxi outage in China left passengers stranded on highways  Baidu vehicles froze on the streets of Wuhan. (Bloomberg $) + Police are blaming a “system failure.” (Reuters $)  6 US government requests for social media user data are soaring They’ve skyrocketed by 770% in the past decade. (Bloomberg $) + Is the Pentagon allowed to surveil Americans with AI? (MIT Technology Review) 
7 Tesla has admitted that humans sometimes drive its robotaxis Remote drivers occasionally control them completely. (Wired $)  8 A satellite-smashing chain reaction could spiral out of control This data visualization captures the dangers of space collisions. (Guardian) + Here’s all the stuff we’ve put into space. (MIT Technology Review) 
9 Meta’s smartglasses can turn you into a creep According to one journalist who wore them for a month. (Guardian)  10 A Claude Code leak has exposed plans for a virtual pet  We could be getting a Tamagotchi for the GenAI era. (The Verge)  Quote of the day  “From now on, for every assassination, an American company will be destroyed.”  —Iran’s Islamic Revolutionary Guard Corps (IRGC) threatens US tech firms in an affiliated Telegram, per CNBC.  One More Thing  ACKERMAN + GRUBER How one mine could unlock billions in EV subsidies  In a pine farm north of the tiny town of Tamarack, Minnesota, Talon Metals has uncovered one of America’s densest nickel deposits. Now it wants to begin mining the ore. 
Products made from the nickel could net more than $26 billion in subsidies through the Inflation Reduction Act (IRA), which is starting to transform the US economy. To understand how, we tallied up the potential tax credits available. Read the full story to find out what we discovered.  —James Temple  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + A selfless group of gluttons tried to taste-test every potato chip in the world.  + Get romantic inspiration from these penguins’ engagement pebbles. + Good news: global terrorism has hit a 15-year low. + Enjoy endless new views through these windows around the world.  

Read More »

The gig workers who are training humanoid robots at home

When Zeus, a medical student living in a hilltop city in central Nigeria, returns to his studio apartment from a long day at the hospital, he turns on his ring light, straps his iPhone to his forehead, and starts recording himself. He raises his hands in front of him like a sleepwalker and puts a sheet on his bed. He moves slowly and carefully to make sure his hands stay within the camera frame.  Zeus is a data recorder for Micro1, a US company based in Palo Alto, California that collects real-world data to sell to robotics companies. As companies like Tesla, Figure AI, and Agility Robotics race to build humanoids—robots designed to resemble and move like humans in factories and homes—videos recorded by gig workers like Zeus are becoming the hottest new way to train them.  Micro1 has hired thousands of contract workers in more than 50 countries, including India, Nigeria, and Argentina, where swathes of tech-savvy young people are looking for jobs. They’re mounting iPhones on their heads and recording themselves folding laundry, washing dishes, and cooking. The job pays well by local standards and is boosting local economies, but it raises thorny questions around privacy and informed consent. And the work can be challenging at times—and weird. Zeus found the job in November, when people started talking about it everywhere on LinkedIn and YouTube. “This would be a real nice opportunity to set a mark and give data that will be used to train robots in the future,” he thought. 
Zeus is paid $15 an hour, which is good income in Nigeria’s strained economy with high unemployment rates. But as a bright-eyed student dreaming of becoming a doctor, he finds ironing his clothes for hours every day boring.  “I really [do] not like it so much,” he says. “I’m the kind of person that requires … a technical job that requires me to think.” 
Zeus, and all the workers interviewed by MIT Technology Review, asked to be referred to only by pseudonyms because they were not authorized to talk about their work. Humanoid robots are notoriously hard to build because manipulating physical objects is a difficult skill to master. But the rise of large language models underlying chatbots like ChatGPT has inspired a paradigm shift in robotics. Just as large language models learned to generate words by being trained on vast troves of text scraped from the internet, many researchers believe that humanoid robots can learn to interact with the world by being trained on massive amounts of movement data.  Editor’s note: In a recent poll, MIT Technology Review readers selected humanoid robots as the 11th breakthrough for our 2026 list of 10 Breakthrough Technologies. Robotics requires far more complex data about the physical world, though, and that is much harder to find. Virtual simulations can train robots to perform acrobatics, but not how to grasp and move objects, because simulations struggle to model physics with perfect accuracy. For robots to work in factories and serve as housekeepers, real-world data, however time-consuming and expensive to collect, may be what we need.  Investors are pouring money feverishly into solving this challenge, spending over $6 billion on humanoid robots in 2025. And at-home data recording is becoming a booming gig economy around the world. Data companies like Scale AI and Encord are recruiting their own armies of data recorders, while DoorDash pays delivery drivers to film themselves doing chores. And in China, workers in dozens of state-owned robot training centers wear virtual-reality headsets and exoskeletons to teach humanoid robots how to open a microwave and wipe down the table.  “There is a lot of demand, and it’s increasing really fast,” says Ali Ansari, CEO of Micro1. He estimates that robotics companies are now spending more than $100 million each year to buy real-world data from his company and others like it. A day in the life Workers at Micro1 are vetted by an AI agent named Zara that conducts interviews and reviews samples of chore videos. Every week, they submit videos of themselves doing chores around their homes, following a list of instructions about things like keeping their hands visible and moving at natural speed. The videos are reviewed by both AI and a human and are either accepted or rejected. They’re then annotated by AI and a team of hundreds of humans who label the actions in the footage. “There is a lot of demand, and it’s increasing really fast.” Ali Ansari, CEO of Micro1  Because this approach to training robots is in its infancy, it’s not clear yet what makes good training data. Still, “you need to give lots and lots of variations for the robot to generalize well for basic navigation and manipulation of the world,” says Ansari.

But many workers say that creating a variety of “chore content” in their tiny homes is a challenge. Zeus, a scrappy student living in a humble studio, struggles to record anything beyond ironing his clothes every day. Arjun, a tutor in Delhi, India, takes an hour to make a 15-minute video because he spends so much time brainstorming new chores. “How much content [can be made] in the home? How much content?” he says.  There’s also the sticky question of privacy. Micro1 asks workers not to show their faces to the camera or reveal personal information such as names, phone numbers, and birth dates. Then it uses AI and human reviewers to remove anything that slips through.  But even without faces, the videos capture an intimate slice of workers’ lives: the interiors of their homes, their possessions, their routines. And understanding what kind of personal information they might be recording while they’re busy doing chores on camera can be tricky. Reviews of such footage might not filter out sensitive information beyond the most obvious identifiers. For workers with families, keeping private life off camera is a constant negotiation. Arjun, a father of two daughters, has to wrangle his chaotic two-year-old out of frame. “Sometimes it’s very difficult to work because my daughter is small,” he says.  Sasha, a banker turned data recorder in Nigeria, tiptoes around when she hangs her laundry outside in a shared residential compound so she won’t record her neighbors, who watch her in bewilderment. “It’s going to take longer than people think.”Ken Goldberg, UC Berkeley While the workers interviewed by MIT Technology Review understand that their data is being used to train robots, none of them know how exactly their data will be used, stored, and shared with third parties, including the robotics companies that Micro1 is selling the data to. For confidentiality reasons, says Ansari, Micro1 doesn’t name its clients or disclose to workers the specific nature of the projects they are contributing to. “It is important that if workers are engaging in this, that they are informed by the companies themselves of the intention … where this kind of technology might go and how that might affect them longer term,” says Yasmine Kotturi, a professor of human-centered computing at the University of Maryland.
Occasionally, some workers say, they’ve seen other workers asking on the company Slack channel if the company could delete their data. Micro1 declined to comment on whether such data is deleted. “People are opting into doing this,” says Ansari. “They could stop the work at any time.”
Hungry for data With thousands of workers doing their chores differently in different homes, some roboticists wonder if the data collected from them is reliable enough to train robots safely.  “How we conduct our lives in our homes is not always right from a safety point of view,” says Aaron Prather, a roboticist at ASTM International. “If those folks are teaching those bad habits that could lead to an incident, then that’s not good data.” And the sheer volume of data being collected makes reviewing it for quality control challenging. But Ansari says the company rejects videos showing unsafe ways of performing a task, while clumsy movements can be useful to teach robots what not to do. Then there’s the question of how much of this data we need. Micro1 says it has tens of thousands of hours of footage, while Scale AI announced it had gathered more than 100,000 hours. “It’s going to take a long time to get there,” says Ken Goldberg, a roboticist at the University of California, Berkeley. Large language models were trained on text and images that would take a human 100,000 years to read, and humanoid robots may need even more data, because controlling robotic joints is even more complicated than generating text. “It’s going to take longer than people think,” he says. When Dattu, an engineering student living in a bustling tech hub in India, comes home after a full day of classes at his university, he skips dinner and dashes to his tiny balcony, cramped with potted plants and dumbbells. He straps his iPhone to his forehead and records himself folding the same set of clothes over and over again.  His family stares at him quizzically. “It’s like some space technology for them,” he says. When he tells his friends about his job, “they just get astounded by the idea that they can get paid by recording chores.” Juggling his university studies with data recording, as well as other data annotation gigs, takes a toll on him. Still, “it feels like you’re doing something different than the whole world,” he says. 

Read More »

Shifting to AI model customization is an architectural imperative

In partnership withMistral AI In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every new model iteration. Today, those jumps have flattened into incremental gains. The exception is domain-specialized intelligence, where true step-function improvements are still the norm. When a model is fused with an organization’s proprietary data and internal logic, it encodes the company’s history into its future workflows. This alignment creates a compounding advantage: a competitive moat built on a model that understands the business intimately. This is more than fine-tuning; it is the institutionalization of expertise into an AI system. This is the power of customization. Intelligence tuned to context Every sector operates within its own specific lexicon. In automotive engineering, the “language” of the firm revolves around tolerance stacks, validation cycles, and revision control. In capital markets, reasoning is dictated by risk-weighted assets and liquidity buffers. In security operations, patterns are extracted from the noise of telemetry signals and identity anomalies. Custom-adapted models internalize the nuances of the field. They recognize which variables dictate a “go/no-go” decision, and they think in the language of the industry.
Domain expertise in action The transition from general-purpose to tailored AI centers on one goal: encoding an organization’s unique logic directly into a model’s weights. Mistral AI partners with organizations to incorporate domain expertise into their training ecosystems. A few use cases illustrate customized implementations in practice:
Software engineering and assisting at scale: A network hardware company with proprietary languages and specialized codebases found that out-of-the-box models could not grasp their internal stack. By training a custom model on their own development patterns, they achieved a step function in fluency. Integrated into Mistral’s software development scaffolding, this customized model now supports the entire lifecycle—from maintaining legacy systems to autonomous code modernization via reinforcement learning. This turns once-opaque, niche code into a space where AI reliably assists at scale. Automotive and the engineering copilot: A leading automotive company uses customization to revolutionize crash test simulations. Previously, specialists spent entire days manually comparing digital simulations with physical results to find divergences. By training a model on proprietary simulation data and internal analyses, they automated this visual inspection, flagging deformations in real time. Moving beyond detection, the model now acts as a copilot, proposing design adjustments to bring simulations closer to real-world behavior and radically accelerating the R&D loop. Public sector and sovereign AI: In Southeast Asia, a government agency is building a sovereign AI layer to move beyond Western-centric models. By commissioning a foundation model tailored to regional languages, local idioms, and cultural contexts, they created a strategic infrastructure asset. This ensures sensitive data remains under local governance while powering inclusive citizen services and regulatory assistants. Here, customization is the key to deploying AI that is both technically effective and genuinely sovereign. The blueprint for strategic customization Moving from a general-purpose AI strategy to a domain-specific advantage requires a structural rethinking of the model’s role within the enterprise. Success is defined by three shifts in organizational logic. 1. Treat AI as infrastructure, not an experiment.  Historically, enterprises have treated model customization as an ad hoc experiment—a single fine-tuning run for a niche use case or a localized pilot. While these bespoke silos often yield promising results, they are rarely built to scale. They produce brittle pipelines, improvised governance, and limited portability. When the underlying base models evolve, the adaptation work must often be discarded and rebuilt from scratch.In contrast, a durable strategy treats customization as foundational infrastructure. In this model, adaptation workflows are reproducible, version-controlled, and engineered for production. Success is measured against deterministic business outcomes. By decoupling the customization logic from the underlying model, firms ensure that their “digital nervous system” remains resilient, even as the frontier of base models shifts. 2. Retain control of your own data and models. As AI migrates from the periphery to core operations, the question of control becomes existential. Reliance on a single cloud provider or vendor for model alignment creates a dangerous asymmetry of power regarding data residency, pricing, and architectural updates. Enterprises that retain control of their training pipelines and deployment environments preserve their strategic agency. By adapting models within controlled environments, organizations can enforce their own data residency requirements and dictate their own update cycles. This approach transforms AI from a service consumed into an asset governed, reducing structural dependency and allowing for cost and energy optimizations aligned with internal priorities rather than vendor roadmaps. 3. Design for continuous adaptation. The enterprise environment is never static: regulations shift, taxonomies evolve, and market conditions fluctuate. A common failure is treating a customized model as a finished artifact. In reality, a domain-aligned model is a living asset subject to model decay if left unmanaged.

Designing for continuous adaptation requires a disciplined approach to ModelOps. This includes automated drift detection, event-driven retraining, and incremental updates. By building the capacity for constant recalibration, the organization ensures that its AI does not just reflect its history, but it evolves in lockstep with its future. This is the stage where the competitive moat begins to compound: the model’s utility grows as it internalizes the organization’s ongoing response to change. Control is the new leverage We have entered an era where generic intelligence is a commodity, but contextual intelligence is a scarcity. While raw model power is now a baseline requirement, the true differentiator is alignment—AI calibrated to an organization’s unique data, mandates, and decision logic. In the next decade, the most valuable AI won’t be the one that knows everything about the world; it will be the one that knows everything about you. The firms that own the model weights of that intelligence will own the market. This content was produced by Mistral AI. It was not written by MIT Technology Review’s editorial staff.

Read More »

Cisco fixes critical IMC auth bypass present in many products

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk. The Cisco IMC is a baseboard management controller (BMC), a dedicated controller embedded into server motherboards with its own RAM and network interface that gives administrators monitoring and management capabilities as if they were physically connected to the server with a keyboard, monitor, and mouse (KVM). Because BMCs run their own firmware independently of the OS, they can be used to perform operations even when the OS is shut down, including reinstalling it.

Read More »

Kyndryl service targets AI agent automation, security

Understand agents, serving as a single source of truth to help mitigate the risks associated with shadow AI. Validate each agent before launch by testing for security, resilience, and policy compliance to ensure they meet your standards before going live. Maintain control with real-time guardrails that keep agents operating within approved boundaries. Security testing, validation, and threat modeling should be incorporated into development pipelines, Kyndryl stated. “Additionally, runtime protections such as anomaly detection, guardian agents, and rapid isolation capabilities can help contain incidents before they escalate. By making security and governance foundational rather than treating them as afterthoughts, organizations can confidently scale agentic AI, knowing that risks are proactively managed, and trust is maintained with customers, partners, and regulators,” Kyndryl stated. The new service is just one of the platforms the vendor offers to manage AI agents. Last year Kyndral introduced its Agentic AI Framework. That package offers an orchestration system built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. Specialized agents are deployed to gather IT information, such as data analysis, compliance checks, incident response or service desk ticket resolution. Over time, agents learn from data and outcomes to improve decision-making and adapt workflows autonomously, and an orchestration engine parses that data to let enterprise systems adjust to changing conditions in real time, Kyndryl stated. The platform defines what actions agents can and cannot do, basically setting policy across the enterprise.

Read More »

Google Research touts memory-compression breakthrough for AI processing

The last time the market witnessed a shakeup like this was China’s DeepSeek, but doubts emerged quickly about its efficacy. Developers found DeepSeek’s efficiency gains required deep architectural decisions that had to be built in from the start. TurboQuant requires no retraining or fine-tuning. You just drop it straight into existing inference pipelines, at least in theory. If it works in production systems with no retrofitting, then data center operators will get tremendous performance gains on existing hardware. Data center operators won’t have to throw hardware at the performance problem. However, analysts urge caution before jumping to conclusions. “This is a research breakthrough, not a shipping product,” said Alex Cordovil, research director for physical infrastructure at The Dell’Oro Group. “There’s often a meaningful gap between a published paper and real-world inference workloads.” Also, Dell’Oro notes that efficiency gains in AI compute tend to get consumed by more demand, known as the Jevons paradox. “Any freed-up capacity would likely be absorbed by frontier models expanding their capabilities rather than reducing their hardware footprint.” Jim Handy, president of Objective Analysis, agrees on that second part. “Hyperscalers won’t cut their spending – they’ll just spend the same amount and get more bang for their buck,” he said. “Data centers aren’t looking to reach a certain performance level and subsequently stop spending on AI. They’re looking to out-spend each other to gain market dominance. This won’t change that.” Google plans to present a paper outlining TurboQuant at the ICLR conference in Rio de Janeiro running from April 23 through April 27.

Read More »

Energy Department Authorizes Additional Exports of LNG from Elba Island Terminal, Strengthening Global Energy Supply with U.S. LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 22% increase in exports of liquefied natural gas (LNG) from the Elba Island Terminal in Chatham County, Georgia. With today’s order, Kinder Morgan subsidiary Southern LNG Company L.L.C., operator of the Elba Island LNG Terminal, is now authorized to export up to an additional 28.25 (Bcf/yr) to non-free trade agreement countries, strengthening global natural gas supplies with reliable U.S. LNG. Elba Island was previously authorized to export up to 130 billion cubic feet per year (Bcf/yr) of natural gas as LNG to non-free trade agreement countries and has been exporting U.S. LNG since 2019. The project is positioned to export the additional approved volumes immediately.  “At a time when global energy supply routes face disruption, the United States remains a reliable energy partner to our allies and trading partners,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office, Kyle Haustveit. “DOE is using all available authorities to ensure American energy can reach global markets when it is needed most, supporting energy security and helping stabilize global energy supplies.”  The action comes as global oil and LNG supply routes face disruption from tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. The Department will continue to act, using its full set of authorities, to ensure U.S. LNG remains a dependable energy source in global energy markets and a stabilizing presence in times of disruption.  Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter, with exports reaching all-time highs in March 2026. Since President Trump ended the previous administration’s LNG export approval ban, the Department has approved more than 19 Bcf/d of LNG export authorizations. With recent final investment decisions for additional export capacity, U.S. LNG exports are set

Read More »

Why can’t we have nice routers anymore?

In the Volt Typhoon and Flax Typhoon attacks, the routers themselves weren’t compromised because they were foreign-made routers. Far from it! They were compromised because they were unpatched, Internet-exposed, and end-of-life. The router manufacturers were no more guilty of opening the doors to these attacks than Microsoft is for your company’s Windows 7 PCs being hacked in 2026. Only the Salt Typhoon assault on Cisco IOS XE software, which was running on enterprise-grade routers—specifically, ASR 1000 Series, ISR 4000 Series, and Catalyst 8000 Series edge platforms—can be linked directly to Chinese-made routers. Guess what, though? You can still buy, use, and deploy this Cisco hardware, which is used as core routers by top American telecoms such as AT&T, Verizon, and T-Mobile. Uncle Joe wants to replace his router with a brand-new Wi-Fi 7 model router? Nope, he can’t do it. Multi-billion-dollar companies decide to replace vital infrastructure routers that carry billions of messages every day? Sure, go for it! You know, if it were me, I’d be taking a long, hard look at the actual modern enterprise networking gear that we know has been breached. Why isn’t the FCC doing this? Darned if I know. Even the FCC acknowledges that some of Cisco’s problems have nothing to do with who made the hardware and where it was built. For example, the truly awful CVE-2023-20198 vulnerability, with its CVSS score of 10, was all about a boneheaded security hole in Cisco IOS XE Web UI, not the firmware or hardware. The FCC argues, however, that consumer routers pose unique risks because they’re deployed in millions of homes with minimal security oversight, thus making them ideal for botnet infrastructure. I can’t argue with that. But that has nothing to do with who made these devices and where.

Read More »

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

Amazon was contacted for comment on the latest Bahrain drone incident, but said it had nothing to add beyond the statement in its current advisory. Denial of infrastructure Doing the damage is the Shaheed 136, a small and unsophisticated drone designed to overwhelm defenders with numbers. If only one in twenty reaches its target, the price-performance still exceeds that of more expensive systems. When aimed at critical infrastructure such as datacenters, the effect is also psychological; the threat of an attack on its own can be enough to make it difficult for organizations to continue using an at-risk facility.  Iran’s targeting of the Bahrain datacenter is unlikely to be random. Amazon opened its ME-SOUTH-1 AWS presence in 2019, and it is still believed to be the company’s largest site in the Middle East. Earlier this week, the Islamic Revolutionary Guard Corps (IRGC) Telegram channel explicitly threatened to target at least 18 US companies operating in the region, including Microsoft, Google, Nvidia, and Apple. This follows similar threats to an even longer list of US companies made on the IRGC-affiliated Tasnim News Agency in recent weeks. That strategy doesn’t bode well for US companies that have made large investments in Middle Eastern datacenter infrastructure in recent years, drawn by the growing wealth and influence of countries in the region. This includes Amazon, which has announced plans to build a $5.3 billion datacenter in Saudi Arabia, due to become available in 2026. If this is now under threat, whether by warfare or the hypothetical possibility of attack, that will create uncertainty.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE