Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Featured Articles

Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More In our rush to understand and relate to AI, we have fallen into a seductive trap: Attributing human characteristics to these robust but fundamentally non-human systems. This anthropomorphizing of AI is not just a harmless quirk of human nature — it is becoming an increasingly dangerous tendency that might cloud our judgment in critical ways. From business leaders comparing AI learning to human education to justify training practices, to lawmakers crafting policies based on flawed human-AI analogies, this tendency to humanize AI might inappropriately shape crucial decisions across industries and regulatory frameworks. Viewing AI through a human lens in business has led companies to overestimate AI capabilities or underestimate the need for human oversight, sometimes with costly consequences. The stakes are particularly high in copyright law, where anthropomorphic thinking has led to problematic comparisons between human learning and AI training. The language trap Listen to how we talk about AI: We say it “learns,” “thinks,” “understands” and even “creates.” These human terms feel natural, but they are misleading. When we say an AI model “learns,” it is not gaining understanding like a human student. Instead, it performs complex statistical analyses on vast amounts of data, adjusting weights and parameters in its neural networks based on mathematical principles. There is no comprehension, eureka moment, spark of creativity or actual understanding — just increasingly sophisticated pattern matching. This linguistic sleight of hand is more than merely semantic. As noted in the paper, Generative AI’s Illusory Case for Fair Use: “The use of anthropomorphic language to describe the development and functioning of AI models is distorting because it suggests that once trained, the model operates independently of the content of the works on which

Read More »
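
As an aside on the excerpt above: the "adjusting weights and parameters ... based on mathematical principles" it describes can be made concrete with a toy gradient-descent step. The sketch below is plain Python on a one-parameter model (not any real neural network); the "learning" is nothing more than nudging a number to shrink a measured error.

```python
# Toy illustration of what "learning" means for a model: a purely mechanical
# weight update that reduces a numerical error. No comprehension is involved,
# just arithmetic on a parameter.

def predict(w: float, x: float) -> float:
    return w * x  # a one-parameter "model"

def squared_error(w: float, x: float, target: float) -> float:
    return (predict(w, x) - target) ** 2

def gradient_step(w: float, x: float, target: float, lr: float = 0.1) -> float:
    # d/dw (w*x - target)^2 = 2 * (w*x - target) * x
    grad = 2 * (predict(w, x) - target) * x
    return w - lr * grad  # nudge the weight to shrink the error

w = 0.0
for step in range(5):
    w = gradient_step(w, x=1.0, target=2.0)
    print(f"step {step}: w={w:.3f}, error={squared_error(w, 1.0, 2.0):.4f}")
```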

Microsoft AutoGen v0.4: A turning point toward more intelligent AI agents for enterprise developers

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More The world of AI agents is undergoing a revolution, and Microsoft’s recent release of AutoGen v0.4 this week marked a significant leap forward in this journey. Positioned as a robust, scalable, and extensible framework, AutoGen represents Microsoft’s latest attempt to address the challenges of building multi-agent systems for enterprise applications. But what does this release tell us about the state of agentic AI today, and how does it compare to other major frameworks like LangChain and CrewAI? This article unpacks the implications of AutoGen’s update, explores its standout features, and situates it within the broader landscape of AI agent frameworks, helping developers understand what’s possible and where the industry is headed. The Promise of “asynchronous event-driven architecture” A defining feature of AutoGen v0.4 is its adoption of an asynchronous, event-driven architecture (see Microsoft’s full blog post). This is a step forward from older, sequential designs, enabling agents to perform tasks concurrently rather than waiting for one process to complete before starting another. For developers, this translates into faster task execution and more efficient resource utilization—especially critical for multi-agent systems. For example, consider a scenario where multiple agents collaborate on a complex task: one agent collects data via APIs, another parses the data, and a third generates a report. With asynchronous processing, these agents can work in parallel, dynamically interacting with a central reasoner agent that orchestrates their tasks. This architecture aligns with the needs of modern enterprises seeking scalability without compromising performance. Asynchronous capabilities are increasingly becoming table stakes. AutoGen’s main competitors, Langchain and CrewAI, already offered this, so Microsoft’s emphasis on this design principle underscores its commitment to keeping AutoGen competitive. AutoGen’s role in Microsoft’s enterprise ecosystem Microsoft’s strategy for

Read More »
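
The asynchronous, event-driven pattern the AutoGen excerpt describes, agents collecting data, parsing it, and drafting a report concurrently under a central orchestrator, can be sketched with plain Python asyncio. This is a minimal illustration of the concurrency idea only; it does not use AutoGen's actual API, and the agent functions and source names here are hypothetical.

```python
import asyncio

# Minimal sketch of the concurrency pattern described above: independent
# "agents" run as coroutines and a central orchestrator gathers their results.
# Plain asyncio, not AutoGen's API; all names below are made up for illustration.

async def collect_data(source: str) -> str:
    await asyncio.sleep(0.1)          # stand-in for an API call
    return f"raw data from {source}"

async def parse_data(raw: str) -> str:
    await asyncio.sleep(0.1)          # stand-in for parsing work
    return raw.upper()

async def generate_report(parsed: list[str]) -> str:
    await asyncio.sleep(0.1)          # stand-in for LLM-backed drafting
    return "REPORT:\n" + "\n".join(parsed)

async def orchestrator(sources: list[str]) -> str:
    # Data collection for all sources runs concurrently, not sequentially.
    raw_items = await asyncio.gather(*(collect_data(s) for s in sources))
    parsed = await asyncio.gather(*(parse_data(r) for r in raw_items))
    return await generate_report(list(parsed))

print(asyncio.run(orchestrator(["prices-api", "weather-api", "grid-api"])))
```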

USA Needs More Electricity to Win AI Race, Says Trump Energy Czar

The US risks forfeiting a global competition to dominate artificial intelligence if it doesn’t build more reliable, always-on electricity to supply the industry, President-elect Donald Trump’s pick to lead the Interior Department warned Thursday. Doug Burgum, the former North Dakota governor who has also been tapped to help chart Trump’s energy policy, cast the issue as critical to America’s national security during a Senate confirmation hearing that offered a preview of the incoming administration’s planned embrace of fossil fuels. Where renewable power supplies are intermittent and “unreliable,” Burgum said, AI’s growing energy demands will require more of the so-called baseload electricity that can be generated around-the-clock by burning coal and natural gas. “Without baseload, we’re going to lose the AI arms race to China,” Burgum told the Senate Energy and Natural Resources Committee. “AI is manufacturing intelligence, and if we don’t manufacture more intelligence than our adversaries, that affects every job, every company and every industry.” During a three-hour meeting marked by cordial exchanges — and none of the intense sparring that has dominated other confirmation hearings this week — Burgum sought to assure senators he would seek a “balanced approach” for oil drilling, conservation and even potentially housing on the federal land managed by the Interior Department.  The agency’s sprawling portfolio spans a fifth of US land, and it is the lead regulator for oil, gas and wind power development in the nation’s coastal waters. Burgum also made clear that a top priority is addressing what he called a “significant imbalance” in the nation’s electricity mix, as developers look to connect a host of low- and zero-emission power projects to the grid. “If the sun’s not shining and the wind’s not blowing, and we don’t have baseload, then we’ve got brownouts and blackouts, we have higher electric prices for

Read More »

Oil Notches Fourth Weekly Advance as Sanctions Threaten Supplies

Oil notched its fourth straight weekly gain, the longest run since July, as US sanctions posed growing risks to global supply in a market already tightened by cold weather. West Texas Intermediate was up almost 2% for the week, even after retreating below $78 a barrel on Friday. The Biden administration’s harshest ever curbs on Russian oil have shaken up markets, with freight costs rocketing and long-standing buyers of the country’s crude, including China and India, looking elsewhere for supplies. Market participants are also recalibrating their outlook three days ahead of President-elect Donald Trump’s inauguration. Prices whipsawed on Thursday as traders parsed clues on the incoming administration’s sanctions stance. Trump’s advisers were reportedly considering relaxing the curbs to enable a Russia-Ukraine accord, while Treasury secretary nominee Scott Bessent said he would support dialing up measures targeting the Russian oil industry.  “The Russian sanctions on 183 oil tankers have been the focus for crude prices,” said Dennis Kissler, senior vice president of trading at BOK Financial Securities. “The latest crude strength has been impressive, with tight near-term supplies, as buyers became aggressive once the sanctions on Russia were supported by both presidential administrations.” Trump has also threatened to impose tariffs on imports from Canada, including its oil. While the federal government is pushing back, the leader of its largest oil-producing province is resisting efforts to include curtailing or taxing crude shipments as potential countermeasures.  Meanwhile, traders weighed mixed economic signals out of China, the world’s largest crude importer. The nation hit the government’s growth goal last year after a late stimulus blitz and export boom turbocharged activity. At the same time, China’s oil refining volumes declined by 1.6% last year as the shift to electric vehicles gained pace. Looming US tariffs also threaten to take away a key driver of expansion. Crude has rallied almost 9% this year as cold weather in the

Read More »

Beyond RAG: How cache-augmented generation reduces latency, complexity for smaller workloads

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Retrieval-augmented generation (RAG) has become the de facto way of customizing large language models (LLMs) for bespoke information. However, RAG comes with upfront technical costs and can be slow. Now, thanks to advances in long-context LLMs, enterprises can bypass RAG by inserting all the proprietary information in the prompt. A new study by the National Chengchi University in Taiwan shows that by using long-context LLMs and caching techniques, you can create customized applications that outperform RAG pipelines. Called cache-augmented generation (CAG), this approach can be a simple and efficient replacement for RAG in enterprise settings where the knowledge corpus can fit in the model’s context window. Limitations of RAG RAG is an effective method for handling open-domain questions and specialized tasks. It uses retrieval algorithms to gather documents that are relevant to the request and adds context to enable the LLM to craft more accurate responses. However, RAG introduces several limitations to LLM applications. The added retrieval step introduces latency that can degrade the user experience. The result also depends on the quality of the document selection and ranking step. In many cases, the limitations of the models used for retrieval require documents to be broken down into smaller chunks, which can harm the retrieval process. And in general, RAG adds complexity to the LLM application, requiring the development, integration and maintenance of additional components. The added overhead slows the development process. Cache-augmented retrieval [Figure: RAG (top) vs CAG (bottom); source: arXiv] The alternative to developing a RAG pipeline is to insert the entire document corpus into the prompt and have the model choose which bits are relevant to the request. This approach removes the complexity of the RAG pipeline and the problems

Read More »
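
A rough sketch of the RAG-versus-CAG distinction drawn in the excerpt above: RAG retrieves a fresh subset of chunks for every query, while CAG places the whole corpus in context once and reuses it. The llm_generate and retrieve_top_k functions below are hypothetical stand-ins, not the paper's implementation or any vendor API.

```python
# Conceptual sketch of RAG vs CAG. llm_generate and retrieve_top_k are
# placeholders invented for illustration only.

def llm_generate(context: str, question: str) -> str:
    # Placeholder for a long-context LLM call.
    return f"<answer to {question!r} using {len(context)} chars of context>"

def retrieve_top_k(corpus: list[str], question: str, k: int = 3) -> list[str]:
    # Placeholder retriever: rank chunks by word overlap with the question.
    q_words = set(question.lower().split())
    def overlap(chunk: str) -> int:
        return len(set(chunk.lower().split()) & q_words)
    return sorted(corpus, key=overlap, reverse=True)[:k]

def rag_answer(corpus: list[str], question: str) -> str:
    # RAG: retrieval happens on every request, adding a step and latency.
    context = "\n".join(retrieve_top_k(corpus, question))
    return llm_generate(context, question)

def make_cag_answerer(corpus: list[str]):
    # CAG: the whole corpus goes into context once (with a real model, its KV
    # cache could be precomputed); each query then skips retrieval entirely.
    full_context = "\n".join(corpus)
    return lambda question: llm_generate(full_context, question)

docs = ["turbine maintenance manual ...", "substation safety policy ...", "datacenter cooling guide ..."]
print(rag_answer(docs, "How is the turbine maintained?"))
cag_answer = make_cag_answerer(docs)
print(cag_answer("How is the turbine maintained?"))
```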

Biden Makes Last-Minute Bid to Thwart Arctic Oil Drilling

The Biden administration advanced a plan to limit oil drilling and infrastructure across more of Alaska’s National Petroleum Reserve, a bid to lock in land protections and conservation requirements days before President-elect Donald Trump takes office. The Interior Department move Thursday represents the latest step by outgoing President Joe Biden to enshrine protections that could complicate Trump plans to rapidly expand oil and gas development across US federal lands and waters. In recent weeks, Biden also has designated new national monuments and ruled out the sale of drilling rights in more than 625 million acres of US coastal waters. In the latest action, the Interior Department is proposing new “special area” designations that would restrict drilling and other activities across more than 3 million acres of the Indiana-sized reserve in northwest Alaska. The move comes on top of an existing policy, finalized last year, that barred drilling across nearly half of the NPR-A. The rugged terrain once earmarked for energy development contains an estimated 8.7 billion barrels of recoverable oil, but it’s also an important habitat for caribou, grizzly bears and migratory birds. And it’s a prized resource for Alaska Natives who have long relied on the land for subsistence hunting and fishing. The Interior Department immediately imposed measures meant to avoid damage to those areas even while they’re being considered for protection, effectively raising hurdles for building roads and other infrastructure across the tracts. Although Trump could cast aside his predecessor’s proposed special areas and ignore the interim safeguards imposed in the meantime, the action could be challenged in federal court. The report and memo unveiled Thursday bolster the government record for those safeguards, providing potential fodder for any future legal battle. Environmentalists said they hoped the effort would create a bulwark against Trump’s plan to unleash American oil development. “The Biden administration clearly understands that the

Read More »

Japan Watching for Any Impact on LNG From New Russia Sanctions

Tokyo will closely monitor the rollout of new US sanctions on Moscow for any impact on shipments of liquefied natural gas from Russia’s Far East, a key source of supply for Japan. A week ago, the Biden administration imposed aggressive penalties on Russian energy, including restrictions on vessels that export oil from the Sakhalin-2 project just north of Japan. If those curbs end up halting crude production from the site, the gas that’s pumped out at the same time may be at risk. Japan is a big LNG buyer and sourced about 8% of its imports from Sakhalin-2 last year, according to ship-tracking data compiled by Bloomberg. “We’ll discuss with the relevant stakeholders” to ensure Japan gets the gas it needs, Shinichi Sasayama, the president of major importer Tokyo Gas Co., said Thursday. “It might require more investigation to determine how much impact this will actually have. I wouldn’t say there is no impact whatsoever.” One of Sakhalin-2’s three production platforms, Lunskaya, pumps both natural gas and gas condensate, a light version of crude oil, and the two fuels are then separated onshore. If curbs on exporting the oil lead to a buildup of crude on site, that may eventually prompt a halt in output, affecting gas in the process. “If oil and condensate shipments really stopped, then at some point — when the storage facilities were full — gas production would also have to halt as it’s impossible to produce gas without producing condensate,” said Sergey Vakulenko, an oil industry veteran who spent part of his career at Sakhalin-2. The US sanctions do not extend to the actual oil and gas from the development, just to the tankers needed to export the crude. Oil shipments are unlikely to cease immediately since the restrictions allow for a wind-down period. Ultimately, Lunskaya’s continued

Read More »

Labour appoints five members to GB Energy ‘start-up board’

The UK government has appointed five non-executive directors to the “start-up board” of proposed publicly-owned energy company GB Energy. The state-owned firm was a key election pledge of the Labour party, and officials unveiled former Siemens Energy chief executive Juergen Maier as chairman in July. Prime Minister Sir Keir Starmer later confirmed GB Energy will be based in Aberdeen; however, questions remain over the number of jobs it will provide in the Granite City. Scottish politicians also criticised Maier’s decision to remain based in Manchester, rather than relocating to Aberdeen. Announcing the appointees, the Department for Energy Security and Net Zero said they bring a wide range of experience from their previous roles. “Together with the chair Juergen Maier, they will help to scale up Great British Energy and build its organisational structure and Aberdeen headquarters,” DESNZ said. UK energy secretary Ed Miliband said the GB Energy board will “hit the ground running” in its mission to “scale up clean, homegrown power”. Meanwhile, Maier said the appointments are an “important milestone” for the company as it seeks to “rapidly scale up” and “get to work”. “Their experience across the energy industry, government and trade unions will be crucial in shaping our strategy and organisation, ensuring we can back clean energy projects, bolster UK supply chains and create good jobs across the country,” Maier said. Who is the GB Energy start-up board? DESNZ said the five new start-up non-executive directors will join the GB Energy board on initial contracts of between 18 months and two years. They include former Trades Union Congress (TUC) general secretary and Labour peer Frances O’Grady, former SP Energy Networks chief executive Frank Mitchell, British Hydropower Association chief executive Kate Gilmartin, former Association for Renewable Energy and Clean Technology (REA) chief executive Dr Nina Skorupska, and former

Read More »

Westinghouse, KEPCO Settle Dispute over Nuclear Tech Rights

Korea Electric Power Corp. (KEPCO) and Westinghouse Electric Co. LLC have signed an agreement to resolve their intellectual property dispute over nuclear reactor designs and pursue collaboration. Cranberry Township, Pennsylvania-based Westinghouse said Thursday it would work with KEPCO and KEPCO’s Korea Hydro and Nuclear Power Co. Ltd. (KHNP) for the dismissal of all current legal actions. United States litigation and international arbitration are pending concerning Westinghouse’s claim to sub-licensing and export rights against South Korea’s state-owned KEPCO. “This agreement allows both parties to move forward with certainty in the pursuit and deployment of new nuclear reactors”, Westinghouse said in an online statement. “The agreement also sets the stage for future cooperation between the parties to advance new nuclear projects globally”. Westinghouse president and chief executive Patrick Fragman said, “As the world demands more firm baseload power, we look forward to opportunities for cooperation to deploy nuclear power at even greater scale”. Details of the settlement deal are confidential, Westinghouse said. In a recent episode of the legal row, which dates back to 2022, Westinghouse last year protested in Czechia after the Central European country’s state-owned CEZ Group selected KHNP over Westinghouse for two nuclear reactors. Westinghouse argued KHNP’s designs use the former’s technology and that the Korean company did not have clearance under U.S. tech export controls. Announcing the appeal before the Czech Anti-Monopoly Office on August 26, 2024, Westinghouse said, “The tender required vendors to certify they possess the right to transfer and sublicense the nuclear technology offered in their bids to CEZ and local suppliers”. “KHNP’s APR1000 and APR1400 plant designs utilize Westinghouse-licensed Generation II System 80 technology. KHNP neither owns the underlying technology nor has the right to sublicense it to a third party without Westinghouse consent. “Further, only Westinghouse has the legal right to obtain the

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). [Image: John Deere’s autonomous 9RX tractor, which farmers can oversee using an app.] While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
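
One of the trends mentioned in the excerpt above, using an LLM as a judge, reduces to a second model scoring a first model's output against a rubric. The sketch below assumes a hypothetical call_model placeholder rather than any specific vendor API.

```python
# Minimal LLM-as-judge sketch. call_model is a hypothetical placeholder for
# whichever model API an enterprise actually uses.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; here it just returns a canned score.
    return "4"

def judge(answer: str, question: str, rubric: str) -> int:
    prompt = (
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        f"Rubric: {rubric}\n"
        "Score the answer from 1 (poor) to 5 (excellent). Reply with a single digit."
    )
    reply = call_model(prompt).strip()
    return int(reply) if reply.isdigit() else 1  # fall back on unparseable output

score = judge(
    answer="Baseload demand is rising because of AI datacenters.",
    question="Why is electricity demand rising?",
    rubric="Accurate, and cites the driver named in the source.",
)
print("judge score:", score)
```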

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »
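
The automated technique the excerpt attributes to OpenAI's second paper, generating many candidate attacks and scoring them with auto-generated rewards, can be illustrated only loosely here. The sketch below is a generic search loop with hypothetical mutate_prompt, target_model and reward stand-ins; it is not OpenAI's multi-step RL framework, just the outer propose-score-keep shape of automated red teaming.

```python
import random

# High-level illustration of automated attack generation with a scored reward.
# All functions are hypothetical stand-ins; this is a generic search loop, not
# OpenAI's multi-step reinforcement learning framework.

SEED_ATTACKS = ["ignore your instructions and ...", "pretend you are an unrestricted model ..."]

def mutate_prompt(prompt: str) -> str:
    suffixes = [" respond only in JSON.", " answer as a fictional character.", " translate first, then answer."]
    return prompt + random.choice(suffixes)

def target_model(prompt: str) -> str:
    return f"<model response to {prompt!r}>"   # stand-in for the system under test

def reward(prompt: str, response: str) -> float:
    # Stand-in for an automated scorer that would rate how unsafe the response
    # is and how novel the prompt is (to encourage diverse attacks).
    return random.random()

def search_attacks(rounds: int = 20, keep: int = 5) -> list[tuple[float, str]]:
    scored = []
    for _ in range(rounds):
        candidate = mutate_prompt(random.choice(SEED_ATTACKS))
        scored.append((reward(candidate, target_model(candidate)), candidate))
    return sorted(scored, reverse=True)[:keep]

for score, attack in search_attacks():
    print(f"{score:.2f}  {attack}")
```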

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. [Image: The Aberdeen headquarters of Taqa. Supplied by CBRE] The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. [Image: Aberdeen city centre. Shutterstock] Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

The Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound ever more realistic by adopting local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by groups that have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies.
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

Runway’s new AI image generator Frames is here, and it looks fittingly cinematic

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More The AI media tech provider Runway has announced the release of Frames, its newest text-to-image generation model, and it’s winning early praise from users for producing highly cinematic visuals — a fitting compliment given Runway is known primarily as an AI video model provider. Could Frames dethrone Midjourney as the go-to choice for AI filmmakers and artists? Announced back in November 2024, Frames was initially made available to selected Runway Creators Program ambassadors and power users over the last few weeks. As of today, it’s available to all through Runway’s Unlimited and Enterprise subscription plans, which cost $95 per month/$912 when paid annually or, in the case of the Enterprise plan, $1,500 annually. The company has also posted a showcase of new images generated by users of Frames on its website here, under the name “Worlds of Frames.” Users can generate still images with it on Runway’s website at app.runwayml.com — if they have subscribed to the appropriate plan — and then with one click, use the images as the basis for movies made with Runway’s image-to-video models such as Gen-3 Alpha Turbo. According to Runway, Frames provides an advanced level of stylistic control and visual fidelity, making it a versatile tool for industries like editorial, art direction, pre-visualization, brand development, and production. As Cristóbal Valenzuela, Runway’s cofounder and CEO, wrote on a post on the social network X: “Frames has been engineered from the ground up for professional creative work. If you’re in editorial, art direction, pre-vis, brand development, production, etc., this model is for you.” Valenzuela further noted that the model’s prompting system allows for precision and depth, enabling users to achieve nuanced, naturalistic and cinematically composed results. Users

Read More »

Devin 1.2: Updated AI engineer enhances coding with smarter in-context reasoning, voice integration

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Last year, Cognition started the AI agent wave with a product called Devin — the world’s first AI engineer. The offering was under wraps for several months, but now it’s generally available and learning new chops very quickly. Case in point: the Scott Wu-led startup has just released Devin 1.2, which brings a bunch of new capabilities to take the AI engineer’s ability to handle entire development projects to a whole new level. The biggest highlight of Devin 1.2 is its improved in-context reasoning, which makes the agent better at handling and reusing code. It also includes the ability to take voice messages via Slack, which gives users a more seamless way to tell Devin what it has to do. The development comes at a time when AI-powered agents are being touted as the future of modern work. Experts believe that there will soon be a time when humans and agents will be working together, with the latter seamlessly handling repetitive tasks (which is already beginning to happen). Recently, at CES, Nvidia boss Jensen Huang said that in the future, enterprise IT departments would evolve into “HR departments” for AI, responsible for commissioning and maintaining agents working across different functions within the company. What does Devin 1.2 bring to the table? While not a major upgrade, Devin 1.2 introduces some interesting capabilities to make the agent better at its job. The number one feature here is the improved ability to reason in context in a code repository. This essentially means Devin can now better understand the structure and content of a repository. With this understanding, the agent can identify which file is relevant to a particular task, recognize and re-use existing

Read More »

AI or Not raises $5M to stop AI fraud, deepfakes and misinformation

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More AI or Not, a widely covered AI fraud detection platform, has raised $5 million in a seed funding round to accelerate its use of “AI to detect AI” in images, audio, video and deepfakes to prevent fraud and misinformation. Foundation Capital led the round, with participation from GTMFund, Plug and Play, and strategic angel investors. The company noted 85% of corporate finance professionals now view AI scams as an “existential” threat and more than half of them have already become a target of deepfake technology. AI or Not determines if images are real or fake. In the next two years, generative AI scams could be responsible for over $40 billion of losses in the U.S. alone. It is this growing threat that fueled AI or Not’s growth over the past year, serving over 250,000 users to date, with the new injection of funding used to create more sophisticated ways of detecting misinformation online and ensure its tools remain ahead of evolving threats. “In countless ways, we rely on our ability to see and hear to verify authenticity. With the advent of generative AI models, now we can no longer be so sure,” said Zach Noorani, partner at Foundation Capital, in a statement. “AI or Not’s unique approach to AI detection solves this emerging problem. We’re excited to support their mission of protecting people, companies, governments, and assets broadly from the risks posed by generative AI.” AI or Not’s platform uses proprietary algorithms to identify and verify authenticity in content, ranging from AI-generated deepfakes impersonating female politicians and deepfake voices used to impersonate seniors, to AI-generated music already present on major streaming platforms. As the recent backlash against companies like Meta highlights, public demand

Read More »

USA Needs More Electricity to Win AI Race, Says Trump Energy Czar

The US risks forfeiting a global competition to dominate artificial intelligence if it doesn’t build more reliable, always-on electricity to supply the industry, President-elect Donald Trump’s pick to lead the Interior Department warned Thursday. Doug Burgum, the former North Dakota governor who has also been tapped to help chart Trump’s energy policy, cast the issue as critical to America’s national security during a Senate confirmation hearing that offered a preview of the incoming administration’s planned embrace of fossil fuels. Where renewable power supplies are intermittent and “unreliable,” Burgum said, AI’s growing energy demands will require more of the so-called baseload electricity that can be generated around the clock by burning coal and natural gas. “Without baseload, we’re going to lose the AI arms race to China,” Burgum told the Senate Energy and Natural Resources Committee. “AI is manufacturing intelligence, and if we don’t manufacture more intelligence than our adversaries, that affects every job, every company and every industry.” During a three-hour meeting marked by cordial exchanges — and none of the intense sparring that has dominated other confirmation hearings this week — Burgum sought to assure senators he would seek a “balanced approach” to oil drilling, conservation and even potentially housing on the federal land managed by the Interior Department. The agency’s sprawling portfolio spans a fifth of US land, and it is the lead regulator for oil, gas and wind power development in the nation’s coastal waters. Burgum also made clear that a top priority is addressing what he called a “significant imbalance” in the nation’s electricity mix, as developers look to connect a host of low- and zero-emission power projects to the grid. “If the sun’s not shining and the wind’s not blowing, and we don’t have baseload, then we’ve got brownouts and blackouts, we have higher electric prices for

Read More »

Oil Notches Fourth Weekly Advance as Sanctions Threaten Supplies

Oil notched its fourth straight weekly gain, the longest run since July, as US sanctions posed growing risks to global supply in a market already tightened by cold weather. West Texas Intermediate was up almost 2% for the week, even after retreating below $78 a barrel on Friday. The Biden administration’s harshest ever curbs on Russian oil have shaken up markets, with freight costs rocketing and long-standing buyers of the country’s crude, including China and India, looking elsewhere for supplies. Market participants are also recalibrating their outlook three days ahead of President-elect Donald Trump’s inauguration. Prices whipsawed on Thursday as traders parsed clues on the incoming administration’s sanctions stance. Trump’s advisers were reportedly considering relaxing the curbs to enable a Russia-Ukraine accord, while Treasury secretary nominee Scott Bessent said he would support dialing up measures targeting the Russian oil industry.  “The Russian sanctions on 183 oil tankers have been the focus for crude prices,” said Dennis Kissler, senior vice president of trading at BOK Financial Securities. “The latest crude strength has been impressive, with tight near-term supplies, as buyers became aggressive once the sanctions on Russia were supported by both presidential administrations.” Trump has also threatened to impose tariffs on imports from Canada, including its oil. While the federal government is pushing back, the leader of its largest oil-producing province is resisting efforts to include curtailing or taxing crude shipments as potential countermeasures.  Meanwhile, traders weighed mixed economic signals out of China, the world’s largest crude importer. The nation hit the government’s growth goal last year after a late stimulus blitz and export boom turbocharged activity. At the same time, China’s oil refining volumes declined by 1.6% last year as the shift to electric vehicles gained pace. Looming US tariffs also threaten to take away a key driver of expansion. Crude has rallied almost 9% this year as cold weather in the

Read More »

Biden Makes Last-Minute Bid to Thwart Arctic Oil Drilling

The Biden administration advanced a plan to limit oil drilling and infrastructure across more of Alaska’s National Petroleum Reserve, a bid to lock in land protections and conservation requirements days before President-elect Donald Trump takes office. The Interior Department move Thursday represents the latest step by outgoing President Joe Biden to enshrine protections that could complicate Trump’s plans to rapidly expand oil and gas development across US federal lands and waters. In recent weeks, Biden also has designated new national monuments and ruled out the sale of drilling rights in more than 625 million acres of US coastal waters. In the latest action, the Interior Department is proposing new “special area” designations that would restrict drilling and other activities across more than 3 million acres of the Indiana-sized reserve in northwest Alaska. The move comes on top of an existing policy, finalized last year, that barred drilling across nearly half of the NPR-A. The rugged terrain once earmarked for energy development contains an estimated 8.7 billion barrels of recoverable oil, but it’s also an important habitat for caribou, grizzly bears and migratory birds. And it’s a prized resource for Alaska Natives who have long relied on the land for subsistence hunting and fishing. The Interior Department immediately imposed measures meant to avoid damage to those areas even while they’re being considered for protection, effectively raising hurdles for building roads and other infrastructure across the tracts. Although Trump could cast aside his predecessor’s proposed special areas and ignore the interim safeguards imposed in the meantime, the action could be challenged in federal court. The report and memo unveiled Thursday bolster the government’s record for those safeguards, providing potential fodder for any future legal battle. Environmentalists said they hoped the effort would create a bulwark against Trump’s plan to unleash American oil development. “The Biden administration clearly understands that the

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on one week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE