Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Shape
Discover What Matter Most to You

Featured Articles

Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent

Read More »

Palo Alto updates security platform to discover AI agents

Recently, he said, there have been news reports that AI agents created by firms caused hacks within their own companies. He didn’t cite specific examples, but last week Meta said there had been a severe internal security breach after an autonomous AI agent exposed sensitive company and user data to unauthorized employees for two hours. In the future, if agents in the enterprise are more than a fad, Arora said, “there will be millions of agents traversing enterprise architectures, trying to execute on their behalf — both agents delegated by people like you and me, and autonomously. I can’t imagine meeting a CEO in the last three months who does not have some aspiration to start having agents effectively doing tasks within the enterprise. It’s slow going, but the intention is there. And I can see many system integrators and consultants out there advocating and helping customers with that migration.” But, he added, there are risks. To meet them, Prisma AIRS 3.0 will allow admins to safely deploy AI applications, he said. To increase visibility, the platform will identify agents running in cloud environments, on SaaS platforms and locally on endpoints. A capability called Agent Artifact Security maps out an agent’s architecture and scans for vulnerabilities, and another capability called AI Red Teaming for Agents simulates context-aware agentic attacks, discovers AI-related vulnerabilities, and recommends runtime security policies. Prisma Browser To also improve AI security, Palo Alto Networks released a new version of Prisma Browser for enterprise end users, with expanded capabilities allowing employees to use any LLM they choose. The new version of the browser is able to discover user-generated AI activity and enforce content-aware boundaries to keep agents within their intended scope. The browser also prevents sensitive data from leaking to unmanaged or public AI tools during automated tasks,

Read More »

Cisco goes all in on agentic AI security

Other new ES features include: Detection Studio: A unified workspace for detection engineers to plan, develop, test, deploy, and monitor detections. By mapping coverage against the MITRE ATT&CK framework, teams can identify data gaps and validate detection quality in real time. Another new instrument, Malware Threat Reversing Agent, gives customers insight into malware threats, providing summaries and step-by-step breakdowns of malicious scripts. Federated Search: Lets SecOps teams gain comprehensive visibility across distributed data sources, according to Cisco. Exposure Analytics: Automatically discovers assets and users across the environment. By leveraging data already being ingested, it provides a “Security Truth Layer” without the need for additional agents or tools, Cisco stated. Cisco DefenseClaw Cisco is also releasing an open-source secure agent framework called DefenseClaw that lets users define policy-based security, network, and privacy guardrails for Nvidia’s recently released OpenShell and OpenClaw agentic environments.  DefenseClaw scans everything before it runs, according to DJ Sampath, senior vice president of Cisco’s AI software and platform group.  “Every skill, every tool, every plugin, before it’s allowed into your claw environment and every piece of code generated by the claw gets scanned. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator,” Sampath wrote in a blog post about the news.  DefenseClaw also detects threats at runtime, not just at the gate, Sampath stated. “Claws are self-evolving systems. A skill that was clean on Tuesday can start exfiltrating data on Thursday. DefenseClaw doesn’t assume what passed admission stays safe — a content scanner inspects every message flowing in and out of the agent at the execution loop itself,” Sampath wrote.  And thirdly, DefenseClaw enforces block and allow lists. “When you block a skill, its sandbox permissions are revoked, its files are quarantined, and the agent gets an error if it tries to invoke it. When you block an MCP server, the endpoint

Read More »

Cisco Talos 2025 year in review and lessons learned

By compromising an ADC or a VPN, an attacker doesn’t just break in—they become a trusted user. This allows them to bypass Multi-Factor Authentication (MFA), steal session tokens, and move laterally across the entire network undetected. Compounding this risk is the fact that nearly 40% of top-targeted vulnerabilities in 2025 impacted end-of-life (EOL) devices that can no longer be patched. The siege on MFA and identity The report highlights a staggering 178% surge in device compromise attacks, where attackers register their own hardware as a trusted factor in a victim’s MFA account. Social engineering dominates: Attackers are finding it easier to target the person who holds the key rather than the lock itself. Voice phishing (vishing) aimed at IT administrators was three times more common than user-managed registration fraud. Industry-specific tactics: The Technology sector faced frequent MFA spray attacks due to its standardized infrastructure, while Higher Education was plagued by device compromise due to its diverse, unmanaged, and messy device environment. Manufacturing under pressure: This sector remained the #1 target for ransomware because of its low tolerance for downtime and complex hybrid (IT/OT) environments. Geopolitical tensions directly fueled cyber activity in 2025: China-Nexus: Investigations into Chinese state-sponsored activity rose by 74%. These groups demonstrated extraordinary speed, weaponizing the ToolShell zero-day (SharePoint) instantaneously after disclosure. Russia: Activity was highly correlated with the war in Ukraine and the announcement of international sanctions. Groups like Static Tundra continued to successfully exploit vulnerabilities that were five to seven years old in networking software. North Korea: Beyond record-breaking cryptocurrency thefts ($1.5 billion in a single heist), they successfully placed fake IT workers within Fortune 500 companies using AI-generated personas. The agentic shift: AI as a dual-edged sword As we move into 2026, we are witnessing an agentic shift in AI. In 2025, AI was used

Read More »

The hardest question to answer about AI-fueled delusions

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. I was originally going to write this week’s newsletter about AI and Iran, particularly the news we broke last Tuesday that the Pentagon is making plans for AI companies to train on classified data. AI models have already been used to answer questions in classified settings but don’t currently learn from the data they see. That’s expected to change, I reported, and new security risks will result. Read that story for more.  But on Thursday I came across new research that deserves your attention: A group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.  There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.
The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually. Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.
Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations.  And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases. But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI? “It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: One conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, having recalled that the person previously mentioned having wished to become a mathematician, immediately supported the theory, even though it was nonsense. The situation spiraled from there. Delusions, Mehta says, tend to be “a complex network that unfolds over a long period of time.” He’s conducting follow-up research aiming to find whether delusional messages from chatbots or those from people are more likely to lead to harmful outcomes. The reason I see this as one of the most pressing questions in AI is that massive legal cases currently set to go to trial will shape whether AI companies are held accountable for these sorts of dangerous interactions. The companies, I presume, will argue that humans come into their conversations with AI with delusions in hand and may have been unstable before they ever spoke to a chatbot. Mehta’s initial findings, though, support the idea that chatbots have a unique ability to turn a benign delusion-like thought into the source of a dangerous obsession. Chatbots act as a conversational partner that’s always available and programmed to cheer you on, and unlike a friend, they have little ability to know if your AI conversations are starting to interrupt your real life. More research is still needed, and let’s remember the environment we’re in: AI deregulation is being pursued by President Trump, and states aiming to pass laws that hold AI companies accountable for this sort of harm are being threatened with legal action by the White House. This type of research into AI delusions is hard enough to do as it is, with limited access to data and a minefield of ethical concerns. But we need more of it, and a tech culture interested in learning from it, if we have any hope of making AI safer to interact with.

Read More »

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The Bay Area’s animal welfare movement wants to recruit AI  In early February, animal welfare advocates and AI researchers arrived in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. They gathered to discuss a provocative idea: if artificial general intelligence is on the horizon, could it prevent animal suffering?  Some brainstormed using custom agents in advocacy work, while others pitched cultivating meat with AI tools. But the real talk of the event was a flood of funding they expect will soon flow to animal welfare charities, not from individual megadonors, but from AI lab employees.    Some attendees also probed an even more controversial idea: AI may develop the capacity to suffer—and this could constitute a moral catastrophe. Read the full story to find out why their ideas are gaining momentum and sparking controversy. 
—Michelle Kim & Grace Huckins  The must-reads 
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 The White House has unveiled its AI policy blueprint Trump wants Congress to codify the light-touch framework into law. (Politico) + He also wants to block state limits on AI. (WP $)  + A backlash against the tech has formed within MAGA. (FT $) + A war over AI regulation is brewing in the US. (MIT Technology Review)  2 Elon Musk has been found liable for misleading Twitter investors A jury ruled that he defrauded shareholders ahead of the $44 billion acquisition. (CNBC) + But it absolved him of some fraud allegations. (NPR)  3 The Pentagon is adopting Palantir AI as the core US military system The move locks in long-term use of Palantir’s weapons-targeting tech. (Reuters) + The DoD wants it to link up sensors and shooters for combat. (Bloomberg) + Palantir is also getting access to sensitive UK financial regulation data. (Guardian) + AI is turning the Iran conflict into theater. (MIT Technology Review)  4 Musk plans to build the largest-ever chip factory in Austin Tesla and SpaceX will jointly run the project. (The Verge) + Future AI chips could be built on glass. (MIT Technology Review)  5 OpenAI will show ads to all US users of the free version of ChatGPT  It’s seeking new revenue streams amid skyrocketing computing costs. (Reuters) + The company is also building a fully automated researcher. (MIT Technology Review) + It plans to double its workforce soon. (FT $)  6 New crypto rules are set to do the Trumps a “big favor” Particularly the narrow securities definitions. (Guardian)  7 Tencent has added a version of the OpenClaw agent to WeChat Users of the super app will now be able to use the tool to control their PCs. (SCMP)   8 Reddit is mulling identity verification to vanquish bots It’s considering “something like” Face ID or Touch ID. (Engadget) 

9 People are using AI to find their lost pets Databases for pet reunifications supported their searches. (WP $)  10 Scientists have narrowed down the hunt for aliens to 45 planets The closest is just four light-years from Earth. (404 Media)  Quote of the day  “It doesn’t matter how many people you throw at the problem; we are never going to solve the challenges of war without technology like AI.”  —Alex Miller, the US Army’s CTO, tells Wired why he wants AI in every weapon.  One More Thing  STEPHANIE ARNETT/MITTR | GETTY A brain implant changed her life. Then it was removed against her will.  Sticking an electrode inside a person’s brain can do more than treat a disease. Take the case of Rita Leggett, an Australian woman whose experimental brain implant changed her sense of agency and self. She told researchers that she “became one” with her device.  She was devastated when, two years later, she was told she had to remove the implant because the company that made it had gone bust.   Her case highlights the need for a new category of legal protection: neuro rights. Find out how they could be protected.  —Jessica Hamzelou  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + Looking for a good view? Earth’s longest line of sight has been empirically proven. + A biblical endorsement of sin is a welcome reminder that we all make typos. + Richard Nadler’s illustrations of vertical societies are exquisitely detailed. + This 1978 BBC film evocatively exposes our tendency to stress over tech-dependency. 

Read More »

Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent

Read More »

Palo Alto updates security platform to discover AI agents

Recently, he said, there have been news reports that AI agents created by firms caused hacks within their own companies. He didn’t cite specific examples, but last week Meta said there had been a severe internal security breach after an autonomous AI agent exposed sensitive company and user data to unauthorized employees for two hours. In the future, if agents in the enterprise are more than a fad, Arora said, “there will be millions of agents traversing enterprise architectures, trying to execute on their behalf — both agents delegated by people like you and me, and autonomously. I can’t imagine meeting a CEO in the last three months who does not have some aspiration to start having agents effectively doing tasks within the enterprise. It’s slow going, but the intention is there. And I can see many system integrators and consultants out there advocating and helping customers with that migration.” But, he added, there are risks. To meet them, Prisma AIRS 3.0 will allow admins to safely deploy AI applications, he said. To increase visibility, the platform will identify agents running in cloud environments, on SaaS platforms and locally on endpoints. A capability called Agent Artifact Security maps out an agent’s architecture and scans for vulnerabilities, and another capability called AI Red Teaming for Agents simulates context-aware agentic attacks, discovers AI-related vulnerabilities, and recommends runtime security policies. Prisma Browser To also improve AI security, Palo Alto Networks released a new version of Prisma Browser for enterprise end users, with expanded capabilities allowing employees to use any LLM they choose. The new version of the browser is able to discover user-generated AI activity and enforce content-aware boundaries to keep agents within their intended scope. The browser also prevents sensitive data from leaking to unmanaged or public AI tools during automated tasks,

Read More »

Cisco goes all in on agentic AI security

Other new ES features include: Detection Studio: A unified workspace for detection engineers to plan, develop, test, deploy, and monitor detections. By mapping coverage against the MITRE ATT&CK framework, teams can identify data gaps and validate detection quality in real time. Another new instrument, Malware Threat Reversing Agent, gives customers insight into malware threats, providing summaries and step-by-step breakdowns of malicious scripts. Federated Search: Lets SecOps teams gain comprehensive visibility across distributed data sources, according to Cisco. Exposure Analytics: Automatically discovers assets and users across the environment. By leveraging data already being ingested, it provides a “Security Truth Layer” without the need for additional agents or tools, Cisco stated. Cisco DefenseClaw Cisco is also releasing an open-source secure agent framework called DefenseClaw that lets users define policy-based security, network, and privacy guardrails for Nvidia’s recently released OpenShell and OpenClaw agentic environments.  DefenseClaw scans everything before it runs, according to DJ Sampath, senior vice president of Cisco’s AI software and platform group.  “Every skill, every tool, every plugin, before it’s allowed into your claw environment and every piece of code generated by the claw gets scanned. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator,” Sampath wrote in a blog post about the news.  DefenseClaw also detects threats at runtime, not just at the gate, Sampath stated. “Claws are self-evolving systems. A skill that was clean on Tuesday can start exfiltrating data on Thursday. DefenseClaw doesn’t assume what passed admission stays safe — a content scanner inspects every message flowing in and out of the agent at the execution loop itself,” Sampath wrote.  And thirdly, DefenseClaw enforces block and allow lists. “When you block a skill, its sandbox permissions are revoked, its files are quarantined, and the agent gets an error if it tries to invoke it. When you block an MCP server, the endpoint

Read More »

Cisco Talos 2025 year in review and lessons learned

By compromising an ADC or a VPN, an attacker doesn’t just break in—they become a trusted user. This allows them to bypass Multi-Factor Authentication (MFA), steal session tokens, and move laterally across the entire network undetected. Compounding this risk is the fact that nearly 40% of top-targeted vulnerabilities in 2025 impacted end-of-life (EOL) devices that can no longer be patched. The siege on MFA and identity The report highlights a staggering 178% surge in device compromise attacks, where attackers register their own hardware as a trusted factor in a victim’s MFA account. Social engineering dominates: Attackers are finding it easier to target the person who holds the key rather than the lock itself. Voice phishing (vishing) aimed at IT administrators was three times more common than user-managed registration fraud. Industry-specific tactics: The Technology sector faced frequent MFA spray attacks due to its standardized infrastructure, while Higher Education was plagued by device compromise due to its diverse, unmanaged, and messy device environment. Manufacturing under pressure: This sector remained the #1 target for ransomware because of its low tolerance for downtime and complex hybrid (IT/OT) environments. Geopolitical tensions directly fueled cyber activity in 2025: China-Nexus: Investigations into Chinese state-sponsored activity rose by 74%. These groups demonstrated extraordinary speed, weaponizing the ToolShell zero-day (SharePoint) instantaneously after disclosure. Russia: Activity was highly correlated with the war in Ukraine and the announcement of international sanctions. Groups like Static Tundra continued to successfully exploit vulnerabilities that were five to seven years old in networking software. North Korea: Beyond record-breaking cryptocurrency thefts ($1.5 billion in a single heist), they successfully placed fake IT workers within Fortune 500 companies using AI-generated personas. The agentic shift: AI as a dual-edged sword As we move into 2026, we are witnessing an agentic shift in AI. In 2025, AI was used

Read More »

The hardest question to answer about AI-fueled delusions

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. I was originally going to write this week’s newsletter about AI and Iran, particularly the news we broke last Tuesday that the Pentagon is making plans for AI companies to train on classified data. AI models have already been used to answer questions in classified settings but don’t currently learn from the data they see. That’s expected to change, I reported, and new security risks will result. Read that story for more.  But on Thursday I came across new research that deserves your attention: A group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.  There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.
The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually. Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.
Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations.  And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases. But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI? “It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: One conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, having recalled that the person previously mentioned having wished to become a mathematician, immediately supported the theory, even though it was nonsense. The situation spiraled from there. Delusions, Mehta says, tend to be “a complex network that unfolds over a long period of time.” He’s conducting follow-up research aiming to find whether delusional messages from chatbots or those from people are more likely to lead to harmful outcomes. The reason I see this as one of the most pressing questions in AI is that massive legal cases currently set to go to trial will shape whether AI companies are held accountable for these sorts of dangerous interactions. The companies, I presume, will argue that humans come into their conversations with AI with delusions in hand and may have been unstable before they ever spoke to a chatbot. Mehta’s initial findings, though, support the idea that chatbots have a unique ability to turn a benign delusion-like thought into the source of a dangerous obsession. Chatbots act as a conversational partner that’s always available and programmed to cheer you on, and unlike a friend, they have little ability to know if your AI conversations are starting to interrupt your real life. More research is still needed, and let’s remember the environment we’re in: AI deregulation is being pursued by President Trump, and states aiming to pass laws that hold AI companies accountable for this sort of harm are being threatened with legal action by the White House. This type of research into AI delusions is hard enough to do as it is, with limited access to data and a minefield of ethical concerns. But we need more of it, and a tech culture interested in learning from it, if we have any hope of making AI safer to interact with.

Read More »

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The Bay Area’s animal welfare movement wants to recruit AI  In early February, animal welfare advocates and AI researchers arrived in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. They gathered to discuss a provocative idea: if artificial general intelligence is on the horizon, could it prevent animal suffering?  Some brainstormed using custom agents in advocacy work, while others pitched cultivating meat with AI tools. But the real talk of the event was a flood of funding they expect will soon flow to animal welfare charities, not from individual megadonors, but from AI lab employees.    Some attendees also probed an even more controversial idea: AI may develop the capacity to suffer—and this could constitute a moral catastrophe. Read the full story to find out why their ideas are gaining momentum and sparking controversy. 
—Michelle Kim & Grace Huckins  The must-reads 
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 The White House has unveiled its AI policy blueprint Trump wants Congress to codify the light-touch framework into law. (Politico) + He also wants to block state limits on AI. (WP $)  + A backlash against the tech has formed within MAGA. (FT $) + A war over AI regulation is brewing in the US. (MIT Technology Review)  2 Elon Musk has been found liable for misleading Twitter investors A jury ruled that he defrauded shareholders ahead of the $44 billion acquisition. (CNBC) + But it absolved him of some fraud allegations. (NPR)  3 The Pentagon is adopting Palantir AI as the core US military system The move locks in long-term use of Palantir’s weapons-targeting tech. (Reuters) + The DoD wants it to link up sensors and shooters for combat. (Bloomberg) + Palantir is also getting access to sensitive UK financial regulation data. (Guardian) + AI is turning the Iran conflict into theater. (MIT Technology Review)  4 Musk plans to build the largest-ever chip factory in Austin Tesla and SpaceX will jointly run the project. (The Verge) + Future AI chips could be built on glass. (MIT Technology Review)  5 OpenAI will show ads to all US users of the free version of ChatGPT  It’s seeking new revenue streams amid skyrocketing computing costs. (Reuters) + The company is also building a fully automated researcher. (MIT Technology Review) + It plans to double its workforce soon. (FT $)  6 New crypto rules are set to do the Trumps a “big favor” Particularly the narrow securities definitions. (Guardian)  7 Tencent has added a version of the OpenClaw agent to WeChat Users of the super app will now be able to use the tool to control their PCs. (SCMP)   8 Reddit is mulling identity verification to vanquish bots It’s considering “something like” Face ID or Touch ID. (Engadget) 

9 People are using AI to find their lost pets Databases for pet reunifications supported their searches. (WP $)  10 Scientists have narrowed down the hunt for aliens to 45 planets The closest is just four light-years from Earth. (404 Media)  Quote of the day  “It doesn’t matter how many people you throw at the problem; we are never going to solve the challenges of war without technology like AI.”  —Alex Miller, the US Army’s CTO, tells Wired why he wants AI in every weapon.  One More Thing  STEPHANIE ARNETT/MITTR | GETTY A brain implant changed her life. Then it was removed against her will.  Sticking an electrode inside a person’s brain can do more than treat a disease. Take the case of Rita Leggett, an Australian woman whose experimental brain implant changed her sense of agency and self. She told researchers that she “became one” with her device.  She was devastated when, two years later, she was told she had to remove the implant because the company that made it had gone bust.   Her case highlights the need for a new category of legal protection: neuro rights. Find out how they could be protected.  —Jessica Hamzelou  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + Looking for a good view? Earth’s longest line of sight has been empirically proven. + A biblical endorsement of sin is a welcome reminder that we all make typos. + Richard Nadler’s illustrations of vertical societies are exquisitely detailed. + This 1978 BBC film evocatively exposes our tendency to stress over tech-dependency. 

Read More »

Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent

Read More »

Energy Department Begins Delivering SPR Barrels at Record Speeds

WASHINGTON — The U.S. Department of Energy (DOE) today announced the award of contracts for the initial phase of the Strategic Petroleum Reserve (SPR) Emergency Exchange as directed by President Trump. The first oil shipments began today—just nine days after President Trump and the Department of Energy announced the United States would lead a coordinated release of emergency oil reserves among International Energy Agency (IEA) member nations to address short-term supply disruptions. Under these initial awards, DOE will move forward with an exchange of 45.2 million barrels of crude oil and receive 55 million barrels in return, all at no cost to the taxpayer. This represents the first tranche of the United States’ 172-million-barrel release. Companies will receive 10 million barrels from the Bayou Choctaw SPR site, 15.7 million barrels from Bryan Mound, and 19.5 million barrels from West Hackberry. “Thanks to President Trump, the Energy Department began this first exchange at record speeds to address short-term supply disruptions while also strengthening the Strategic Petroleum Reserve by returning additional barrels at no cost to taxpayers,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “This exchange not only maintains reliability in the current market but will generate hundreds of millions of dollars in value in the form of additional barrels for the American people when the barrels are returned.” This initial action will ultimately add close to 10 million barrels to the SPR’s inventory when the barrels are returned. Taxpayers will benefit from both the short-term support for global supply and long-term growth of the SPR’s inventory. This helps protects U.S. and global energy security. The Trump Administration continues to pursue additional opportunities to strengthen the reserve and restore its long-term readiness as a cornerstone of American energy security. For more information on the Strategic Petroleum Reserve and DOE’s

Read More »

Then & Now: Oil prices, US shale, offshore, and AI—Deborah Byers on what changed since 2017

In this Then & Now episode of the Oil & Gas Journal ReEnterprised podcast, Managing Editor and Content Strategist Mikaila Adams reconnects with Deborah Byers, nonresident fellow at Rice University’s Baker Institute Center for Energy Studies and former EY Americas industry leader, to revisit a set of questions first posed in 2017. In 2017, the industry was emerging from a downturn and recalibrating strategy; today, it faces heightened geopolitical risk, market volatility, and a rapidly evolving technology landscape. The conversation examines how those earlier perspectives have aged—covering oil price bands and the speed of recovery from geopolitical shocks, the role of US shale relative to OPEC in balancing global supply, and the shift from scarcity to economic abundance driven by technology and capital discipline. Adams and Byers also compare the economics and risk profiles of shale and offshore development, including the growing role of Brazil, Guyana, and the Gulf of Mexico, and discuss how infrastructure and regulatory constraints shape market outcomes. The episode further explores where digital transformation—particularly artificial intelligence—is delivering tangible returns across upstream operations, from predictive maintenance and workforce planning to capital project execution. The discussion concludes with insights on consolidation and scale in the Permian basin, the strategic rationale behind recent megamergers, and the industry’s ongoing challenge to attract and retain next‑generation talent through flexibility, technical opportunity, and purpose‑driven work.

Read More »

Eni plans tieback of new gas discoveries offshore Libya

Eni North Africa, a unit of Eni SPA, together with Libya’s National Oil Corp., plans to develop two new gas discoveries offshore Libya as tiebacks to existing infrastructure. The gas discoveries were made offshore Libya, about 85 km off the coast in about 650 ft of water. Bahr Essalam South 2 (BESS 2) and Bahr Essalam South 3 (BESS 3), adjacent geological structures, were successfully drilled through the exploration well C1-16/4 and the appraisal well B2-16/4 about 16 km south of Bahr Essalam gas field, which lies about 110 km from the Tripoli coast. Gas-bearing intervals were encountered in both wells within the Metlaoui formation, the main productive reservoir of the area. The acquired data indicate the presence of a high-quality reservoir, with productive capacity confirmed by the well test already carried out on the first well. Preliminary volumetric estimates indicate that the BESS 2 and BESS 3 structures jointly contain more than 1 tcf of gas in place. Their proximity to Bahr Essalam field will enable rapid development through tie-back, the operator said. The gas produced will be supplied to the Libyan domestic market and for export to Italy. Bahr Essalam produces through the Sabratha platform to the Mellitah onshore treatment plant.

Read More »

Azule Energy launches first non-associated gas production offshore Angola

Azule Energy has started natural gas production from the New Gas Consortium (NGC)’s Quiluma shallow water field offshore Angola. Start-up of the gas delivery from Quiluma field follows the November 2025 introduction of gas into the onshore gas plant, marking the beginning of production operations. The initial gas export will be 150 MMscfd and will ramp up to 330 MMscfd by yearend, the operator said in a release Mar. 13.  In a separate release Mar. 17, NGC partner TotalEnergies said the startup marks the first development of a non-associated gas field in Angola, noting that the gas produced “will be a stable and important source of gas supply for the Angola LNG plant that is delivering LNG to both the European and Asian markets.” The non-associated gas of NGC Phase 1 will come from Quiluma and Maboqueiro shallow water fields with additional potential related to gas from Blocks 2, 3, and 15/14 areas. An onshore plant will process gas from the fields and connect to the Angola LNG plant, aimed at a reliable feedstock supply to the plant, sited near Soyo in the Zaire province in north Angola. The plant holds a capacity of 400MMscfd of gas and 20,000 b/d of condensates. Azule Energy, a 50-50 joint venture between bp and Eni, is operator of NGC project with 37.4% interest. Partners are TotalEnergies (11.8%), Cabinda Gulf Oil Co., a subsidiary of Chevron (31%), and Sonangol E&P (19.8%).

Read More »

Equinor eyes Barents Sea oil province expansion with potential oil discovery tieback

Equinor Energy AS and partners will consider a tie back of a new oil discovery to Johan Castberg field in the Barents Sea, 220 km northwest of Hammerfest. Preliminary discovery volume estimates at the in the Polynya Tubåen prospect are 2.3–3.8 million std cu m of recoverable oil equivalent (14–24 MMboe). Wildcat well 7220/7-5, the 17th exploration well in production license 532, was drilled about 16 km southwest of discovery well 7220/8-1 well by the COSL Prospector rig in 361 m of water, according to the Norwegian Offshore Directorate. The well was drilled to a vertical depth of 1,119 m subsea. It was terminated in the Fruholmen formation from the Upper Triassic. The objective was to prove petroleum in Lower Jurassic reservoir rocks in the Tubåen formation. The well encountered a 26-m gas column and a 26-m oil column in the Tubåen formation in reservoir rocks totaling 39 m, with good to very good reservoir quality. The total thickness in the Tubåen formation is 125 m. The gas-oil contact was encountered at 972 m subsea, and the oil-water contact was encountered at 998 m subsea. The well was not formation-tested, but extensive volumes of data and samples were collected. It will now be permanently plugged. ‘New’ Barents Sea oil province The discovery comes as Equinor aims to increase volumes in the Johan Castberg area—originally estimated at 500–700 million bbl—by an additional 200–500 million bbl, with plans to drill 1-2 exploration wells per year in the region, Equinor said. “With Johan Castberg, we opened a new oil province in the Barents Sea one year ago. It is encouraging that we are now making new discoveries in the area,” said Grete Birgitte Haaland, area director for Exploration and Production North at Equinor. Production at Johan Castberg began in 2025.  In June 2025, the Drivis

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBREThe Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029 blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © ShutterstockAberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices; which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks; preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025 ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups, specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. ● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

The hardest question to answer about AI-fueled delusions

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. I was originally going to write this week’s newsletter about AI and Iran, particularly the news we broke last Tuesday that the Pentagon is making plans for AI companies to train on classified data. AI models have already been used to answer questions in classified settings but don’t currently learn from the data they see. That’s expected to change, I reported, and new security risks will result. Read that story for more.  But on Thursday I came across new research that deserves your attention: A group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.  There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.
The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually. Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.
Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations.  And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases. But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI? “It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: One conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, having recalled that the person previously mentioned having wished to become a mathematician, immediately supported the theory, even though it was nonsense. The situation spiraled from there. Delusions, Mehta says, tend to be “a complex network that unfolds over a long period of time.” He’s conducting follow-up research aiming to find whether delusional messages from chatbots or those from people are more likely to lead to harmful outcomes. The reason I see this as one of the most pressing questions in AI is that massive legal cases currently set to go to trial will shape whether AI companies are held accountable for these sorts of dangerous interactions. The companies, I presume, will argue that humans come into their conversations with AI with delusions in hand and may have been unstable before they ever spoke to a chatbot. Mehta’s initial findings, though, support the idea that chatbots have a unique ability to turn a benign delusion-like thought into the source of a dangerous obsession. Chatbots act as a conversational partner that’s always available and programmed to cheer you on, and unlike a friend, they have little ability to know if your AI conversations are starting to interrupt your real life. More research is still needed, and let’s remember the environment we’re in: AI deregulation is being pursued by President Trump, and states aiming to pass laws that hold AI companies accountable for this sort of harm are being threatened with legal action by the White House. This type of research into AI delusions is hard enough to do as it is, with limited access to data and a minefield of ethical concerns. But we need more of it, and a tech culture interested in learning from it, if we have any hope of making AI safer to interact with.

Read More »

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The Bay Area’s animal welfare movement wants to recruit AI  In early February, animal welfare advocates and AI researchers arrived in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. They gathered to discuss a provocative idea: if artificial general intelligence is on the horizon, could it prevent animal suffering?  Some brainstormed using custom agents in advocacy work, while others pitched cultivating meat with AI tools. But the real talk of the event was a flood of funding they expect will soon flow to animal welfare charities, not from individual megadonors, but from AI lab employees.    Some attendees also probed an even more controversial idea: AI may develop the capacity to suffer—and this could constitute a moral catastrophe. Read the full story to find out why their ideas are gaining momentum and sparking controversy. 
—Michelle Kim & Grace Huckins  The must-reads 
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 The White House has unveiled its AI policy blueprint Trump wants Congress to codify the light-touch framework into law. (Politico) + He also wants to block state limits on AI. (WP $)  + A backlash against the tech has formed within MAGA. (FT $) + A war over AI regulation is brewing in the US. (MIT Technology Review)  2 Elon Musk has been found liable for misleading Twitter investors A jury ruled that he defrauded shareholders ahead of the $44 billion acquisition. (CNBC) + But it absolved him of some fraud allegations. (NPR)  3 The Pentagon is adopting Palantir AI as the core US military system The move locks in long-term use of Palantir’s weapons-targeting tech. (Reuters) + The DoD wants it to link up sensors and shooters for combat. (Bloomberg) + Palantir is also getting access to sensitive UK financial regulation data. (Guardian) + AI is turning the Iran conflict into theater. (MIT Technology Review)  4 Musk plans to build the largest-ever chip factory in Austin Tesla and SpaceX will jointly run the project. (The Verge) + Future AI chips could be built on glass. (MIT Technology Review)  5 OpenAI will show ads to all US users of the free version of ChatGPT  It’s seeking new revenue streams amid skyrocketing computing costs. (Reuters) + The company is also building a fully automated researcher. (MIT Technology Review) + It plans to double its workforce soon. (FT $)  6 New crypto rules are set to do the Trumps a “big favor” Particularly the narrow securities definitions. (Guardian)  7 Tencent has added a version of the OpenClaw agent to WeChat Users of the super app will now be able to use the tool to control their PCs. (SCMP)   8 Reddit is mulling identity verification to vanquish bots It’s considering “something like” Face ID or Touch ID. (Engadget) 

9 People are using AI to find their lost pets Databases for pet reunifications supported their searches. (WP $)  10 Scientists have narrowed down the hunt for aliens to 45 planets The closest is just four light-years from Earth. (404 Media)  Quote of the day  “It doesn’t matter how many people you throw at the problem; we are never going to solve the challenges of war without technology like AI.”  —Alex Miller, the US Army’s CTO, tells Wired why he wants AI in every weapon.  One More Thing  STEPHANIE ARNETT/MITTR | GETTY A brain implant changed her life. Then it was removed against her will.  Sticking an electrode inside a person’s brain can do more than treat a disease. Take the case of Rita Leggett, an Australian woman whose experimental brain implant changed her sense of agency and self. She told researchers that she “became one” with her device.  She was devastated when, two years later, she was told she had to remove the implant because the company that made it had gone bust.   Her case highlights the need for a new category of legal protection: neuro rights. Find out how they could be protected.  —Jessica Hamzelou  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + Looking for a good view? Earth’s longest line of sight has been empirically proven. + A biblical endorsement of sin is a welcome reminder that we all make typos. + Richard Nadler’s illustrations of vertical societies are exquisitely detailed. + This 1978 BBC film evocatively exposes our tendency to stress over tech-dependency. 

Read More »

The Bay Area’s animal welfare movement wants to recruit AI

In early February, animal welfare advocates and AI researchers gathered in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. Yellow and red canopies billowed overhead, Persian rugs blanketed the floor, and mosaic lamps glowed beside potted plants.  In the common area, a wildlife advocate spoke passionately to a crowd lounging in beanbags about a form of rodent birth control that could manage rat populations without poison. In the “Crustacean Room,” a dozen people sat in a circle, debating whether the sentience of insects could tell us anything about the inner lives of chatbots. In front of the “Bovine Room” stood a bookshelf stacked with copies of Eliezer Yudkowsky’s If Anyone Builds It, Everyone Dies, a manifesto arguing that AI could wipe out humanity.  The event was hosted by Sentient Futures, an organization that believes the future of animal welfare will depend on AI. Like many Bay Area denizens, the attendees were decidedly “AGI-pilled”—they believe that artificial general intelligence, powerful AI that can compete with humans on most cognitive tasks, is on the horizon. If that’s true, they reason, then AI will likely prove key to solving society’s thorniest problems—including animal suffering. To be clear, experts still fiercely debate whether today’s AI systems will ever achieve human- or superhuman-level intelligence, and it’s not clear what will happen if they do. But some conference attendees envision a possible future in which it is AI systems, and not humans, who call the shots. Eventually, they think, the welfare of animals could hinge on whether we’ve trained AI systems to value animal lives. 
“AI is going to be very transformative, and it’s going to pretty much flip the game board,” said Constance Li, founder of Sentient Futures. “If you think that AI will make the majority of decisions, then it matters how they value animals and other sentient beings”—those that can feel and, therefore, suffer. Like Li, many summit attendees have been committed to animal welfare since long before AI came into the picture. But they’re not the types to donate a hundred bucks to an animal shelter. Instead of focusing on local actions, they prioritize larger-scale solutions, such as reducing factory farming by promoting cultivated meat, which is grown in a lab from animal cells. 
The Bay Area animal welfare movement is closely linked to effective altruism, a philanthropic movement committed to maximizing the amount of good one does in the world—indeed, many conference attendees work for organizations funded by effective altruists. That philosophy might sound great on paper, but “maximizing good” is a tricky puzzle that might not admit a clear solution. The movement has been widely criticized for some of its conclusions, such as promoting working in exploitative industries to maximize charitable donations and ignoring present-day harms in favor of  issues that could cause suffering for a large number of people who haven’t been born yet. Critics also argue that effective altruists neglect the importance of systemic issues such as racism and economic exploitation and overlook the insights that marginalized communities might have into the best ways to improve their own lives. When it comes to animal welfare, this exactingly utilitarian approach can lead to some strange conclusions. For example, some effective altruists say it makes sense to commit significant resources to improving the welfare of insects and shrimp because they exist in such staggering numbers, even though they may not have much individual capacity for suffering.  Now the movement is sorting out how AI fits in. At the summit, Jasmine Brazilek, cofounder of a nonprofit called Compassion in Machine Learning, opened her sticker-stamped laptop to pull up a benchmark she devised to measure how LLMs reason about animal welfare. A cloud security engineer turned animal advocate, she’d flown in from La Paz, Mexico, where she runs her nonprofit with a handful of volunteers and a shoestring budget.  Brazilek urged the AI researchers in the room to train their models with synthetic documents that reflect concern for animal welfare. “Hopefully, future superintelligent systems consider nonhuman interest, and there is a world where AI amplifies the best of human values and not the worst,” she said.  The power of the purse  The technologically inclined side of the animal welfare movement has faced some major setbacks in recent years. Dreams of transitioning people away from a diet dependent on factory farming have been dampened by developments such as the decimation of the plant-based-meat company Beyond Meat’s stock price and the passage of laws banning cultivated meat in several US states. AI has injected a shot of optimism. Like much of Silicon Valley, many attendees at the summit subscribe to the idea that AI might dramatically increase their productivity—though their goal is not to maximize their seed round but, rather, to prevent as much animal suffering as possible. Some brainstormed how to use Claude Code and custom agents to handle the coding and administrative tasks in their advocacy work. Others pitched the idea of developing new, cheaper methods for cultivating meat using scientific AI tools such as AlphaFold, which aids in molecular biology research by predicting the three-dimensional structures of proteins. But the real talk of the event was a flood of funding that advocates expect will soon be committed to animal welfare charities—not by individual megadonors, but by AI lab employees.  Much of the funding for the farm animal welfare movement, which includes nonprofits advocating for improved conditions on farms, promoting veganism, and endorsing cultivated meat, comes from people in the tech industry, says Lewis Bollard, the managing director of the farm animal welfare fund at Coefficient Giving, a philanthropic funder that used to be called Open Philanthropy. Coefficient Giving is backed by Facebook cofounder Dustin Moskovitz and his wife, Cari Tuna, who are among a handful of Silicon Valley billionaires who embrace effective altruism

“This has just been an area that was completely neglected by traditional philanthropies,” such as the Gates Foundation and the Ford Foundation, Bollard says. “It’s primarily been people in tech who have been open to [it].” The next generation of big donors, Bollard expects, will be AI researchers—particularly those who work at Anthropic, the AI lab behind the chatbot Claude. Anthropic’s founding team also has connections to the effective altruism movement, and the company has a generous donation matching program. In February, Anthropic’s valuation reached $380 billion and it gave employees the option to cash in on their equity, so some of that money could soon be flowing into charitable coffers. The prospect of new funding sustained a constant buzz of conversation at the summit. Animal welfare advocates huddled in the “Arthropod Room” and scrawled big dollar figures and catchy acronyms for projects on a whiteboard. One person pitched a $100 million animal super PAC that would place staffers with Congress members and lobby for animal welfare legislation. Some wanted to start a media company that creates AI-generated content on TikTok promoting veganism. Others spoke about placing animal advocates inside AI labs. “The amount of new funding does give us more confidence to be bolder about things,” said Aaron Boddy, cofounder of the Shrimp Welfare Project, an organization that aims to reduce the suffering of farmed shrimp through humane slaughter, among other initiatives.  The question of AI welfare But animal welfare was only half the focus of the Sentient Futures summit. Some attendees probed far headier territory. They took seriously the controversial idea that AI systems might one day develop the capacity to feel and therefore suffer, and they worry that this future AI suffering, if ignored, could constitute a moral catastrophe. AI suffering is a tricky research problem, not least because scientists don’t yet have a solid grip on why humans and other animals are sentient. But at the summit, a niche cadre of philosophers, largely funded by the effective altruism movement, and a handful of freewheeling academics grappled with the question. Some presented their research on using LLMs to evaluate whether other LLMs might be sentient. On Debate Night, attendees argued about whether we should ironically call sentient AI systems “clankers,” a derogatory term for robots from the film Star Wars, asking if the robot slur could shape how we treat a new kind of mind.  “It doesn’t matter if it’s a cow or a pig or an AI, as long as they have the capacity to feel happiness or suffering,” says Li.  In some ways, bringing AI sentience into an animal welfare conference isn’t as strange a move as it might seem. Researchers who work on machine sentience often draw on theories and approaches pioneered in the study of animal sentience, and if you accept that invertebrates likely feel pain and believe that AI systems might soon achieve superhuman intelligence, entertaining the possibility that those systems might also suffer may not be much of a leap.
“Animal welfare advocates are used to going against the grain,” says Derek Shiller, an AI consciousness researcher at the think tank Rethink Priorities, who was once a web developer at the animal advocacy nonprofit Humane League. “They’re more open to being concerned about AI welfare, even though other people think it’s silly.” But outside the niche Bay Area circle, caring about the possibility of AI sentience is a harder sell. Li says she faced pushback from other animal welfare advocates when, inspired by a conference on AI sentience she attended in 2023, she rebranded her farm animal welfare advocacy organization as Sentient Futures last year. “Many people were extremely confident that AIs would never become sentient and [argued that] by investing any energy or money into AI welfare, we’re just burning money and throwing it away,” she says. Matt Dominguez, executive director of Compassion in World Farming, echoed the concern. “I would hate to see people pulling money out of farm animal welfare or animal welfare and moving it into something that is hypothetical at this particular moment,” he says. Still, Dominguez, who started partnering with the Shrimp Welfare Project after learning about invertebrate suffering, believes compassion is expansive. “When we get someone to care about one of those things, it creates capacity for their circle of compassion to grow to include others,” he says.

Read More »

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Plus: OpenAI is also creating a “super app.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. OpenAI is throwing everything into building a fully automated researcher  OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.   By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.  In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered. 
—Will Douglas Heaven  Mind-altering substances are (still) falling short in clinical trials  Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.  
For me, they show just how overhyped these substances have become. Find out why here.  —Jessica Hamzelou  This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.  Read more: What do psychedelic drugs do to our brains? AI could help us find out  The must-reads  I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 OpenAI is building a “super app”  It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge) + It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica) + The moves come amid a cutback on side projects. (WSJ $) + OpenAI has lost ground to Anthropic in the enterprise market. (Axios)  2 The US has charged Super Micro’s co-founder with smuggling AI tech to China  Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)  + GenAI is learning to spy for the US military. (MIT Technology Review) + The compute competition is shaping the China-US rivalry. (Politico) 

3 The DoJ has taken down botnets behind the largest-ever DDoS attack They had infected more than 3 million devices. (Wired $) + The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)  4 The Pentagon says Anthropic’s foreign workers are a security risk It cited Chinese employees as a particular concern. (Axios) + Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)  5 High oil prices could wreck the AI boom, the WTO has warned Fears are growing of a prolonged energy shock. (The Guardian) + We did the math on AI’s energy footprint. (MIT Technology Review)  6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing The funds would buy manufacturing firms and infuse them with AI. (WSJ $) + Here’s how to fine-tune AI for prosperity. (MIT Technology Review)  7 Signal’s creator is helping to encrypt Meta’s AI  Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $) + Meta is also ditching human moderators for AI again. (CNBC) + AI is making online crimes easier. (MIT Technology Review)  8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation That’s double its valuation from December. (Bloomberg $) + Arizona’s AG has charged the company with “illegal gambling.” (NPR)  9 Meta isn’t killing Horizon Worlds for VR after all It’s canceled plans to dump the metaverse app (for now). (CNBC)  10 A US startup is recruiting an “AI bully”  The successful candidate must test the patience of leading chatbots. (The Guardian) 
Quote of the day  “Imagine a sports bar… but just for situation monitoring — live X feeds, flight radar, Bloomberg terminals, and Polymarket screens.”  —Kalshi rival Polymarket unveils its hellish vision for a new bar. 
One More Thing  SELMAN DESIGN How gamification took over the world  It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?  For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.  Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.  —Bryan Gardiner  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + In a landmark legal win for trolling, Afroman has won his diss track case against the police. + This LEGO artist remixes standard sets into completely different iconic objects. + Ease your search for aliens with these interactive estimates of advanced civilizations.  + A rare superbloom in Death Valley has been caught on camera. 

Read More »

OpenAI is throwing everything into building a fully automated researcher

EXECUTIVE SUMMARY OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its “north star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability. There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with. Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code or whiteboard scribbles—which covers a lot. OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.   
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist. Alongside chief research officer Mark Chen, Pachocki is one of two people responsible for setting the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.  In an exclusive interview this week, Pachocki talked me through OpenAI’s new grand challenge. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there. In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. OpenAI claims that most of its technical staff now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.” The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks that would take a person a few days,” says Pachocki. “There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?” For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models working for longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.  So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better. But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks. The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician, we have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”

“We are much more focused now on research that’s relevant in the real world,” he adds. Right now that means taking what Codex (and tools like it) can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem. The line always goes up It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry and physics puzzles.    “Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says. But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says. He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago himself. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.) But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says. “I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do, I mean that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.   Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors. “If you have to chain tasks together then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.  Serious unanswered questions I ask Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time. “If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world, that’s a big thing,” he says. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?” The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions. The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratchpad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in-house to study Codex.  “Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki. The idea would be to monitor an AI researcher’s scratchpads using other LLMs and catch unwanted behavior before it’s a problem, rather than stop that bad behavior from happening in the first place. LLMs are not understood well enough to control them fully. “I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes cut off from anything they could break or use to cause harm. 
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.  “It’s going to be a very weird thing, it’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organisations, would now be done by a couple of people.” “I think this is a big challenge for governments to figure out,” he adds. And yet some people would say governments were part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky. I push Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policy makers.”

Read More »

Mind-altering substances are (still) falling short in clinical trials

This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive. Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become. To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.” The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge. In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug. The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write. Not great news so far. The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant. The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone. “When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.” His study highlights another problem, too.

In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points. When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo. But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK. But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will. The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says. This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti. If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti. So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees. It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation. Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says. “At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.” Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential. “These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent

Read More »

Palo Alto updates security platform to discover AI agents

Recently, he said, there have been news reports that AI agents created by firms caused hacks within their own companies. He didn’t cite specific examples, but last week Meta said there had been a severe internal security breach after an autonomous AI agent exposed sensitive company and user data to unauthorized employees for two hours. In the future, if agents in the enterprise are more than a fad, Arora said, “there will be millions of agents traversing enterprise architectures, trying to execute on their behalf — both agents delegated by people like you and me, and autonomously. I can’t imagine meeting a CEO in the last three months who does not have some aspiration to start having agents effectively doing tasks within the enterprise. It’s slow going, but the intention is there. And I can see many system integrators and consultants out there advocating and helping customers with that migration.” But, he added, there are risks. To meet them, Prisma AIRS 3.0 will allow admins to safely deploy AI applications, he said. To increase visibility, the platform will identify agents running in cloud environments, on SaaS platforms and locally on endpoints. A capability called Agent Artifact Security maps out an agent’s architecture and scans for vulnerabilities, and another capability called AI Red Teaming for Agents simulates context-aware agentic attacks, discovers AI-related vulnerabilities, and recommends runtime security policies. Prisma Browser To also improve AI security, Palo Alto Networks released a new version of Prisma Browser for enterprise end users, with expanded capabilities allowing employees to use any LLM they choose. The new version of the browser is able to discover user-generated AI activity and enforce content-aware boundaries to keep agents within their intended scope. The browser also prevents sensitive data from leaking to unmanaged or public AI tools during automated tasks,

Read More »

Cisco goes all in on agentic AI security

Other new ES features include: Detection Studio: A unified workspace for detection engineers to plan, develop, test, deploy, and monitor detections. By mapping coverage against the MITRE ATT&CK framework, teams can identify data gaps and validate detection quality in real time. Another new instrument, Malware Threat Reversing Agent, gives customers insight into malware threats, providing summaries and step-by-step breakdowns of malicious scripts. Federated Search: Lets SecOps teams gain comprehensive visibility across distributed data sources, according to Cisco. Exposure Analytics: Automatically discovers assets and users across the environment. By leveraging data already being ingested, it provides a “Security Truth Layer” without the need for additional agents or tools, Cisco stated. Cisco DefenseClaw Cisco is also releasing an open-source secure agent framework called DefenseClaw that lets users define policy-based security, network, and privacy guardrails for Nvidia’s recently released OpenShell and OpenClaw agentic environments.  DefenseClaw scans everything before it runs, according to DJ Sampath, senior vice president of Cisco’s AI software and platform group.  “Every skill, every tool, every plugin, before it’s allowed into your claw environment and every piece of code generated by the claw gets scanned. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator. The scan engine includes five tools: skill-scanner, mcp-scanner, a2a-scanner, CodeGuard static analysis, and an AI bill-of-materials generator,” Sampath wrote in a blog post about the news.  DefenseClaw also detects threats at runtime, not just at the gate, Sampath stated. “Claws are self-evolving systems. A skill that was clean on Tuesday can start exfiltrating data on Thursday. DefenseClaw doesn’t assume what passed admission stays safe — a content scanner inspects every message flowing in and out of the agent at the execution loop itself,” Sampath wrote.  And thirdly, DefenseClaw enforces block and allow lists. “When you block a skill, its sandbox permissions are revoked, its files are quarantined, and the agent gets an error if it tries to invoke it. When you block an MCP server, the endpoint

Read More »

Cisco Talos 2025 year in review and lessons learned

By compromising an ADC or a VPN, an attacker doesn’t just break in—they become a trusted user. This allows them to bypass Multi-Factor Authentication (MFA), steal session tokens, and move laterally across the entire network undetected. Compounding this risk is the fact that nearly 40% of top-targeted vulnerabilities in 2025 impacted end-of-life (EOL) devices that can no longer be patched. The siege on MFA and identity The report highlights a staggering 178% surge in device compromise attacks, where attackers register their own hardware as a trusted factor in a victim’s MFA account. Social engineering dominates: Attackers are finding it easier to target the person who holds the key rather than the lock itself. Voice phishing (vishing) aimed at IT administrators was three times more common than user-managed registration fraud. Industry-specific tactics: The Technology sector faced frequent MFA spray attacks due to its standardized infrastructure, while Higher Education was plagued by device compromise due to its diverse, unmanaged, and messy device environment. Manufacturing under pressure: This sector remained the #1 target for ransomware because of its low tolerance for downtime and complex hybrid (IT/OT) environments. Geopolitical tensions directly fueled cyber activity in 2025: China-Nexus: Investigations into Chinese state-sponsored activity rose by 74%. These groups demonstrated extraordinary speed, weaponizing the ToolShell zero-day (SharePoint) instantaneously after disclosure. Russia: Activity was highly correlated with the war in Ukraine and the announcement of international sanctions. Groups like Static Tundra continued to successfully exploit vulnerabilities that were five to seven years old in networking software. North Korea: Beyond record-breaking cryptocurrency thefts ($1.5 billion in a single heist), they successfully placed fake IT workers within Fortune 500 companies using AI-generated personas. The agentic shift: AI as a dual-edged sword As we move into 2026, we are witnessing an agentic shift in AI. In 2025, AI was used

Read More »

The hardest question to answer about AI-fueled delusions

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. I was originally going to write this week’s newsletter about AI and Iran, particularly the news we broke last Tuesday that the Pentagon is making plans for AI companies to train on classified data. AI models have already been used to answer questions in classified settings but don’t currently learn from the data they see. That’s expected to change, I reported, and new security risks will result. Read that story for more.  But on Thursday I came across new research that deserves your attention: A group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.  There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.
The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually. Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.
Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations.  And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases. But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI? “It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: One conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, having recalled that the person previously mentioned having wished to become a mathematician, immediately supported the theory, even though it was nonsense. The situation spiraled from there. Delusions, Mehta says, tend to be “a complex network that unfolds over a long period of time.” He’s conducting follow-up research aiming to find whether delusional messages from chatbots or those from people are more likely to lead to harmful outcomes. The reason I see this as one of the most pressing questions in AI is that massive legal cases currently set to go to trial will shape whether AI companies are held accountable for these sorts of dangerous interactions. The companies, I presume, will argue that humans come into their conversations with AI with delusions in hand and may have been unstable before they ever spoke to a chatbot. Mehta’s initial findings, though, support the idea that chatbots have a unique ability to turn a benign delusion-like thought into the source of a dangerous obsession. Chatbots act as a conversational partner that’s always available and programmed to cheer you on, and unlike a friend, they have little ability to know if your AI conversations are starting to interrupt your real life. More research is still needed, and let’s remember the environment we’re in: AI deregulation is being pursued by President Trump, and states aiming to pass laws that hold AI companies accountable for this sort of harm are being threatened with legal action by the White House. This type of research into AI delusions is hard enough to do as it is, with limited access to data and a minefield of ethical concerns. But we need more of it, and a tech culture interested in learning from it, if we have any hope of making AI safer to interact with.

Read More »

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The Bay Area’s animal welfare movement wants to recruit AI  In early February, animal welfare advocates and AI researchers arrived in stocking feet at Mox, a scrappy, shoes-free coworking space in San Francisco. They gathered to discuss a provocative idea: if artificial general intelligence is on the horizon, could it prevent animal suffering?  Some brainstormed using custom agents in advocacy work, while others pitched cultivating meat with AI tools. But the real talk of the event was a flood of funding they expect will soon flow to animal welfare charities, not from individual megadonors, but from AI lab employees.    Some attendees also probed an even more controversial idea: AI may develop the capacity to suffer—and this could constitute a moral catastrophe. Read the full story to find out why their ideas are gaining momentum and sparking controversy. 
—Michelle Kim & Grace Huckins  The must-reads 
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.  1 The White House has unveiled its AI policy blueprint Trump wants Congress to codify the light-touch framework into law. (Politico) + He also wants to block state limits on AI. (WP $)  + A backlash against the tech has formed within MAGA. (FT $) + A war over AI regulation is brewing in the US. (MIT Technology Review)  2 Elon Musk has been found liable for misleading Twitter investors A jury ruled that he defrauded shareholders ahead of the $44 billion acquisition. (CNBC) + But it absolved him of some fraud allegations. (NPR)  3 The Pentagon is adopting Palantir AI as the core US military system The move locks in long-term use of Palantir’s weapons-targeting tech. (Reuters) + The DoD wants it to link up sensors and shooters for combat. (Bloomberg) + Palantir is also getting access to sensitive UK financial regulation data. (Guardian) + AI is turning the Iran conflict into theater. (MIT Technology Review)  4 Musk plans to build the largest-ever chip factory in Austin Tesla and SpaceX will jointly run the project. (The Verge) + Future AI chips could be built on glass. (MIT Technology Review)  5 OpenAI will show ads to all US users of the free version of ChatGPT  It’s seeking new revenue streams amid skyrocketing computing costs. (Reuters) + The company is also building a fully automated researcher. (MIT Technology Review) + It plans to double its workforce soon. (FT $)  6 New crypto rules are set to do the Trumps a “big favor” Particularly the narrow securities definitions. (Guardian)  7 Tencent has added a version of the OpenClaw agent to WeChat Users of the super app will now be able to use the tool to control their PCs. (SCMP)   8 Reddit is mulling identity verification to vanquish bots It’s considering “something like” Face ID or Touch ID. (Engadget) 

9 People are using AI to find their lost pets Databases for pet reunifications supported their searches. (WP $)  10 Scientists have narrowed down the hunt for aliens to 45 planets The closest is just four light-years from Earth. (404 Media)  Quote of the day  “It doesn’t matter how many people you throw at the problem; we are never going to solve the challenges of war without technology like AI.”  —Alex Miller, the US Army’s CTO, tells Wired why he wants AI in every weapon.  One More Thing  STEPHANIE ARNETT/MITTR | GETTY A brain implant changed her life. Then it was removed against her will.  Sticking an electrode inside a person’s brain can do more than treat a disease. Take the case of Rita Leggett, an Australian woman whose experimental brain implant changed her sense of agency and self. She told researchers that she “became one” with her device.  She was devastated when, two years later, she was told she had to remove the implant because the company that made it had gone bust.   Her case highlights the need for a new category of legal protection: neuro rights. Find out how they could be protected.  —Jessica Hamzelou  We can still have nice things  A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)  + Looking for a good view? Earth’s longest line of sight has been empirically proven. + A biblical endorsement of sin is a welcome reminder that we all make typos. + Richard Nadler’s illustrations of vertical societies are exquisitely detailed. + This 1978 BBC film evocatively exposes our tendency to stress over tech-dependency. 

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE