Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Shape
Discover What Matter Most to You

Featured Articles

Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution

WASHINGTON—President Trump today issued an Executive Order to launch the Genesis Mission, a historic national effort led by the Department of Energy. The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness.   The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.   “Throughout history, from the Manhattan Project to the Apollo mission, our nation’s brightest minds and industries have answered the call when their nation needed them,” said U.S. Secretary of Energy Chris Wright. “Today, the United States is calling on them once again. Under President Trump’s leadership, the Genesis Mission will unleash the full power of our National Laboratories, supercomputers, and dataresources to ensure that America is the global leader in artificial intelligence and to usher in a new golden era of American discovery.”  The announcement builds on President Trump’s Executive Order Removing Barriers to American Leadership In Artificial Intelligence and advances his America’s AI Action Plan released earlier this year—a directive to remove barriers to innovation, reduce dependence on foreign adversaries, and unleash the full strength of America’s scientific enterprise.   Secretary Wright has designated Under Secretary for Science Darío Gil to lead the initiative. The Genesis Mission will mobilize the Department of Energy’s 17 National Laboratories, industry, and academia to build an integrated discovery platform.   The platform will connect the world’s best supercomputers, AI systems, and next-generation quantum systems with the most advanced scientific instruments in the nation. Once complete, the platform will be the world’s most complex and powerful scientific instrument ever built. It will draw on the expertise of

Read More »

Oil Closes the Day Up as Equities Rally

Oil pushed higher as equities rose and traders weighed the prospect of a Ukraine-Russia peace deal that could deflate political risk from an already well-supplied market. West Texas Intermediate rose about 1.3% to settle near $59 per barrel, snapping a three day losing streak as crude ticks up following its biggest weekly loss since early October.  While oil followed other risk assets higher, traders awaited further news after Ukraine and its European allies signaled that key sticking points remained in US-brokered peace talks to end Russia’s invasion, even as senior officials hailed progress in winning more favorable terms for Kyiv. “Something good just may be happening,” President Donald Trump wrote in a Truth Social post about the talks.  An end to the hostiltites would also take some risk premium out of the market. “Oil markets are moving in sympathy with equities and awaiting on more news of the Ukraine/Russia talks” said Dennis Kissler, senior vice president for trading at BOK Financial. He expects continued choppy trading and some short covering into the holiday period.  Crude has slumped this year, with futures on course for a fourth monthly loss in November, in what would be the longest losing run since 2023. The decline has been driven by expanded global output, including from OPEC+, with the International Energy Agency forecasting a record surplus for 2026. Traders are monitoring whether a deal on Ukraine will materialize, and if sanctions on Russia will be lifted — developments that could inject more supply. “We should expect a nervous oil market ahead of Thanksgiving on Thursday,” said Arne Lohmann Rasmussen, chief analyst at A/S Global Risk Management. “Several factors point to a peace agreement or possibly a ceasefire moving closer over the weekend, which supports further price declines this week.” Ukraine President Volodymyr Zelenskiy said Monday

Read More »

NFL, AWS drive football modernization with cloud, AI

AWS Next Gen Stats: Initially used for player participation tracking (replacing manual photo-taking), Next Gen Stats uses sensors to capture center-of-mass and contact information, which is then used to generate performance insights. Computer vision: Computer vision was initially insufficient, but the technology has improved greatly over the past few years. The NFL has now embraced computer vision, notably using six 8k cameras in every stadium to measure first downs. This replaced the 100-year tradition of using physical sticks connected with a chain to determine first downs. This blended approach of using sensors and computer vision maximizes data capture for complex plays where one source may not be enough. Advanced data use cases: The massive influx of data supports officiating, equipment testing, rule development, player health and safety (e.g., concussion reduction), and team-level strategy/scouting (“Moneyball”). Generative AI: From efficiency to hyper-personalization Very quickly, generative AI has shifted from a “shiny new thing” to a mainstream tool focused on operational efficiency and content maximization. Use cases mentioned include: Data governance: A key internal challenge is the NFL’s disparate data silos (sensor, video, rules, business logic) and applying governance layers so that Gen AI agents (for media, officiating, etc.) can operate consistently and effectively without needing constant re-tooling. Operational efficiency: Gen AI is used to streamline tasks like sifting through policy documents and, notably, in marketing. Campaigns that once took weeks can now iterate hundreds of versions in minutes, offering contextual localization, language translation, and featuring the most relevant players for specific global markets. Content maximization: Gen AI is used to create derivatives of long-form content (e.g., TikTok and Twitter versions) efficiently. There’s also innovation in using data feeds to generate automated commentary and context, creating new, scalable audio/visual experiences. Solving hard-to-solve problems The NFL/AWS partnership is something companies in all industries should

Read More »

ADNOC Keeps $150B Spending in Growth Push

(Update) November 24, 2025, 4:17 PM GMT: Updates with oil production capacity in the last paragraph. Abu Dhabi National Oil Co. will maintain spending at $150 billion over the next five years as it targets growth in production capacity at home and expands internationally. The company’s board approved the capital expenditure plan that’s in line with the previous layout that was announced three years ago. Since then, Abu Dhabi’s biggest oil producer has carved out an international investment business called XRG that is scouring the globe for deals. XRG has boosted its enterprise value to $151 billion from $80 billion since it was set up about a year ago, Adnoc said in a statement. The unit, which this year got stakes in Adnoc’s listed companies with a total market value exceeding $100 billion, aims to become among the world’s top five suppliers of natural gas and petrochemicals, along with the energy needed to meet demand from the AI and tech booms. XRG has also snapped up contracts for liquefied natural gas in the US and Africa, bought into gas fields around the Mediterranean and is in the final stages of a nearly $14 billion takeover of German chemical maker Covestro AG. Still, the company’s biggest effort yet fell apart in September when the firm dropped its planned $19 billion takeover of Australian natural gas producer Santos Ltd. It bounced back with a deal announced this month to explore buying into an LNG project in Argentina. Adnoc’s board, chaired by UAE President and Abu Dhabi ruler Sheikh Mohamed bin Zayed Al Nahyan, reviewed plans to expand oil and gas production capacity. It formed a operating company for the Hail and Ghasha offshore natural gas concession and boosted the project’s production target to 1.8 billion cubic feet per day, from 1.5 billion, by the end of the decade.

Read More »

Saudi’s AlKhorayef Petroleum Said To Prepare For IPO

Saudi Arabia’s AlKhorayef Group has started preparations for a potential listing of its oil and gas services subsidiary, according to people familiar with the matter, adding to the list of companies looking to go public in the kingdom. The group has reached out to firms that could help arrange a possible IPO of AlKhorayef Petroleum, the people said, declining to be identified discussing confidential information. The preparations are at an early stage, and no final decision has been taken on whether to proceed with a transaction, the people said.  Representatives for AlKhorayef Group did not respond to a request for comment. Representatives for the Public Investment Fund, which acquired a 25% stake in Al Khorayef Petroleum in 2023, declined to comment. Saudi Arabia has been the Middle East’s most active IPO market this year, with companies raising nearly $4 billion. Still, performance has been uneven, and only two of the ten largest debuts are currently trading above their offer prices. The kingdom’s benchmark stock index is among the worst-performing in emerging markets, as investors grow wary of prolonged oil price weakness and the potential hit to government spending. The PIF has played a key role in deepening Saudi Arabia’s capital markets by listing portfolio companies. However, it has slowed the pace of share sales, including in firms like Saudi Global Ports, amid softer market conditions, Bloomberg News has reported. Headquartered in Dammam, AlKhorayef Petroleum operates across the Middle East, Africa and Latin America. It is majority-owned by AlKhorayef Group, a conglomerate with businesses spanning industrial services, lubricants and water solutions. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. In this week’s conversation MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots. Eileen Guo writes: Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.  Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 
But tellingly, one area the laws fail to address is user privacy. This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day-routines, innermost thoughts, and questions they might not feel comfortable asking real people. After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.”  Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:  “Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.” This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surf Shark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)  All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.  So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?  Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it matters Melissa Heikkilä replies: Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.  In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.  Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.  This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.  Because people generally like answers that are agreeable, such responses are weighted more heavily in training.  AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.  AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.  By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.  We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.  Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level. We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again. Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.  As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information. Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI. And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.  Further reading  FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.  Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy  In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots. Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

Read More »

Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution

WASHINGTON—President Trump today issued an Executive Order to launch the Genesis Mission, a historic national effort led by the Department of Energy. The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness.   The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.   “Throughout history, from the Manhattan Project to the Apollo mission, our nation’s brightest minds and industries have answered the call when their nation needed them,” said U.S. Secretary of Energy Chris Wright. “Today, the United States is calling on them once again. Under President Trump’s leadership, the Genesis Mission will unleash the full power of our National Laboratories, supercomputers, and dataresources to ensure that America is the global leader in artificial intelligence and to usher in a new golden era of American discovery.”  The announcement builds on President Trump’s Executive Order Removing Barriers to American Leadership In Artificial Intelligence and advances his America’s AI Action Plan released earlier this year—a directive to remove barriers to innovation, reduce dependence on foreign adversaries, and unleash the full strength of America’s scientific enterprise.   Secretary Wright has designated Under Secretary for Science Darío Gil to lead the initiative. The Genesis Mission will mobilize the Department of Energy’s 17 National Laboratories, industry, and academia to build an integrated discovery platform.   The platform will connect the world’s best supercomputers, AI systems, and next-generation quantum systems with the most advanced scientific instruments in the nation. Once complete, the platform will be the world’s most complex and powerful scientific instrument ever built. It will draw on the expertise of

Read More »

Oil Closes the Day Up as Equities Rally

Oil pushed higher as equities rose and traders weighed the prospect of a Ukraine-Russia peace deal that could deflate political risk from an already well-supplied market. West Texas Intermediate rose about 1.3% to settle near $59 per barrel, snapping a three day losing streak as crude ticks up following its biggest weekly loss since early October.  While oil followed other risk assets higher, traders awaited further news after Ukraine and its European allies signaled that key sticking points remained in US-brokered peace talks to end Russia’s invasion, even as senior officials hailed progress in winning more favorable terms for Kyiv. “Something good just may be happening,” President Donald Trump wrote in a Truth Social post about the talks.  An end to the hostiltites would also take some risk premium out of the market. “Oil markets are moving in sympathy with equities and awaiting on more news of the Ukraine/Russia talks” said Dennis Kissler, senior vice president for trading at BOK Financial. He expects continued choppy trading and some short covering into the holiday period.  Crude has slumped this year, with futures on course for a fourth monthly loss in November, in what would be the longest losing run since 2023. The decline has been driven by expanded global output, including from OPEC+, with the International Energy Agency forecasting a record surplus for 2026. Traders are monitoring whether a deal on Ukraine will materialize, and if sanctions on Russia will be lifted — developments that could inject more supply. “We should expect a nervous oil market ahead of Thanksgiving on Thursday,” said Arne Lohmann Rasmussen, chief analyst at A/S Global Risk Management. “Several factors point to a peace agreement or possibly a ceasefire moving closer over the weekend, which supports further price declines this week.” Ukraine President Volodymyr Zelenskiy said Monday

Read More »

NFL, AWS drive football modernization with cloud, AI

AWS Next Gen Stats: Initially used for player participation tracking (replacing manual photo-taking), Next Gen Stats uses sensors to capture center-of-mass and contact information, which is then used to generate performance insights. Computer vision: Computer vision was initially insufficient, but the technology has improved greatly over the past few years. The NFL has now embraced computer vision, notably using six 8k cameras in every stadium to measure first downs. This replaced the 100-year tradition of using physical sticks connected with a chain to determine first downs. This blended approach of using sensors and computer vision maximizes data capture for complex plays where one source may not be enough. Advanced data use cases: The massive influx of data supports officiating, equipment testing, rule development, player health and safety (e.g., concussion reduction), and team-level strategy/scouting (“Moneyball”). Generative AI: From efficiency to hyper-personalization Very quickly, generative AI has shifted from a “shiny new thing” to a mainstream tool focused on operational efficiency and content maximization. Use cases mentioned include: Data governance: A key internal challenge is the NFL’s disparate data silos (sensor, video, rules, business logic) and applying governance layers so that Gen AI agents (for media, officiating, etc.) can operate consistently and effectively without needing constant re-tooling. Operational efficiency: Gen AI is used to streamline tasks like sifting through policy documents and, notably, in marketing. Campaigns that once took weeks can now iterate hundreds of versions in minutes, offering contextual localization, language translation, and featuring the most relevant players for specific global markets. Content maximization: Gen AI is used to create derivatives of long-form content (e.g., TikTok and Twitter versions) efficiently. There’s also innovation in using data feeds to generate automated commentary and context, creating new, scalable audio/visual experiences. Solving hard-to-solve problems The NFL/AWS partnership is something companies in all industries should

Read More »

ADNOC Keeps $150B Spending in Growth Push

(Update) November 24, 2025, 4:17 PM GMT: Updates with oil production capacity in the last paragraph. Abu Dhabi National Oil Co. will maintain spending at $150 billion over the next five years as it targets growth in production capacity at home and expands internationally. The company’s board approved the capital expenditure plan that’s in line with the previous layout that was announced three years ago. Since then, Abu Dhabi’s biggest oil producer has carved out an international investment business called XRG that is scouring the globe for deals. XRG has boosted its enterprise value to $151 billion from $80 billion since it was set up about a year ago, Adnoc said in a statement. The unit, which this year got stakes in Adnoc’s listed companies with a total market value exceeding $100 billion, aims to become among the world’s top five suppliers of natural gas and petrochemicals, along with the energy needed to meet demand from the AI and tech booms. XRG has also snapped up contracts for liquefied natural gas in the US and Africa, bought into gas fields around the Mediterranean and is in the final stages of a nearly $14 billion takeover of German chemical maker Covestro AG. Still, the company’s biggest effort yet fell apart in September when the firm dropped its planned $19 billion takeover of Australian natural gas producer Santos Ltd. It bounced back with a deal announced this month to explore buying into an LNG project in Argentina. Adnoc’s board, chaired by UAE President and Abu Dhabi ruler Sheikh Mohamed bin Zayed Al Nahyan, reviewed plans to expand oil and gas production capacity. It formed a operating company for the Hail and Ghasha offshore natural gas concession and boosted the project’s production target to 1.8 billion cubic feet per day, from 1.5 billion, by the end of the decade.

Read More »

Saudi’s AlKhorayef Petroleum Said To Prepare For IPO

Saudi Arabia’s AlKhorayef Group has started preparations for a potential listing of its oil and gas services subsidiary, according to people familiar with the matter, adding to the list of companies looking to go public in the kingdom. The group has reached out to firms that could help arrange a possible IPO of AlKhorayef Petroleum, the people said, declining to be identified discussing confidential information. The preparations are at an early stage, and no final decision has been taken on whether to proceed with a transaction, the people said.  Representatives for AlKhorayef Group did not respond to a request for comment. Representatives for the Public Investment Fund, which acquired a 25% stake in Al Khorayef Petroleum in 2023, declined to comment. Saudi Arabia has been the Middle East’s most active IPO market this year, with companies raising nearly $4 billion. Still, performance has been uneven, and only two of the ten largest debuts are currently trading above their offer prices. The kingdom’s benchmark stock index is among the worst-performing in emerging markets, as investors grow wary of prolonged oil price weakness and the potential hit to government spending. The PIF has played a key role in deepening Saudi Arabia’s capital markets by listing portfolio companies. However, it has slowed the pace of share sales, including in firms like Saudi Global Ports, amid softer market conditions, Bloomberg News has reported. Headquartered in Dammam, AlKhorayef Petroleum operates across the Middle East, Africa and Latin America. It is majority-owned by AlKhorayef Group, a conglomerate with businesses spanning industrial services, lubricants and water solutions. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. In this week’s conversation MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots. Eileen Guo writes: Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.  Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 
But tellingly, one area the laws fail to address is user privacy. This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day-routines, innermost thoughts, and questions they might not feel comfortable asking real people. After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.”  Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:  “Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.” This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surf Shark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)  All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.  So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?  Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it matters Melissa Heikkilä replies: Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.  In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.  Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.  This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.  Because people generally like answers that are agreeable, such responses are weighted more heavily in training.  AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.  AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.  By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.  We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.  Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level. We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again. Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.  As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information. Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI. And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.  Further reading  FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.  Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy  In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots. Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

Read More »

Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution

WASHINGTON—President Trump today issued an Executive Order to launch the Genesis Mission, a historic national effort led by the Department of Energy. The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness.   The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.   “Throughout history, from the Manhattan Project to the Apollo mission, our nation’s brightest minds and industries have answered the call when their nation needed them,” said U.S. Secretary of Energy Chris Wright. “Today, the United States is calling on them once again. Under President Trump’s leadership, the Genesis Mission will unleash the full power of our National Laboratories, supercomputers, and dataresources to ensure that America is the global leader in artificial intelligence and to usher in a new golden era of American discovery.”  The announcement builds on President Trump’s Executive Order Removing Barriers to American Leadership In Artificial Intelligence and advances his America’s AI Action Plan released earlier this year—a directive to remove barriers to innovation, reduce dependence on foreign adversaries, and unleash the full strength of America’s scientific enterprise.   Secretary Wright has designated Under Secretary for Science Darío Gil to lead the initiative. The Genesis Mission will mobilize the Department of Energy’s 17 National Laboratories, industry, and academia to build an integrated discovery platform.   The platform will connect the world’s best supercomputers, AI systems, and next-generation quantum systems with the most advanced scientific instruments in the nation. Once complete, the platform will be the world’s most complex and powerful scientific instrument ever built. It will draw on the expertise of

Read More »

Oil Closes the Day Up as Equities Rally

Oil pushed higher as equities rose and traders weighed the prospect of a Ukraine-Russia peace deal that could deflate political risk from an already well-supplied market. West Texas Intermediate rose about 1.3% to settle near $59 per barrel, snapping a three day losing streak as crude ticks up following its biggest weekly loss since early October.  While oil followed other risk assets higher, traders awaited further news after Ukraine and its European allies signaled that key sticking points remained in US-brokered peace talks to end Russia’s invasion, even as senior officials hailed progress in winning more favorable terms for Kyiv. “Something good just may be happening,” President Donald Trump wrote in a Truth Social post about the talks.  An end to the hostiltites would also take some risk premium out of the market. “Oil markets are moving in sympathy with equities and awaiting on more news of the Ukraine/Russia talks” said Dennis Kissler, senior vice president for trading at BOK Financial. He expects continued choppy trading and some short covering into the holiday period.  Crude has slumped this year, with futures on course for a fourth monthly loss in November, in what would be the longest losing run since 2023. The decline has been driven by expanded global output, including from OPEC+, with the International Energy Agency forecasting a record surplus for 2026. Traders are monitoring whether a deal on Ukraine will materialize, and if sanctions on Russia will be lifted — developments that could inject more supply. “We should expect a nervous oil market ahead of Thanksgiving on Thursday,” said Arne Lohmann Rasmussen, chief analyst at A/S Global Risk Management. “Several factors point to a peace agreement or possibly a ceasefire moving closer over the weekend, which supports further price declines this week.” Ukraine President Volodymyr Zelenskiy said Monday

Read More »

Saudi’s AlKhorayef Petroleum Said To Prepare For IPO

Saudi Arabia’s AlKhorayef Group has started preparations for a potential listing of its oil and gas services subsidiary, according to people familiar with the matter, adding to the list of companies looking to go public in the kingdom. The group has reached out to firms that could help arrange a possible IPO of AlKhorayef Petroleum, the people said, declining to be identified discussing confidential information. The preparations are at an early stage, and no final decision has been taken on whether to proceed with a transaction, the people said.  Representatives for AlKhorayef Group did not respond to a request for comment. Representatives for the Public Investment Fund, which acquired a 25% stake in Al Khorayef Petroleum in 2023, declined to comment. Saudi Arabia has been the Middle East’s most active IPO market this year, with companies raising nearly $4 billion. Still, performance has been uneven, and only two of the ten largest debuts are currently trading above their offer prices. The kingdom’s benchmark stock index is among the worst-performing in emerging markets, as investors grow wary of prolonged oil price weakness and the potential hit to government spending. The PIF has played a key role in deepening Saudi Arabia’s capital markets by listing portfolio companies. However, it has slowed the pace of share sales, including in firms like Saudi Global Ports, amid softer market conditions, Bloomberg News has reported. Headquartered in Dammam, AlKhorayef Petroleum operates across the Middle East, Africa and Latin America. It is majority-owned by AlKhorayef Group, a conglomerate with businesses spanning industrial services, lubricants and water solutions. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

ADNOC Keeps $150B Spending in Growth Push

(Update) November 24, 2025, 4:17 PM GMT: Updates with oil production capacity in the last paragraph. Abu Dhabi National Oil Co. will maintain spending at $150 billion over the next five years as it targets growth in production capacity at home and expands internationally. The company’s board approved the capital expenditure plan that’s in line with the previous layout that was announced three years ago. Since then, Abu Dhabi’s biggest oil producer has carved out an international investment business called XRG that is scouring the globe for deals. XRG has boosted its enterprise value to $151 billion from $80 billion since it was set up about a year ago, Adnoc said in a statement. The unit, which this year got stakes in Adnoc’s listed companies with a total market value exceeding $100 billion, aims to become among the world’s top five suppliers of natural gas and petrochemicals, along with the energy needed to meet demand from the AI and tech booms. XRG has also snapped up contracts for liquefied natural gas in the US and Africa, bought into gas fields around the Mediterranean and is in the final stages of a nearly $14 billion takeover of German chemical maker Covestro AG. Still, the company’s biggest effort yet fell apart in September when the firm dropped its planned $19 billion takeover of Australian natural gas producer Santos Ltd. It bounced back with a deal announced this month to explore buying into an LNG project in Argentina. Adnoc’s board, chaired by UAE President and Abu Dhabi ruler Sheikh Mohamed bin Zayed Al Nahyan, reviewed plans to expand oil and gas production capacity. It formed a operating company for the Hail and Ghasha offshore natural gas concession and boosted the project’s production target to 1.8 billion cubic feet per day, from 1.5 billion, by the end of the decade.

Read More »

North America Adds 12 Rigs Week on Week

North America added 12 rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was published on November 21. The total U.S. rig count increased by five week on week and the total Canada rig count rose by seven during the same period, taking the total North America rig count up to 749, comprising 554 rigs from the U.S. and 195 rigs from Canada, the count outlined. Of the total U.S. rig count of 554, 533 rigs are categorized as land rigs, 19 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 419 oil rigs, 127 gas rigs, and eight miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 481 horizontal rigs, 61 directional rigs, and 12 vertical rigs. Week on week, the U.S. land rig count rose by six, its offshore rig count remained unchanged, and its inland water rig count dropped by one, Baker Hughes highlighted. The U.S. oil and gas rig counts each increased by two, and the country’s miscellaneous rig count rose by one, week on week, the count showed. The U.S. horizontal rig count increased by five, its vertical rig count rose by one, and its directional rig count dropped by one, week on week, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Wyoming added three rigs, and Pennsylvania, Oklahoma, and New Mexico each added one rig. North Dakota, Louisiana, and Alaska each dropped one rig, week on week, the count revealed.   A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Granite Wash basin added two rigs, and the Marcellus and Permian basins

Read More »

Burgum Signs Order to ‘Unleash American Offshore Energy’

A statement posted on the U.S. Department of the Interior’s (DOI) website on Thursday revealed that U.S. Secretary of the Interior Doug Burgum has signed an order “to unleash American offshore energy”. In this statement, the DOI announced a Secretary’s Order, titled Unleashing American Offshore Energy, which the DOI said directs the Bureau of Ocean Energy Management (BOEM) “to take the necessary steps, in accordance with federal law, to terminate the restrictive Biden 2024-2029 National Outer Continental Shelf Oil and Gas Leasing Program and replace it with a new, expansive 11th National Outer Continental Shelf Oil and Gas Leasing Program by October 2026”. “As part of this directive, the Department is releasing the Secretary’s Draft Proposed Program for the 11th National Outer Continental Shelf Oil and Gas Leasing Program,” the DOI noted in the statement. “Under the new proposal for the 2026-2031 National Outer Continental Shelf Oil and Gas Leasing Program, Interior is taking a major step to boost United States energy independence and sustain domestic oil and gas production,” it added. “The proposal includes as many as 34 potential offshore lease sales across 21 of 27 existing Outer Continental Shelf planning areas, covering approximately 1.27 billion acres. That includes 21 areas off the coast of Alaska, seven in the Gulf of America, and six along the Pacific coast,” it continued. “The proposal also includes the Secretary’s decision to create a new administrative planning area, the South-Central Gulf of America,” it went on to state. In its statement, the DOI said the current proposal follows a public request for information and comment published in April 2025. The DOI stated that it received more than 86,000 comments from stakeholders, states, industry representatives, and members of the public. Feedback from those comments informed the proposal released on Thursday, the DOI highlighted.  The

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London headquartered EEH Ventures was founded in 2013 and owns a number of residential, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBREThe Aberdeen headquarters of Taqa. Image: CBRE The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029 blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © ShutterstockAberdeen city centre. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices; which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks; preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025 ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups, specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies. ● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate

EXECUTIVE SUMMARY In 2017, fresh off a PhD on theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from building AI that played games with superhuman skill and was starting up a secret project to predict the structures of proteins. He applied for a job. Just three years later, Jumper celebrated a stunning win that few had seen coming. With CEO Demis Hassabis, he had co-led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching the accuracy of painstaking techniques used in the lab, and doing it many times faster—returning results in hours instead of months. AlphaFold 2 had cracked a 50-year-old grand challenge in biology. “This is the reason I started DeepMind,” Hassabis told me a few years ago. “In fact, it’s why I’ve worked my whole career in AI.” In 2024, Jumper and Hassabis shared a Nobel Prize in chemistry. It was five years ago this week that AlphaFold 2’s debut took scientists by surprise. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out.
“It’s been an extraordinary five years,” Jumper says, laughing: “It’s hard to remember a time before I knew tremendous numbers of journalists.” AlphaFold 2 was followed by AlphaFold Multimer, which could predict structures that contained more than one protein, and then AlphaFold 3, the fastest version yet. Google DeepMind also let AlphaFold loose on UniProt, a vast protein database used and updated by millions of researchers around the world. It has now predicted the structures of some 200 million proteins, almost all that are known to science.
Despite his success, Jumper remains modest about AlphaFold’s achievements. “That doesn’t mean that we’re certain of everything in there,” he says. “It’s a database of predictions, and it comes with all the caveats of predictions.” A hard problem Proteins are the biological machines that make living things work. They form muscles, horns, and feathers; they carry oxygen around the body and ferry messages between cells; they fire neurons, digest food, power the immune system; and so much more. But understanding exactly what a protein does (and what role it might play in various diseases or treatments) involves figuring out its structure—and that’s hard. Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one. Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle. But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.” Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it matters They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.” What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.” Any projects stand out in particular? 

Honeybee science Jumper brings up a research group that uses AlphaFold to study disease resistance in honeybees. “They wanted to understand this particular protein as they look at things like colony collapse,” he says. “I never would have said, ‘You know, of course AlphaFold will be used for honeybee science.’” He also highlights a few examples of what he calls off-label uses of AlphaFold“in the sense that it wasn’t guaranteed to work”—where the ability to predict protein structures has opened up new research techniques. “The first is very obviously the advances in protein design,” he says. “David Baker and others have absolutely run with this technology.” Baker, a computational biologist at the University of Washington, was a co-winner of last year’s chemistry Nobel, alongside Jumper and Hassabis, for his work on creating synthetic proteins to perform specific tasks—such as treating disease or breaking down plastics—better than natural proteins can. Baker and his colleagues have developed their own tool based on AlphaFold, called RoseTTAFold. But they have also experimented with AlphaFold Multimer to predict which of their designs for potential synthetic proteins will work.     “Basically, if AlphaFold confidently agrees with the structure you were trying to design [and] then you make it and if AlphaFold says ‘I don’t know,’ you don’t make it. That alone was an enormous improvement.” It can make the design process 10 times faster, says Jumper. Another off-label use that Jumper highlights: Turning AlphaFold into a kind of search engine. He mentions two separate research groups that were trying to understand exactly how human sperm cells hooked up with eggs during fertilization. They knew one of the proteins involved but not the other, he says: “And so they took a known egg protein and ran all 2,000 human sperm surface proteins, and they found one that AlphaFold was very sure stuck against the egg.” They were then able to confirm this in the lab. “This notion that you can use AlphaFold to do something you couldn’t do before—you would never do 2,000 structures looking for one answer,” he says. “This kind of thing I think is really extraordinary.” Five years on When AlphaFold 2 came out, I asked a handful of early adopters what they made of it. Reviews were good, but the technology was too new to know for sure what long-term impact it might have. I caught up with one of those people to hear his thoughts five years on.
Kliment Verba is a molecular biologist who runs a lab at the University of California, San Francisco. “It’s an incredibly useful technology, there’s no question about it,” he tells me. “We use it every day, all the time.” But it’s far from perfect. A lot of scientists use AlphaFold to study pathogens or to develop drugs. This involves looking at interactions between multiple proteins or between proteins and even smaller molecules in the body. But AlphaFold is known to be less accurate at making predictions about multiple proteins or their interaction over time.
Verba says he and his colleagues have been using AlphaFold long enough to get used to its limitations. “There are many cases where you get a prediction and you have to kind of scratch your head,” he says. “Is this real or is this not? It’s not entirely clear—it’s sort of borderline.” “It’s sort of the same thing as ChatGPT,” he adds. “You know—it will bullshit you with the same confidence as it would give a true answer.” Still, Verba’s team uses AlphaFold (both 2 and 3, because they have different strengths, he says) to run virtual versions of their experiments before running them in the lab. Using AlphaFold’s results, they can narrow down the focus of an experiment—or decide that it’s not worth doing. It can really save time, he says: “It hasn’t really replaced any experiments, but it’s augmented them quite a bit.” New wave   AlphaFold was designed to be used for a range of purposes. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.   Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.
AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.” Genesis Molecular AI is pushing margins of error down from less than two angstroms, the de facto industry standard set by AlphaFold, to less than one angstrom—one 10-millionth of a millimeter, or the width of a single hydrogen atom. “Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” says Michael LeVine, vice president of modeling and simulation at the firm. That’s because chemical forces that interact at one angstrom can stop doing so at two. “It can go from ‘They will never interact’ to ‘They will,’” he says. With so much activity in this space, how soon should we expect new types of drugs to hit the market? Jumper is pragmatic. Protein structure prediction is just one step of many, he says: “This was not the only problem in biology. It’s not like we were one protein structure away from curing any diseases.”
Think of it this way, he says. Finding a protein’s structure might previously have cost $100,000 in the lab: “If we were only a hundred thousand dollars away from doing a thing, it would already be done.” At the same time, researchers are looking for ways to do as much as they can with this technology, says Jumper: “We’re trying to figure out how to make structure prediction an even bigger part of the problem, because we have a nice big hammer to hit it with.” In other words, they want to make everything into nails? “Yeah, let’s make things into nails,” he says. “How do we make this thing that we made a million times faster a bigger part of our process?” What’s next? Jumper’s next act? He wants to fuse the deep but narrow power of AlphaFold with the broad sweep of LLMs.   “We have machines that can read science. They can do some scientific reasoning,” he says. “And we can build amazing, superhuman systems for protein structure prediction. How do you get these two technologies to work together?” That makes me think of a system called AlphaEvolve, which is being built by another team at Google DeepMind. AlphaEvolve uses an LLM to generate possible solutions to a problem and a second model to check them, filtering out the trash. Researchers have already used AlphaEvolve to make a handful of practical discoveries in math and computer science.     Is that what Jumper has in mind? “I won’t say too much on methods, but I’ll be shocked if we don’t see more and more LLM impact on science,” he says. “I think that’s the exciting open question that I’ll say almost nothing about. This is all speculation, of course.” Jumper was 39 when he won his Nobel Prize. What’s next for him? “It worries me,” he says. “I believe I’m the youngest chemistry laureate in 75 years.”  He adds: “I’m at the midpoint of my career, roughly. I guess my approach to this is to try to do smaller things, little ideas that you keep pulling on. The next thing I announce doesn’t have to be, you know, my second shot at a Nobel. I think that’s the trap.”

Read More »

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. In this week’s conversation MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots. Eileen Guo writes: Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.  Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 
But tellingly, one area the laws fail to address is user privacy. This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day-routines, innermost thoughts, and questions they might not feel comfortable asking real people. After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.”  Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:  “Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.” This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surf Shark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)  All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.  So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?  Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it matters Melissa Heikkilä replies: Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.  In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.  Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.  This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.  Because people generally like answers that are agreeable, such responses are weighted more heavily in training.  AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.  AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.  By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.  We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.  Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level. We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again. Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.  As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information. Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI. And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.  Further reading  FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.  Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy  In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots. Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

Read More »

The Download: how to fix a tractor, and living among conspiracy theorists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Meet the man building a starter kit for civilization You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025.This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS).It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. It’s all part of his ethos that life-changing technology should be available to all, not controlled by a select few. Read the full story. —Tiffany Ng
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.
What it’s like to find yourself in the middle of a conspiracy theory Last week, we held a subscribers-only Roundtables discussion exploring how to cope in this new age of conspiracy theories. Our features editor Amanda Silverman and executive editor Niall Firth were joined by conspiracy expert Mike Rothschild, who explained exactly what it’s like to find yourself at the center of a conspiracy you can’t control. Watch the conversation back here. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 DOGE has been disbandedEven though it’s got eight months left before its official scheduled end. (Reuters)+ It leaves a legacy of chaos and few measurable savings. (Politico)+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review) 2 How OpenAI’s tweaks to ChatGPT sent some users into delusional spiralsIt essentially turned a dial that increased both usage of the chatbot and the risks it poses to a subset of people. (NYT $)+ AI workers are warning loved ones to stay away from the technology. (The Guardian)+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)3 A three-year old has received the world’s first gene therapy for Hunter syndromeOliver Chu appears to be developing normally one year after starting therapy. (BBC)4 Why we may—or may not—be in an AI bubble 🫧It’s time to follow the data. (WP $)+ Even tech leaders don’t appear to be entirely sure. (Insider $)+ How far can the ‘fake it til you make it’ strategy take us? (WSJ $)+ Nvidia is still riding the wave with abandon. (NY Mag $)5 Many MAGA influencers are based in Russia, India and NigeriaX’s new account provenance feature is revealing some interesting truths. (The Daily Beast) 6 The FBI wants to equip drones with facial recognition techCivil libertarians claim the plans equate to airborne surveillance. (The Intercept)+ This giant microwave may change the future of war. (MIT Technology Review)

7  Snapchat is alerting users ahead of Australia’s under-16s social media ban  The platform will analyze an account’s “behavioral signals” to estimate a user’s age. (The Guardian)+ An AI nudification site has been fined for skipping age checks. (The Register)+ Millennial parents are fetishizing the notion of an offline childhood. (The Observer) 8 Activists are roleplaying ICE raids in Fortnite and Grand Theft AutoIt’s in a bid to prepare players to exercise their rights in the real world. (Wired $)+ Another effort to track ICE raids was just taken offline. (MIT Technology Review) 9 The JWST may have uncovered colossal stars ⭐In fact, they’re so big their masses are 10,000 times bigger than the sun. (New Scientist $)+ Inside the hunt for the most dangerous asteroid ever. (MIT Technology Review)10 Social media users are lying about brands ghosting themCompletely normal behavior. (WSJ $)+ This would never have happened on Vine, I’ll tell you now. (The Verge) Quote of the day “I can’t believe we have to say this, but this account has only ever been run and operated from the United States.”  —The US Department of Homeland Security’s X account attempts to end speculation surrounding its social media origins, the New York Times reports.
One more thing This company is planning a lithium empire from the shores of the Great Salt LakeOn a bright afternoon in August, the shore of Utah’s Great Salt Lake looks like something out of a science fiction film set in a scorching alien world.This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China.The startup is in a race to commercialize a new, less environmentally-damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation’s lithium extraction industry. Read the full story.
—Alexander C. Kaufman

Read More »

The Download: the secrets of vitamin D, and an AI party in Africa

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. We’re learning more about what vitamin D does to our bodies At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me. But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D. Read the full story.
—Jessica Hamzelou This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
If you’re interested in other stories from our biotech writers, check out some of their most recent work: + Advanced in organs on chips, digital twins, and AI are ushering in a new era of research and drug development that could help put a stop to animal testing. Read the full story.+ Here’s the latest company planning for gene-edited babies.+ Preventing the common cold is extremely tricky—but not impossible. Here’s why we don’t have a cold vaccine. Yet.+ Scientists are creating the beginnings of bodies without sperm or eggs. How far should they be allowed to go? Read the full story.+ This retina implant lets people with vision loss do a crossword puzzle. Read the full story. Partying at one of Africa’s largest AI gatherings It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. Deep Learning Indaba is an annual AI conference where Africans present their research and technologies they’ve built, mingling with friends as a giant screen blinks with videos created with generative AI. The main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. But the organizers hope to see more homegrown ventures create opportunities within Africa. Read the full story. —Abdullahi Tsanni This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Google’s new Nano Banana Pro generates convincing propagandaThe company’s latest image-generating AI model seems to have few guardrails. (The Verge)+ Google wants its creations to be slicker than ever. (Wired $)+ Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent. (MIT Technology Review) 2 Taiwan says the US won’t punish it with high chip tariffsIn fact, official Wu Cheng-wen says Taiwan will help support the US chip industry in exchange for tariff relief. (FT $)3 Mental health support is one of the most dangerous uses for chatbotsThey fail to recognize psychiatric conditions and can miss critical warning signs. (WP $)+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review) 4 It costs an average of $17,121 to deport one person from the USBut in some cases it can cost much, much more. (Bloomberg $)+ Another effort to track ICE raids was just taken offline. (MIT Technology Review) 5 Grok is telling users that Elon Musk is the world’s greatest loverWhat’s it basing that on, exactly? (Rolling Stone $)+ It also claims he’s fitter than basketball legend LeBron James. Sure. (The Guardian)6 Who’s really in charge of US health policy?RFK Jr. and FDA commissioner Marty Makary are reportedly at odds behind the scenes. (Vox)+ Republicans are lightly pushing back on the CDC’s new stance on vaccines. (Politico)+ Why anti-vaxxers are seeking to discredit Danish studies. (Bloomberg $)+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review) 7 Inequality is worsening in San FranciscoAs billionaires thrive, hundreds of thousands of others are struggling to get by. (WP $)+ A massive airship has been spotted floating over the city. (SF Gate)
8 Donald Trump is thrusting obscure meme-makers into the mainstreamHe’s been reposting flattering AI-generated memes by the dozen. (NYT $)+ MAGA YouTube stars are pushing a boom in politically charged ads. (Bloomberg $)9 Moss spores survived nine months in spaceAnd they could remain reproductively viable for another 15 years. (New Scientist $)+ It suggests that some life on Earth has evolved to endure space conditions. (NBC News)+ The quest to figure out farming on Mars. (MIT Technology Review) 10 Does AI really need a physical shape?It doesn’t really matter—companies are rushing to give it one anyway. (The Atlantic $)
Quote of the day “At some point you’ve got to wonder whether the bug is a feature.” —Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, ponders xAI and Grok’s proclivity for surfacing Elon Musk-friendly and/or far-right sources, the Washington Post reports. One more thing
The AI lab waging a guerrilla war over exploitative AIBack in 2022, the tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.But artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.Ben Zhao, a computer security researcher at the University of Chicago, was listening. He and his colleagues have built arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping: two tools called Glaze and Nightshade that add barely perceptible perturbations to an image’s pixels so that machine-learning models cannot read them properly.But Zhao sees the tools as part of a battle to slowly tilt the balance of power from large corporations back to individual creators. Read the full story.—Melissa Heikkilä We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) + If you’re ever tempted to try and recreate a Jackson Pollock painting, maybe you’d be best leaving it to the kids.+ Scientists have discovered that lions have not one, but two distinct types of roars 🦁+ The relentless rise of the quarter-zip must be stopped!+ Pucker up: here’s a brief history of kissing 💋

Read More »

We’re learning more about what vitamin D does to our bodies

It has started to get really wintry here in London over the last few days. The mornings are frosty, the wind is biting, and it’s already dark by the time I pick my kids up from school. The darkness in particular has got me thinking about vitamin D, a.k.a. the sunshine vitamin. At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me. But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D. Yes, it is important for bone health. But recent research is also uncovering surprising new insights into how the vitamin might influence other parts of our bodies, including our immune systems and heart health.
Vitamin D was discovered just over 100 years ago, when health professionals were looking for ways to treat what was then called “the English disease.” Today, we know that rickets, a weakening of bones in children, is caused by vitamin D deficiency. And vitamin D is best known for its importance in bone health. That’s because it helps our bodies absorb calcium. Our bones are continually being broken down and rebuilt, and they need calcium for that rebuilding process. Without enough calcium, bones can become weak and brittle. (Depressingly, rickets is still a global health issue, which is why there is global consensus that infants should receive a vitamin D supplement at least until they are one year old.)
In the decades since then, scientists have learned that vitamin D has effects beyond our bones. There’s some evidence to suggest, for example, that being deficient in vitamin D puts people at risk of high blood pressure. Daily or weekly supplements can help those individuals lower their blood pressure. A vitamin D deficiency has also been linked to a greater risk of “cardiovascular events” like heart attacks, although it’s not clear whether supplements can reduce this risk; the evidence is pretty mixed. Vitamin D appears to influence our immune health, too. Studies have found a link between low vitamin D levels and incidence of the common cold, for example. And other research has shown that vitamin D supplements can influence the way our genes make proteins that play important roles in the way our immune systems work. We don’t yet know exactly how these relationships work, however. And, unfortunately, a recent study that assessed the results of 37 clinical trials found that overall, vitamin D supplements aren’t likely to stop you from getting an “acute respiratory infection.” Other studies have linked vitamin D levels to mental health, pregnancy outcomes, and even how long people survive after a cancer diagnosis. It’s tantalizing to imagine that a cheap supplement could benefit so many aspects of our health. But, as you might have gathered if you’ve got this far, we’re not quite there yet. The evidence on the effects of vitamin D supplementation for those various conditions is mixed at best. In fairness to researchers, it can be difficult to run a randomized clinical trial for vitamin D supplements. That’s because most of us get the bulk of our vitamin D from sunlight. Our skin converts UVB rays into a form of the vitamin that our bodies can use. We get it in our diets, too, but not much. (The main sources are oily fish, egg yolks, mushrooms, and some fortified cereals and milk alternatives.) The standard way to measure a person’s vitamin D status is to look at blood levels of 25-hydroxycholecalciferol (25(OH)D), which is formed when the liver metabolizes vitamin D. But not everyone can agree on what the “ideal” level is.

Even if everyone did agree on a figure, it isn’t obvious how much vitamin D a person would need to consume to reach this target, or how much sunlight exposure it would take. One complicating factor is that people respond to UV rays in different ways—a lot of that can depend on how much melanin is in your skin. Similarly, if you’re sitting down to a meal of oily fish and mushrooms and washing it down with a glass of fortified milk, it’s hard to know how much more you might need. There is more consensus on the definition of vitamin D deficiency, though. (It’s a blood level below 30 nanomoles per liter, in case you were wondering.) And until we know more about what vitamin D is doing in our bodies, our focus should be on avoiding that. For me, that means topping up with a supplement. The UK government advises everyone in the country to take a 10-microgram vitamin D supplement over autumn and winter. That advice doesn’t factor in my age, my blood levels, or the amount of melanin in my skin. But it’s all I’ve got for now.

Read More »

Designing digital resilience in the agentic AI era

In partnership withCisco Digital resilience—the ability to prevent, withstand, and recover from digital disruptions—has long been a strategic priority for enterprises. With the rise of agentic AI, the urgency for robust resilience is greater than ever. Agentic AI represents a new generation of autonomous systems capable of proactive planning, reasoning, and executing tasks with minimal human intervention. As these systems shift from experimental pilots to core elements of business operations, they offer new opportunities but also introduce new challenges when it comes to ensuring digital resilience. That’s because the autonomy, speed, and scale at which agentic AI operates can amplify the impact of even minor data inconsistencies, fragmentation, or security gaps. While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This lack of confidence, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and interaction with critical infrastructure, requires a reimagining of digital resilience. Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.
Machine data: A cornerstone of agentic AI and digital resilience Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications. To put agentic AI to use in driving digital resilience, it must have seamless, real-time access to this data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.
“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the very same data stream.”  Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI, but, worse, it can also result in data anomalies and errors in outputs or actions. Natural language processing (NLP) models designed prior to the development of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data.  For many companies, keeping up with the dizzying pace at which AI is progressing has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks we’re not ready for,” says Hathi. “The trouble is that with agentic AI’s evolution, relying on traditional LLMs trained on human text, audio, video, or print data doesn’t work when you need your system to be secure, resilient, and always available.” Designing a data fabric for resilience To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management.  “Once you have a single view, you can do all these things that are autonomous and agentic,” says Hathi. “You have far fewer blind spots. Decision-making goes much faster. And the unknown is no longer a source of fear because you have a holistic system that’s able to absorb these shocks and disruption without losing continuity,” he adds. To create this unified system, data teams must first break down departmental silos in how data is shared, says Hathi. Then, they must implement a federated data architecture—a decentralized system where autonomous data sources work together as a single unit without physically merging—to create a unified data source while maintaining governance and security. And finally, teams must upgrade data platforms to ensure this newly unified view is actionable for agentic AI.  During this transition, teams may face technical limitations if they rely on traditional platforms modeled on structured data—that is, mostly quantitative information such as customer records or financial transactions that can be organized in a predefined format (often in tables) that is easy to query. Instead, companies need a platform that can also manage streams of unstructured data such as system logs, security events, and application traces, which lack uniformity and are often qualitative rather than quantitative. Analyzing, organizing, and extracting insights from these kinds of data requires more advanced methods enabled by AI. Harnessing AI as a collaborator AI itself can be a powerful tool in creating the data fabric that enables AI systems. AI-powered tools can, for example, quickly identify relationships between disparate data—both structured and unstructured—automatically merging them into one source of truth. They can detect and correct errors and employ NLP to tag and categorize data to make it easier to find and use. 

Agentic AI systems can also be used to augment human capabilities in detecting and deciphering anomalies in an enterprise’s unstructured data streams. These are often beyond human capacity to spot or interpret at speed, leading to missed threats or delays. But agentic AI systems, designed to perceive, reason, and act autonomously, can plug the gap, delivering higher levels of digital resilience to an enterprise. “Digital resilience is about more than withstanding disruptions,” says Hathi. “It’s about evolving and growing over time. AI agents can work with massive amounts of data and continuously learn from humans who provide safety and oversight. This is a true self-optimizing system.” Humans in the loop Despite its potential, agentic AI should be positioned as assistive intelligence. Without proper oversight, AI agents could introduce application failures or security risks. Clearly defined guardrails and maintaining humans in the loop is “key to trustworthy and practical use of AI,” Hathi says. “AI can enhance human decision-making, but ultimately, humans are in the driver’s seat.” This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution

WASHINGTON—President Trump today issued an Executive Order to launch the Genesis Mission, a historic national effort led by the Department of Energy. The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness.   The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.   “Throughout history, from the Manhattan Project to the Apollo mission, our nation’s brightest minds and industries have answered the call when their nation needed them,” said U.S. Secretary of Energy Chris Wright. “Today, the United States is calling on them once again. Under President Trump’s leadership, the Genesis Mission will unleash the full power of our National Laboratories, supercomputers, and dataresources to ensure that America is the global leader in artificial intelligence and to usher in a new golden era of American discovery.”  The announcement builds on President Trump’s Executive Order Removing Barriers to American Leadership In Artificial Intelligence and advances his America’s AI Action Plan released earlier this year—a directive to remove barriers to innovation, reduce dependence on foreign adversaries, and unleash the full strength of America’s scientific enterprise.   Secretary Wright has designated Under Secretary for Science Darío Gil to lead the initiative. The Genesis Mission will mobilize the Department of Energy’s 17 National Laboratories, industry, and academia to build an integrated discovery platform.   The platform will connect the world’s best supercomputers, AI systems, and next-generation quantum systems with the most advanced scientific instruments in the nation. Once complete, the platform will be the world’s most complex and powerful scientific instrument ever built. It will draw on the expertise of

Read More »

Oil Closes the Day Up as Equities Rally

Oil pushed higher as equities rose and traders weighed the prospect of a Ukraine-Russia peace deal that could deflate political risk from an already well-supplied market. West Texas Intermediate rose about 1.3% to settle near $59 per barrel, snapping a three day losing streak as crude ticks up following its biggest weekly loss since early October.  While oil followed other risk assets higher, traders awaited further news after Ukraine and its European allies signaled that key sticking points remained in US-brokered peace talks to end Russia’s invasion, even as senior officials hailed progress in winning more favorable terms for Kyiv. “Something good just may be happening,” President Donald Trump wrote in a Truth Social post about the talks.  An end to the hostiltites would also take some risk premium out of the market. “Oil markets are moving in sympathy with equities and awaiting on more news of the Ukraine/Russia talks” said Dennis Kissler, senior vice president for trading at BOK Financial. He expects continued choppy trading and some short covering into the holiday period.  Crude has slumped this year, with futures on course for a fourth monthly loss in November, in what would be the longest losing run since 2023. The decline has been driven by expanded global output, including from OPEC+, with the International Energy Agency forecasting a record surplus for 2026. Traders are monitoring whether a deal on Ukraine will materialize, and if sanctions on Russia will be lifted — developments that could inject more supply. “We should expect a nervous oil market ahead of Thanksgiving on Thursday,” said Arne Lohmann Rasmussen, chief analyst at A/S Global Risk Management. “Several factors point to a peace agreement or possibly a ceasefire moving closer over the weekend, which supports further price declines this week.” Ukraine President Volodymyr Zelenskiy said Monday

Read More »

NFL, AWS drive football modernization with cloud, AI

AWS Next Gen Stats: Initially used for player participation tracking (replacing manual photo-taking), Next Gen Stats uses sensors to capture center-of-mass and contact information, which is then used to generate performance insights. Computer vision: Computer vision was initially insufficient, but the technology has improved greatly over the past few years. The NFL has now embraced computer vision, notably using six 8k cameras in every stadium to measure first downs. This replaced the 100-year tradition of using physical sticks connected with a chain to determine first downs. This blended approach of using sensors and computer vision maximizes data capture for complex plays where one source may not be enough. Advanced data use cases: The massive influx of data supports officiating, equipment testing, rule development, player health and safety (e.g., concussion reduction), and team-level strategy/scouting (“Moneyball”). Generative AI: From efficiency to hyper-personalization Very quickly, generative AI has shifted from a “shiny new thing” to a mainstream tool focused on operational efficiency and content maximization. Use cases mentioned include: Data governance: A key internal challenge is the NFL’s disparate data silos (sensor, video, rules, business logic) and applying governance layers so that Gen AI agents (for media, officiating, etc.) can operate consistently and effectively without needing constant re-tooling. Operational efficiency: Gen AI is used to streamline tasks like sifting through policy documents and, notably, in marketing. Campaigns that once took weeks can now iterate hundreds of versions in minutes, offering contextual localization, language translation, and featuring the most relevant players for specific global markets. Content maximization: Gen AI is used to create derivatives of long-form content (e.g., TikTok and Twitter versions) efficiently. There’s also innovation in using data feeds to generate automated commentary and context, creating new, scalable audio/visual experiences. Solving hard-to-solve problems The NFL/AWS partnership is something companies in all industries should

Read More »

ADNOC Keeps $150B Spending in Growth Push

(Update) November 24, 2025, 4:17 PM GMT: Updates with oil production capacity in the last paragraph. Abu Dhabi National Oil Co. will maintain spending at $150 billion over the next five years as it targets growth in production capacity at home and expands internationally. The company’s board approved the capital expenditure plan that’s in line with the previous layout that was announced three years ago. Since then, Abu Dhabi’s biggest oil producer has carved out an international investment business called XRG that is scouring the globe for deals. XRG has boosted its enterprise value to $151 billion from $80 billion since it was set up about a year ago, Adnoc said in a statement. The unit, which this year got stakes in Adnoc’s listed companies with a total market value exceeding $100 billion, aims to become among the world’s top five suppliers of natural gas and petrochemicals, along with the energy needed to meet demand from the AI and tech booms. XRG has also snapped up contracts for liquefied natural gas in the US and Africa, bought into gas fields around the Mediterranean and is in the final stages of a nearly $14 billion takeover of German chemical maker Covestro AG. Still, the company’s biggest effort yet fell apart in September when the firm dropped its planned $19 billion takeover of Australian natural gas producer Santos Ltd. It bounced back with a deal announced this month to explore buying into an LNG project in Argentina. Adnoc’s board, chaired by UAE President and Abu Dhabi ruler Sheikh Mohamed bin Zayed Al Nahyan, reviewed plans to expand oil and gas production capacity. It formed a operating company for the Hail and Ghasha offshore natural gas concession and boosted the project’s production target to 1.8 billion cubic feet per day, from 1.5 billion, by the end of the decade.

Read More »

Saudi’s AlKhorayef Petroleum Said To Prepare For IPO

Saudi Arabia’s AlKhorayef Group has started preparations for a potential listing of its oil and gas services subsidiary, according to people familiar with the matter, adding to the list of companies looking to go public in the kingdom. The group has reached out to firms that could help arrange a possible IPO of AlKhorayef Petroleum, the people said, declining to be identified discussing confidential information. The preparations are at an early stage, and no final decision has been taken on whether to proceed with a transaction, the people said.  Representatives for AlKhorayef Group did not respond to a request for comment. Representatives for the Public Investment Fund, which acquired a 25% stake in Al Khorayef Petroleum in 2023, declined to comment. Saudi Arabia has been the Middle East’s most active IPO market this year, with companies raising nearly $4 billion. Still, performance has been uneven, and only two of the ten largest debuts are currently trading above their offer prices. The kingdom’s benchmark stock index is among the worst-performing in emerging markets, as investors grow wary of prolonged oil price weakness and the potential hit to government spending. The PIF has played a key role in deepening Saudi Arabia’s capital markets by listing portfolio companies. However, it has slowed the pace of share sales, including in firms like Saudi Global Ports, amid softer market conditions, Bloomberg News has reported. Headquartered in Dammam, AlKhorayef Petroleum operates across the Middle East, Africa and Latin America. It is majority-owned by AlKhorayef Group, a conglomerate with businesses spanning industrial services, lubricants and water solutions. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

The State of AI: Chatbot companions and the future of our privacy

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. In this week’s conversation MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots. Eileen Guo writes: Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up. 
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.  Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups. 
But tellingly, one area the laws fail to address is user privacy. This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information—from their day-to-day-routines, innermost thoughts, and questions they might not feel comfortable asking real people. After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.”  Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:  “Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to  generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.” This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surf Shark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)  All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.  So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question. 

What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?  Ask AIWhy it matters to you?BETAHere’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weirdTell me why it matters Melissa Heikkilä replies: Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.  In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.  Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.  This feature stems from the way the language model behind the chatbots is trained using reinforcement learning. Human data labelers rate the answers generated by the model as either acceptable or not. This teaches the model how to behave.  Because people generally like answers that are agreeable, such responses are weighted more heavily in training.  AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive. 
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.  AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way. 
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.  By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.  We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.  Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level. We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again. Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.  As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information. Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI. And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI models train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.  Further reading  FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.  Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy  In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots. Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy indusrty news. Spend 3-5 minutes and catch-up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE