Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI

Bitcoin

Datacenter

Energy
Featured Articles

Stanford’s ChatEHR allows clinicians to query patient medical records using natural language, without compromising patient data
What would it be like to chat with health records the way one could with ChatGPT? Initially posed by a medical student, this question sparked the development of ChatEHR at Stanford Health Care. Now in production, the tool accelerates chart reviews for emergency room admissions, streamlines patient transfer summaries and synthesizes information from complex medical histories. In early pilot results, clinical users have seen significantly faster information retrieval; notably, emergency physicians saw a 40% reduction in chart review time during critical handoffs, Michael A. Pfeffer, Stanford’s SVP and chief information and digital officer, said in a fireside chat at VB Transform. This helps decrease physician burnout while improving patient care, and it builds on decades of work medical facilities have done to collect and automate critical data. “It’s such an exciting time in healthcare because we’ve been spending the last 20 years digitizing healthcare data and putting it into an electronic health record, but not really transforming it,” Pfeffer said in a chat with VB editor-in-chief Matt Marshall. “With the new large language model technologies, we’re actually starting to do that digital transformation.”

How ChatEHR helps reduce ‘pajama time’ and get back to real face-to-face interactions

Physicians spend up to 60% of their time on administrative tasks rather than direct patient care. They often put in significant “pajama time,” sacrificing personal and family hours to complete administrative tasks outside of regular work hours. One of Pfeffer’s big goals is to streamline workflows and reduce those extra hours so clinicians and administrative staff can focus on more important work. For example, a lot of information comes in through online patient portals, and AI now has the ability to read messages…
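The article does not describe ChatEHR’s internals, but the capability it reports, natural-language questions answered from a patient’s chart, is typically built as retrieval over record snippets plus an LLM prompt. Below is a minimal, hypothetical Python sketch of that general pattern; the `Note` records, the keyword retrieval, and the `call_llm` stub are illustrative assumptions, not Stanford’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Note:
    patient_id: str
    date: str
    text: str

# Toy chart data; a real system would pull these from the EHR database.
NOTES = [
    Note("p1", "2025-06-01", "ED visit: chest pain, troponin negative, discharged."),
    Note("p1", "2025-06-20", "Follow-up: started lisinopril 10 mg for hypertension."),
]

def retrieve(patient_id: str, question: str, k: int = 5) -> list[Note]:
    """Crude keyword retrieval; production systems would use embeddings."""
    terms = set(question.lower().split())
    scored = [
        (sum(term in note.text.lower() for term in terms), note)
        for note in NOTES
        if note.patient_id == patient_id
    ]
    return [note for score, note in sorted(scored, key=lambda s: -s[0])[:k] if score]

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API; returns a canned reply here."""
    return "(model answer would appear here)"

def ask_chart(patient_id: str, question: str) -> str:
    # Ground the model in retrieved excerpts so answers come from the chart.
    context = "\n".join(f"[{n.date}] {n.text}" for n in retrieve(patient_id, question))
    prompt = (
        "Answer strictly from these chart excerpts; say 'not documented' otherwise.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(ask_chart("p1", "What medication was started for hypertension?"))
```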

What’s inside Genspark? A new vibe working approach that ditches rigid workflows for autonomous agents
Vibe coding has been all the rage in recent months as a simple way for anyone to build applications with generative AI. But what if that same easy-going, natural language approach were extended to other enterprise workflows? That’s the promise of an emerging category of agentic AI applications. At VB Transform 2025, one such application was on display: the Genspark Super Agent, which was originally launched earlier this year. The Genspark Super Agent’s promise and approach could well extend the concept of vibe coding into vibe working. A key tenet of enabling vibe working, though, is to go with the flow and exert less control, rather than more, over AI agents. “The vision is simple, we want to bring the Cursor experience for developers to the workspace for everyone,” Kay Zhu, CTO of Genspark, said at VB Transform. “Everyone here should be able to do vibe working… it’s not only the software engineer that can do vibe coding.”

Less is more when it comes to enterprise agentic AI

According to Zhu, a foundational premise for enabling a vibe working era is letting go of some rigid rules that have defined enterprise workflows for generations. Zhu provocatively challenged enterprise AI orthodoxy, arguing that rigid workflows fundamentally limit what AI agents can accomplish for complex business tasks. During a live demonstration, he showed the system autonomously researching conference speakers, creating presentations, making phone calls and analyzing marketing data. Most notably, the system placed an actual phone call to the event organizer, VentureBeat founder Matt Marshall, during the live presentation. “This is normally the call that I don’t really want to…

Windsurf CEO Varun Mohan throws cold water on 1-person, billion-dollar startup idea at VB Transform: ‘more people allow you to grow faster’
As AI-powered tools spread through enterprise software stacks, the rapid growth of the AI coding platform Windsurf is becoming a case study of what happens when developers adopt agentic tooling at scale. In a session at the VB Transform 2025 conference, CEO and co-founder Varun Mohan discussed how Windsurf’s integrated development environment (IDE) surpassed one million developers within four months of launch. More notably, the platform now writes over half of the code committed by its user base.

The conversation, moderated by VentureBeat CEO Matt Marshall, opened with a brief but pointed disclaimer: Mohan could not comment on OpenAI’s widely reported potential acquisition of Windsurf. The issue has drawn attention following a Wall Street Journal report detailing a brewing standoff between OpenAI and Microsoft over the terms of that deal and broader tensions within their multi-billion-dollar partnership. According to the WSJ, OpenAI seeks to acquire Windsurf without giving Microsoft access to its intellectual property, an issue that could reshape the enterprise AI coding landscape. With that context set aside, the session focused on Windsurf’s technology, enterprise traction, and vision for agentic development.

Moving past autocomplete

Windsurf’s IDE is built around what the company calls a “mind-meld” loop: a shared project state between humans and AI that enables full coding flows rather than autocomplete suggestions. With this setup, agents can perform multi-file refactors, write test suites, and even launch UI changes when a pull request is initiated. Mohan emphasized that coding assistance can’t stop at code generation. “Only about 20 to 30% of a developer’s time is spent writing code. The rest is debugging, reviewing, and testing. To truly assist, an AI…

Crop signals
Bacteria can be engineered to sense a variety of molecules, such as pollutants or soil nutrients, but usually these signals must be detected microscopically. Now Christopher Voigt, head of MIT’s Department of Biological Engineering, and colleagues have triggered bacterial cells to produce signals that can be read from as far as 90 meters away. Their work could lead to the development of sensors for agricultural and other applications, which could be monitored by drones or satellites. The researchers engineered two different types of bacteria, one found in soil and one in water, so that when they encounter certain target chemicals, they produce hyperspectral reporters—molecules that absorb distinctive wavelengths of light across the visible and infrared spectra. These signatures can be detected with hyperspectral cameras, which determine how much of each color wavelength is present in any given pixel. Though the reporting molecules they developed were linked to genetic circuits that detect nearby bacteria, this approach could also be combined with sensors detecting radiation, soil nutrients, or arsenic and other contaminants. “The nice thing about this technology is that you can plug and play whichever sensor you want,” says Yonatan Chemla, an MIT postdoc who is a lead author of a paper on the work along with Itai Levin, PhD ’24. “There is no reason that any sensor would not be compatible with this technology.” The work is being commercialized through Fieldstone Bio.
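The detection step described here, checking how much of each wavelength is present in a pixel, can be illustrated with a standard spectral-angle match against a known reporter signature. The sketch below is a generic illustration with made-up numbers; the reference spectrum and threshold are assumptions, not the MIT team’s pipeline.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between two spectra; smaller means a closer match."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def detect_reporter(cube: np.ndarray, reference: np.ndarray,
                    max_angle: float = 0.1) -> np.ndarray:
    """cube: (rows, cols, bands) hyperspectral image. Returns a boolean mask."""
    flat = cube.reshape(-1, cube.shape[-1])
    angles = np.array([spectral_angle(px, reference) for px in flat])
    return (angles < max_angle).reshape(cube.shape[:2])

# Illustrative example: a 2x2 scene with 4 spectral bands.
reference = np.array([0.1, 0.8, 0.3, 0.05])  # made-up reporter signature
cube = np.array([
    [[0.10, 0.80, 0.30, 0.05], [0.5, 0.5, 0.5, 0.5]],
    [[0.12, 0.75, 0.33, 0.06], [0.9, 0.1, 0.1, 0.1]],
])
print(detect_reporter(cube, reference))  # True where the signature matches
```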

Cancer-targeting nanoparticles are moving closer to human trials
Over the past decade, Institute Professor Paula Hammond ’84, PhD ’93, and her students have used a technique known as layer-by-layer assembly to create a variety of polymer-coated nanoparticles that can be loaded with cancer-fighting drugs. The particles, which could prevent many side effects of chemotherapy by targeting tumors directly, have proved effective in mouse studies. Now the researchers have come up with a technique that allows them to manufacture many more particles in much less time, moving them closer to human use. “There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, the senior author of a paper on the new technique along with Darrell Irvine, a professor at the Scripps Research Institute. In the original production technique, layers with different properties can be laid down by alternately exposing a particle to positively and negatively charged polymers, with extensive purification to remove excess polymer after each application. Each layer can carry therapeutics as well as molecules that help the particles find and enter cancer cells. But the process is time-consuming and would be difficult to scale up. In the new work, the researchers used a microfluidic mixing device that allows them to sequentially add layers as the particles flow through a microchannel. For each layer, they can calculate exactly how much polymer is needed, which eliminates the slow and costly purification step and saves significantly on material costs.
This microfluidic device can be used to assemble the drug-delivery nanoparticles rapidly and in large quantities. (Photo: Gretchen Ertl)

This strategy also facilitates compliance with the FDA’s GMP (good manufacturing practice) requirements, which ensure that products meet safety standards and can be manufactured consistently. “There’s much less chance of any sort of operator mistake or mishaps,” says Ivan Pires, PhD ’24, a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, who is the paper’s lead author along with Ezra Gordon ’24. “We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.” In minutes, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses for certain cargos), which would have taken close to an hour with the original process. They say this means it would be realistic to produce more than enough for clinical trials and patient use.

To demonstrate the technique, the researchers created layered nanoparticles loaded with the immune molecule interleukin-12; they have previously shown that such particles can slow growth of ovarian tumors in mice. Those manufactured using the new technique performed similarly to the originals and managed to bind to cancer tissue without entering the cancer cells. This lets them serve as markers that activate the immune system in the tumor, which can delay tumor growth and even lead to cures in mouse models of ovarian cancer. The researchers have filed for a patent and are working with MIT’s Deshpande Center for Technological Innovation in hopes of forming a company to commercialize the technology, which they say could also be applied to glioblastoma and other types of cancer.

Immune molecules may affect mood
Two new studies from MIT and Harvard Medical School add to a growing body of evidence that infection-fighting molecules called cytokines also influence the brain, leading to behavioral changes during illness. By mapping the locations in the brain of receptors for different forms of IL-17, the researchers found that the cytokine acts on the somatosensory cortex to promote sociable behavior and on the amygdala to elicit anxiety. These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences and one of both studies’ senior authors. “If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says. In the cortex, the researchers found certain receptors in a population of neurons that, when overactivated, can lead to autism-like symptoms such as reduced sociability in mice. But the researchers determined that the neurons become less excitable when a specific form of IL-17 binds to the receptors, shedding possible light on why autism symptoms in children often abate when they have fevers. Choi hypothesizes that IL-17 may have evolved as a neuromodulator and was “hijacked” by the immune system only later. Meanwhile, the researchers also found two types of IL-17 receptors in a certain population of neurons in the amygdala, which plays an important role in processing emotions. When these receptors bind to two forms of IL-17, the neurons become more excitable, leading to an increase in anxiety. Eventually, findings like these may help researchers develop new treatments for conditions such as autism and depression.

Oil Prices Plunge on Trump Ceasefire Push
Oil plunged for the second straight day as US President Donald Trump signaled he wants to keep oil flowing out of Iran after brokering a fragile ceasefire between Tehran and Israel. West Texas Intermediate crude fell by nearly 15% over two sessions to settle near $64 a barrel, while Brent was just above $67. Prices have slumped amid the significant deescalation of a conflict that has rocked the energy-rich Middle East. Trump said in a social media post that China can continue buying Iranian oil and that he hopes the country will also be purchasing “plenty” from the US. Crude fell further as both sides made deescalatory remarks. The move is a stark departure from an earlier US strategy of squeezing Iranian energy exports to apply pressure at the negotiating table, a shift many investors thought might be contingent on upholding the ceasefire or assurances on nuclear intentions, said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group.

Crude has declined sharply this week — including a 7% rout on Monday — despite the arrival of a long-feared clash that saw America bomb Iran’s nuclear sites and the Islamic Republic retaliate against US bases in Qatar. While prices spiked in the wake of Israel and America’s initial attacks, the conflict hasn’t had any significant impact on oil flows from the Persian Gulf, and exports from Iran have surged. Trump cheered crude’s slide earlier on Tuesday, saying “I love it.” Furthermore, the shale boom of the early 2000s has helped to greatly reduce US reliance on Middle Eastern oil, blunting the impact of a conflict in the region on energy prices. The initial price surge has instead presented a major opportunity for domestic producers to lock in higher prices, with swap dealer positions in US crude futures climbing to…

Angolan President Urges USA Firms to Invest Beyond Oil, Minerals
Angolan President Joao Lourenco called on US companies to expand their investments in Africa beyond traditional oil and mineral extraction to industries such as automobiles, shipbuilding, tourism, cement and steel. “American companies operating in Angola are already benefiting from a favorable business climate,” he said on Monday at the opening of the US-Africa Business Summit in Luanda, the capital. “Now we want to see broader engagement.” The event was held as US trade rival China seeks to extend its influence on the continent by offering to remove levies on imports from almost all African countries, while America threatens reciprocal tariffs after a 90-day pause ends on July 9. The US has also cut aid to the continent and banned travel from certain African nations.

Secretary Wright Issues Emergency Order to Secure Southeast Power Grid Amid Heat Wave
WASHINGTON—The Department of Energy (DOE) today issued an emergency order authorized by Section 202(c) of the Federal Power Act to address potential grid shortfall issues in the Southeast U.S. The order, issued amid surging power demand, will help mitigate the risk of blackouts brought on by high temperatures across the Southeast region.

“As electricity demand reaches its peak, Americans should not be forced to wonder if their power grid can support their homes and businesses. Under President Trump’s leadership, the Department of Energy will use all tools available to maintain a reliable, affordable, and secure energy system for the American people,” said U.S. Secretary of Energy Chris Wright. “This order ensures Duke Energy Carolinas can supply its customers with consistent and reliable power throughout peak summer demand.”

The order authorizes Duke Energy Carolinas to operate specific electric generating units within its area at maximum generation output levels due to ongoing extreme weather conditions and to preserve the reliability of the bulk electric power system. Orders such as this, issued by the Office of Cybersecurity, Energy Security, and Emergency Response (CESER), are in accordance with President Trump’s Executive Order “Declaring a National Energy Emergency” and will ensure the availability of generation needed to meet high electricity demand and minimize the risk of blackouts. The order is in effect from June 24 to June 25, 2025.

Background: FPA Section 202(c) gives DOE the ability to support energy companies in serving their customers during emergencies, when they would otherwise not be capable of supplying Americans with reliable, consistent power, by providing a waiver of federal, state, or local environmental laws and regulations. The waivers have limitations to ensure public safety and interest are prioritized.

Pentagon-backed battery innovation facility opens at UT Dallas
Dive Brief: The University of Texas at Dallas earlier this month announced the opening of its Batteries and Energy to Advance Commercialization and National Security (BEACONS) facility, which aims to help commercialize new battery technologies, China-proof the lithium-ion battery supply chain and bolster the national battery workforce. The facility is funded by a $30 million award from the U.S. Department of Defense and is also collaborating with industry partners including Associated Universities Inc. and LEAP Manufacturing. “We want to have that supply chain resilience and independence from the Chinese supply chain,” said BEACONS Director Kyeongjae Cho. “So that even if things really go bad and China decides to cut off [access to] all of these critical mineral supplies, the [domestic battery supply] will not be impacted by that, especially those going to defense applications.”

Dive Insight: DOD provides a lot of battery demand, Cho said, because it needs to operate energy-intensive technology in the field. The Pentagon’s battery supply chain is set to shrink after the 2024 National Defense Authorization Act barred DOD from procuring batteries from some Chinese-owned entities starting in October 2027. The banned suppliers are China’s Contemporary Amperex Technology, BYD, Envision Energy, EVE Energy Company, Gotion High-tech and Hithium Energy Storage. China currently dominates the “active materials production portion” of the lithium battery supply chain, according to a 2024 article from the Center for Strategic and International Studies. “Previously, a lot of defense applications were purchasing batteries from Chinese manufacturers,” Cho said. “So that’s creating this dependence on the Chinese supply, and under the unlikely but unfavorable scenario, our defense would be stuck in their supply chain. That’s something we want to avoid.” The program is particularly focused on advancing solid-state battery technology, which is more commonly used for drones and defense applications.

Summer power bills are going up, federal government warns
As parts of the United States bake under the first heat wave of summer, the federal government has warned consumers in many regions to brace for higher power bills over the next few months. “From June through September, residential customers in the United States can expect average monthly electricity bills of $178, a slight increase from last summer’s average of $173,” the U.S. Energy Information Administration said Monday. The largest increase will hit New England, where the average monthly bill is expected to rise $13 from last summer, to $193 a month. The Pacific and Mountain regions could see bills decline slightly, according to EIA’s analysis.

EIA said it based its bill analysis on expectations for a “slight decrease in consumption, driven by cooler forecast summer temperatures relative to last summer.” But other experts say the federal government’s energy data arm is being optimistic: according to NASA, 2024 was the hottest summer on record, and the National Oceanic and Atmospheric Administration is anticipating hotter-than-average temperatures this summer. EIA’s projections “are very conservative and assume temperatures about the same,” said Mark Wolfe, executive director of the National Energy Assistance Directors Association. But, Wolfe added, “the issue is not temperatures as much as increasing prices.” Residential electricity prices across the U.S. will average 17 cents/kWh in 2025, rising to 17.6 cents/kWh in 2026, according to EIA’s Short-Term Energy Outlook. Prices averaged 16 cents/kWh in 2023. Consumers “will be hit with yet another year…
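As a back-of-envelope check on those figures (my arithmetic, not EIA’s): an average bill of $178 at 17 cents/kWh implies roughly 1,050 kWh of monthly use, and holding that usage fixed, the 2026 price of 17.6 cents/kWh alone would add about $6 a month.

```python
# Back-of-envelope check on the EIA figures quoted above (illustrative only).
avg_bill_2025 = 178.00   # $/month, June-September forecast
price_2025 = 0.170       # $/kWh, 2025 average
price_2026 = 0.176       # $/kWh, 2026 average

implied_usage = avg_bill_2025 / price_2025           # ~1,047 kWh/month
bill_2026_same_usage = implied_usage * price_2026    # ~$184/month

print(f"Implied usage: {implied_usage:.0f} kWh/month")
print(f"Same usage at 2026 prices: ${bill_2026_same_usage:.2f}/month")
```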

DOE grants Duke Energy authority to exceed power plant permit limits during extreme heat
The U.S. Department of Energy on Tuesday issued an emergency order allowing Duke Energy to exceed emissions limits in its power plant permits in the Carolinas during a heat wave. The emergency order expires at 10 p.m. on Wednesday, when the heat and humidity are expected to ease, according to DOE. The order, issued under the Federal Power Act’s section 202(c), will help reduce the risk of blackouts brought on by high temperatures across the Southeast region, the department said.

Under the order, Duke will be allowed to exceed power plant emissions limits when it declares an Energy Emergency Alert Level 2, which it expects to do, DOE said. The North American Electric Reliability Corp. defines an EEA-2 as when a grid operator cannot provide its expected energy requirements but is still able to maintain minimum contingency reserve requirements, according to the PJM Interconnection. Once Duke declares that the EEA Level 2 event has ended, its generating units would be required to immediately return to operation within their permitted limits, the department said.

In its request to DOE for the emergency order, Duke said about 1,500 MW of its power plants in the Carolinas are offline, while other generating units may be limited by conditions and limitations in their environmental permits, according to the department. DOE issued a similar order for Duke Energy Florida in October in response to Hurricane Milton. DOE has also issued 90-day emergency orders to keep generating units that were set to retire on May 30 operating this summer in Michigan and Pennsylvania.
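The order’s conditions amount to a small state machine: permit exceedances are allowed only while a declared EEA-2 (or higher) alert is active, and units must return within permitted limits as soon as the alert ends. A toy sketch of that logic follows; the class and level names are illustrative assumptions, not DOE or NERC software.

```python
from enum import IntEnum

class EEALevel(IntEnum):
    """Simplified NERC Energy Emergency Alert levels."""
    NORMAL = 0
    EEA1 = 1
    EEA2 = 2  # operator cannot meet expected energy requirements
    EEA3 = 3

class UnitDispatch:
    """Toy model of when a unit may run beyond its permitted emissions limits."""
    def __init__(self) -> None:
        self.alert = EEALevel.NORMAL

    def declare(self, level: EEALevel) -> None:
        self.alert = level

    def may_exceed_permit(self) -> bool:
        # Exceedance is allowed only while an EEA-2 or higher alert is active.
        return self.alert >= EEALevel.EEA2

unit = UnitDispatch()
unit.declare(EEALevel.EEA2)
assert unit.may_exceed_permit()
unit.declare(EEALevel.NORMAL)  # alert ends: must return to permitted limits
assert not unit.may_exceed_permit()
```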

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Microsoft president Brad Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do…

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to…
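The LLM-as-judge pattern mentioned above is straightforward to sketch: one or more models score a candidate answer against a rubric, and the scores are aggregated. A minimal, hypothetical Python sketch, with `call_model` standing in for any provider’s chat API and the judge names invented for illustration:

```python
from statistics import mean

JUDGE_PROMPT = """Rate the ANSWER to the QUESTION on a 1-5 scale for accuracy.
Reply with only the number.

QUESTION: {question}
ANSWER: {answer}"""

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real chat-completion call to `model`."""
    return "4"  # canned reply so the sketch runs end to end

def judge_answer(question: str, answer: str, judges: list[str]) -> float:
    """Have several models act as judges and average their scores."""
    prompt = JUDGE_PROMPT.format(question=question, answer=answer)
    scores = [float(call_model(m, prompt).strip()) for m in judges]
    return mean(scores)

# Three judge models, as cheaper models make practical.
score = judge_answer(
    question="What year did the Apollo 11 moon landing occur?",
    answer="1969",
    judges=["judge-model-a", "judge-model-b", "judge-model-c"],
)
print(f"Mean judge score: {score:.1f}")
```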

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle…
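As described, the second paper pairs an attack-generating model with auto-generated rewards that favor attacks that both succeed and differ from past attempts. The loop below is a heavily simplified, hypothetical rendering of that idea; every function is a stub assumed for illustration, not OpenAI’s method.

```python
import random

def generate_attack(seed_topics: list[str]) -> str:
    """Stub attacker; in the paper's setup an RL-trained model proposes attacks."""
    return f"Attempt to elicit unsafe output about {random.choice(seed_topics)}"

def target_model(prompt: str) -> str:
    """Stand-in for the model under test."""
    return "SAFE: refused"

def attack_succeeded(response: str) -> bool:
    """Stub success signal; a rule-based or model-based judge in practice."""
    return "UNSAFE" in response

def novelty(attack: str, history: list[str]) -> float:
    """Stub diversity signal: fraction of past attacks this one differs from."""
    if not history:
        return 1.0
    return sum(attack != past for past in history) / len(history)

history: list[str] = []
for step in range(5):
    attack = generate_attack(["topic-a", "topic-b"])
    response = target_model(attack)
    # Reward combines success with novelty, pushing the attacker toward a broad
    # spectrum of new attacks rather than repeating a single exploit.
    reward = (1.0 if attack_succeeded(response) else 0.0) + 0.5 * novelty(attack, history)
    history.append(attack)
    # An RL update on the attacker would consume `reward` here; omitted in this stub.
```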

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but decided to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024.

Trio of buildings snapped up

London-headquartered EEH Ventures was founded in 2013 and owns a number of residential buildings, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030.

Aberdeen big deals

The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based…

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.

Top ransomware predictions for 2025:

● AI-powered social engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound ever more realistic by adopting local accents and dialects, to enhance credibility and success rates.

● The trifecta of social engineering attacks: vishing, ransomware and data exfiltration. Sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks: focusing on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.

● Targeted industries under siege: Manufacturing, healthcare, education and energy will remain primary targets, with no slowdown in attacks expected.

● New SEC regulations drive increased transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.

● Ransomware payouts are on the rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by groups that have adopted a sophisticated profit-sharing model using Ransomware-as-a-Service.

To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:

● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.

● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops…

From MIT to low Earth orbit
Not everyone can point to the specific moment that set them on their life’s course. But for me, there’s no question: It happened in 1982, when I was a junior at MIT, in the Infinite Corridor. In those pre-internet days, it was where we got the scoop about everything that was happening on campus. One day, as I was racing to the chemistry labs, a poster caught my eye. As I remember it, the poster showed a smiling woman in a flight suit, holding a helmet by her side. I recognized her immediately: Sally Ride, one of America’s first group of female astronauts. It had just been announced that she would be part of the crew for one of the upcoming space shuttle flights, making her the first American woman in space. And while she was visiting Lincoln Lab for training, she would be giving a speech and attending a reception hosted by the Association of MIT Alumnae. A woman speaker was still a novelty at MIT in those days. But a woman astronaut? I knew this was one event I had to attend.

Coleman sits in the rear seat of a supersonic T-38 jet for pilot training as a newly minted NASA astronaut candidate in 1992. “When a chemist gets to fly a T-38, she will always be smiling,” she says.

On the day of Sally Ride’s talk, I hurried into 10-250, the large lecture hall beneath the Great Dome that is the emblem of MIT. Sandy Yulke, the chair of the Association of MIT Alumnae, was already introducing Sally. Sally. Just a first name. As if she were one of us. I slid into an empty seat just a few rows back as Sandy talked about how proud she was to welcome the soon-to-be first American woman in space. And Sally was standing there, right where our professors stood every day. A woman. And an astronaut.

When I was growing up in the 1960s and ’70s, the image I’d had of astronauts—or any kind of explorer, for that matter—could not have been further from the figure before me that day. And I’m not just talking about images I saw in the media—I had one much closer to home. My dad—James Joseph Coleman, known as JJ—was a career naval officer who ultimately led the Experimental Diving Unit. A legend among Navy divers, he had also been a project officer for the Sealab program that built the first underwater habitats, allowing men—and it was all men at the time—to live and work in the deep seas for extended periods. The spirit of exploration, the desire to understand fascinating and challenging environments, seemed normal to me. But because none of the explorers I saw looked like me, it didn’t occur to me that I could be one. My dad worked in a male-dominated world where I’m sure very few of his colleagues imagined that people like me might belong too.
By the time I got to MIT, in 1979, only six women had been selected as NASA astronauts. But seeing Sally Ride on the stage that day turned a possibility into a reality—a reality that could include me. Instead of being larger than life, she was surprisingly real and relatable: a young, bright-eyed woman, with wavy brown hair kind of like mine, wearing a blue flight suit and black boots. She seemed a little shy, looking down at her hands as she was introduced and applauded. Sally was obviously passionate about her scientific work—she was an accomplished astrophysicist—but she also had this amazing job where she flew jets, practiced spacewalking, and was part of a crew with a mission. Both scientist and adventurer, she was accomplishing something that no American woman ever had—and, in the process, opening the door for the rest of us. As I listened to her speak that day, an utterly unexpected idea popped into my head: Maybe I—Cady Coleman—could have that job.
If you can see it, you can be it. Representation doesn’t fix everything, but it changes, on a visceral level, the menu of options that you feel you can reach for. No matter how many people tell us we can be whatever we want to be—and my mother told me that from the moment I was old enough to understand—some of us need more than words. Representation matters. A lot. We are enormously influenced by the signals that we get from our surroundings. What do people expect of us? What models do we have? What limitations do we internalize without knowing it? In her quiet, matter-of-fact way, Sally Ride shattered assumptions I didn’t know I’d taken on. Like so many people at MIT, I was an explorer at heart. What if I could explore in space as well as in the lab?

Becoming an astronaut

No one just becomes an astronaut. Every astronaut is something else first. At MIT, I had fallen in love with organic chemistry and was determined to become a research chemist, hoping to use science to improve people’s lives. Because I attended MIT on an ROTC scholarship, I was commissioned as a second lieutenant in the US Air Force upon graduation, but I was given permission to get my doctorate in polymer science and engineering from UMass Amherst before serving. I was then stationed at Wright-Patterson Air Force Base, where I worked on new materials for airplanes and consulted on NASA’s Long Duration Exposure Facility experiment. I also set endurance and tolerance records as a volunteer test subject in the centrifuge at the aeromedical laboratory, testing new equipment. But the ideas that Sally Ride had sparked were never far from my mind, and when NASA put out a call for new astronauts in 1991, I applied—along with 2,053 others. I was among the 500 who got our references checked, and then one of about 90 invited to Houston for an intense weeklong interview and physical. In 1992, after months of suspense, I got the fateful phone call asking, “Would you still like to come and work with us at NASA?” Thrilled beyond words, I felt a kind of validation I’d never experienced before and have never forgotten. Four months later, I reported for duty at the Johnson Space Center. Knowing that years of rigorous training lay ahead before I might launch into space on a mission, I couldn’t wait to dive in.

That training turned out to be a wild ride. Within days of our arrival in Houston, we ASCANs (NASA-speak for astronaut candidates) headed to Fairchild Air Force Base in Washington state for land survival training. We practiced navigation skills and shelter building. Knots were tied. Food was scavenged. Worms were eaten. Tired, grubby people made hard decisions together. Rules were broken. Fun was had. And, importantly, we got to know one another. Water survival skills were next—we learned to disconnect from our parachutes, climb into a raft, and make the most of the supplies we had in case we had to eject from a jet or the space shuttle.

Coleman and the rest of the STS-93 crew head to Launch Pad 39-B for their second attempt at liftoff on the space shuttle Columbia. With this mission, Eileen M. Collins (front row, right) would become the first woman to serve as commander of a shuttle mission.

Back in Houston, we learned about each of the shuttle systems, studying the function of every switch and circuit breaker. (For perspective, the condensed manual for the space shuttle is five inches thick.) The rule of thumb was that if something was important, then we probably had three, so we’d still be okay if two of them broke.
We worked together in simulators (sims) to practice the normal procedures and learn how to react when the systems malfunctioned. For launch sims, even those normal procedures were an adventure, because the sim would shake, pitch, and roll just as the real shuttle could be expected to on launch day. We learned the basics of robotics, spacewalking, and rendezvous (how to dock with another spacecraft without colliding), and we spent time at the gym, often after hours, so we’d be in shape to work in heavy space suits. Our training spanned everything from classes in how to use—and fix—the toilet in space to collecting meteorites in Antarctica, living in an underwater habitat, and learning to fly the T-38, an amazing high-performance acrobatic jet used to train Air Force pilots. (On our first training flight, we got to fly faster than the speed of sound.) All of this helped us develop an operational mindset—one geared to making decisions and solving problems in high-speed, high-pressure, real-risk situations that can’t be simulated, like the ones we might encounter in space.

Mission: It’s not about you, but it depends on you

Each time a crew of astronauts goes to space, we call it a mission. It’s an honor to be selected for a mission, and an acknowledgment that you bring skills that will make it successful. Being part of a mission means you are part of something that’s bigger than yourself, but at the same time, the role you play is essential. It’s a strange paradox: It’s not about you, but it depends on you. On each of my missions, that sense of purpose brought us together, bridging our personal differences and disagreements and allowing us to achieve things we might never have thought possible. A crew typically spends at least a year, if not a few years, training together before the actual launch, and that shared mission connects us throughout.
In 1993, I got word that I’d been assigned to my first mission aboard the space shuttle. As a mission specialist on STS-73, I would put my background as a research scientist to use by performing 30 experiments in microgravity. These experiments, which included growing potatoes inside a locker (just like Matt Damon in The Martian), using sound to manipulate large liquid droplets, and growing protein crystals, would advance our understanding of science, medicine, and engineering and help pave the way for the International Space Station laboratory.

While training for STS-73, I got a call from an astronaut I greatly admired: Colonel Eileen Collins. One of the first female test pilots, she would become the first woman to pilot the space shuttle in 1995, when the STS-63 mission launched. Collins had invited some of her heroes—the seven surviving members of the Mercury 13—to attend the launch, and she was calling to ask me to help host them. The Mercury 13 were a group of 13 women who in the early 1960s had received personal letters from the head of life sciences at NASA asking them to be part of a privately funded program to include women as astronauts. They had accepted the challenge and undergone the same grueling physical tests required of NASA’s first astronauts. Although the women performed as well as or better than the Mercury 7 astronauts on the selection tests, which many of them had made sacrifices even to pursue, the program was abruptly shut down just days before they were scheduled to start the next phase of testing. It would be almost two decades before NASA selected its first female astronauts. Never had I felt more acutely aware of being part of that lineage of brave and boundary-breaking women than I did that day, standing among those pioneers, watching Eileen make history. I can’t know what the Mercury 13 were thinking as they watched Eileen’s launch, but I sensed that they knew how much it meant to Eileen to be carrying their legacy with her in the pilot seat of that space shuttle.

Missions and malfunctions

A couple of years after I had added my name to the still-too-short list of women who had flown in space, Eileen called again. This time she told me that I would be joining her on her next mission, STS-93, scheduled to launch in July 1999. Our Mercury 13 heroes would attend that launch too, and Eileen would be making history once again, this time as NASA’s first female space shuttle commander. I would be the lead mission specialist for delivering the shuttle’s precious payload, the Chandra X-ray Observatory, to orbit. I’d also be one of the EVA (extravehicular activity) crew members, if any spacewalking repairs were needed. Our mission to launch the world’s most powerful x-ray telescope would change the world of astrophysics. With eight times the resolution of its predecessors and the ability to observe sources that were fainter by a factor of more than 20, Chandra was designed to detect x-rays from exploding stars, black holes, clusters of galaxies, and other high-energy sources throughout the universe. Because cosmic x-rays are absorbed by our atmosphere, we can’t study them from Earth, so an x-ray telescope must operate from well above our atmosphere. Chandra would launch into low Earth orbit on the shuttle and then require additional propulsion to achieve its final orbit, a third of the way to the moon. I was thrilled by the idea that my team and I would be launching a telescope whose work would continue long after we were back on Earth. Preparation for launch was intense.
As Chandra’s shepherd, I needed to be able to perform what we called the deploy sequence in my sleep. And I had to have a close relationship with the folks at the Chandra Mission Control, which was separate from NASA Mission Control, and make sure the two groups were working together. In a very real sense, Chandra represented the future of astrophysics—a window that promised a deeper understanding of the universe. When the moment came for the telescope to be deployed, all of this would be, quite literally, in my hands. But first it was in the hands of the launch team at the Kennedy Space Center, whose job it was to get us off the ground and into orbit. And we almost didn’t make it. Our first launch attempt was aborted eight seconds before liftoff. There we were, waiting for the solid rocket boosters to ignite and the bolts holding us to the launchpad to explode. Instead, we heard “Abort for a hydrogen leak” from Launch Control. Later it was revealed that a faulty sensor had been the issue.
For our second attempt, we were confidently told we were “one hundred percent GO for weather.” In other words, there was not even a hint of bad weather to delay us. And then there were lightning strikes at the launchpad. Really. For our third launch attempt, under a bright moon on a cool, clear night, we strapped in and the countdown began. This time I was determined I wouldn’t take anything for granted—even in those final 30 seconds after control switched over to the shuttle’s internal computers. Even when the engines kicked in and I felt the twang of the nose tipping forward and then back. Only when the solid rockets ignited did I let myself believe that we were actually heading back to space. As a seasoned second-time flyer, I kept my excitement contained, but inside I was whooping and hollering. And then, as Columbia rolled to the heads-down position just seconds after liftoff, my joyful inner celebration was drowned out by an angry alert tone and Eileen’s voice on the radio:
Houston: Columbia is in the roll and we have a fuel cell pH number one.

Almost immediately, we got a response from the flight controllers in Houston:

Columbia, Houston: We’d like AC bus sensors to OFF. We see a transient short on AC1.

It was incomprehensible to be hearing these words less than 30 seconds into our actual flight. An electrical short had taken out two of our six main engine controllers. My first thought: We know how to deal with this. We did it last week in the simulator. But we weren’t in the simulator anymore. This was a real, no-shit emergency. After we returned to Earth we realized just how close we’d come to several actual life-or-death situations. No matter how much you train for just such a moment, you can’t really anticipate what it will mean to find yourself in one. I was relieved that it wasn’t long before I heard the steady voice of Jeff Ashby, our pilot, confirming that he had successfully flipped the bus sensor switches, reducing our exposure to the potential catastrophe of additional engine shutdowns.

The Space Shuttle Columbia lifted off from Kennedy Space Center on July 23, 1999, for a five-day mission that would include releasing the Chandra X-ray Observatory.

We were still headed to space, but with the loss of some of our backup capabilities, we were vulnerable. We carefully monitored the milestones that would tell us which options we still had. I tried not to hold my breath as the shuttle continued to climb and we listened for updates from Houston:

Columbia, Houston: Two Engine Ben. Translation: We could lose an engine and still safely abort the mission and make it to our transatlantic landing site in Ben Guerir, Morocco.

Columbia, Houston: Negative return. Translation: We were too far along to perform an RTLS (return to launch site) and head back to Florida.

Then finally, the call we’d been wishing and waiting for:

Columbia, Houston: PRESS TO MECO. Translation: We would make it to orbit and main engine cutoff even if one of our engines failed in the next few minutes.

Now, assured of a safe orbit as we hurtled through space, we could turn our attention to our mission: sending Chandra off to its new home. An electrical short is a serious problem. After our mission landed, the shuttle fleet would be grounded for months after inspections revealed multiple cases of wire chafing on the other shuttles. Some would call us lucky, but listening to the audio from our cockpit and from Mission Control, I credit the well-trained teams that worked their way patiently through multiple failures catalyzed by the short and by a separate, equally dangerous issue: a slow leak in one of our three engines used during launch. Our STS-93 launch would go down in the history books as the most dangerous ascent of the shuttle program that didn’t result in an accident. Even in the midst of it, my sense of mission helped anchor me.

The Chandra X-ray Observatory was deployed from the space shuttle Columbia’s payload bay on July 23, 1999, just a few hours after the shuttle’s arrival in Earth orbit.

The plan in 1999 had been that Chandra would last five years. But as of this writing, Chandra is 25 and still sending valuable data back from space. Each year, on its “birthday,” the crew from STS-93 and the teams who worked on the ground connect via email, or in person for the big ones. We’ll always share a bond from that mission and its continuing legacy. And what a legacy it is.
Young astronomers who were still toddlers when I pulled that deploy switch are now making discoveries based on the data it’s produced. Chandra is responsible for almost everything that we now know about black holes, and it’s still advancing our understanding of the universe by giant leaps. But these are difficult times. Sadly, budget cuts proposed in 2025 would eliminate Chandra, with no replacements planned.

Suiting up and making change

People often wonder what would possess any sane person to strap themselves on top of a rocket. And by now you’re probably wondering why, after the harrowing malfunctions during the STS-93 launch, I was eager not only to return to space again but to spend six months living and working aboard the International Space Station. It comes back to mission. I don’t consider myself to be braver than most people, though I may be more optimistic than many. I take the risks associated with my job because I believe in what we’re doing together, and I trust my crew and our team to do all that’s humanly possible to keep us safe.
But the odds were stacked against me in my quest to serve on the space station. The world of space exploration, like so many others, is slow to change. Long-standing inequities were still evident when I joined NASA in 1992, and many endured during my time there. But it can be difficult to know when to fight for change at the outset and when to adapt to unfair circumstances to get your foot in the door. The first trained astronauts tended to be tall, athletic, and male—and the biases and assumptions that led to that preference were built into our equipment, especially our space suits. Our one-piece orange “pumpkin suits” worn for launching and landing weren’t designed for people with boobs or hips, so many of us wound up in baggy suits that made fitting a parachute harness tricky and uncomfortable. But fit issues with our 300-pound white spacewalking suits proved to be a much bigger problem, especially for the smaller-framed astronauts—including some men. The bulky EVA suits, which allow astronauts who venture outside a spacecraft to breathe and communicate while regulating our temperature and protecting us from radiation, are essentially human-shaped spaceships. But while they came in small, medium, large, and extra-large, those suits were designed for (male) astronauts of the Apollo era with no thought to how they might work for different body types. Given that ill-fitting equipment would affect performance, astronauts like me—who weren’t shaped like Neil Armstrong, Buzz Aldrin, and their compatriots—were often negatively prejudged before we even started training. As a result, NASA failed for years to leverage the skills of many members of the astronaut corps who physically didn’t fit an institutional template that hadn’t been redesigned for half a century.
Spacewalk training was the most physically difficult thing I did as an astronaut. Training in that way-too-large space suit made it even harder, forcing me to find ways to optimize my ability to function.

As she prepares to head into the pool for EVA training, Coleman dons glove liners. Next, the bottom of her suit will be attached to the top and her gloves will be attached at the wrist ring, locked, and tested for a solid seal. Coleman qualified as a spacewalker for all of her missions, even when that required doing so in a medium suit that was much too big.

We practice spacewalking underwater in an enormous swimming pool. If the suit is too big for you—as even the small was for me—the extra volume of air inside drags you up to the surface when you’re trying to work underwater. It’s a profound physical disadvantage. Though the fit of the small spacewalking suit wasn’t great, I persevered and adapted, training for many years in that suit with above-average spacewalking grades. And I was chosen to serve as a spacewalker for both of my shuttle missions, should the need arise.

Not long before my first mission, Tom Akers, one of the experienced spacewalkers, came up to me and said, “Cady, I can see that you have a real aptitude for spacewalking and also a head that thinks like a spacewalker.” But then he told me that to cut costs, NASA had decided not to use the small suits on the space station. “People are going to look at you and think you’re too small, but I think someone like you could learn to function inside a medium suit,” he said. “So my advice is this: If you are interested in flying on the space station, then when someone asks you what size suit you wear, you tell them a medium will be no problem.” Sure enough, after my second shuttle flight, NASA announced that the small suit would be eliminated. I’ve never forgotten the wording of the rationale: “We’ve looked ahead at the manifest, and we have all of the spacewalkers that we need.” Implied was that they wouldn’t miss the smaller astronauts—not a bit.

I think people might not have understood at the time what it meant to get rid of those small space suits. You could not live and work on the space station unless you were space-suit qualified. And because NASA was about to shut down the shuttle program, soon missions to the space station would be the only ones there were. NASA’s decision to eliminate the small suit effectively grounded more than a third of female astronauts. It also meant that few women would have the experience needed to serve in positions where they could have a say in important decisions about everything from prioritizing missions and choosing crews to making changes in NASA’s culture. To me, eliminating the small space suit indicated that the organization didn’t understand the value of having teams whose members contribute a wide range of experiences and viewpoints. When team members are too much alike—in background, ways of thinking and seeing the world, and, yes, gender—the teams are often less effective at finding innovative solutions to complex problems.

Determined to contribute to the important scientific work being done on the space station, I had no choice but to qualify in the medium suit. But it would be a tall order because for the instructors, the gear is seldom at fault. You just need to get used to it, understand it better, or practice more. I did all three—but it wasn’t enough. So I also adapted everywhere I could, and I recruited a lot of great help.
Kathy Thornton, one of the first female spacewalkers, recommended that I buy a waterskiing vest at Walmart to wear inside the suit. The space-suit team was horrified at the thought of using nonregulation materials, but it got them thinking. Together, we settled on having me wear a large girdle—left over from the Apollo guys—and stuffing it with NASA-approved foam to center me in the suit. This kept the air pockets more evenly distributed and allowed me to practice the required tasks, showing that I could work effectively in a medium.

By adapting, which sometimes means staying silent, you may perpetuate a discriminatory system. But if I’d tried to speak the truth from day one, I’d never have made it to the day when I was taken seriously enough to start conversations about the importance of providing all astronauts with equipment that fits. I needed to launch those discussions from a place of strength, where I could be heard and make a difference. How best to catalyze change is always a personal decision. Sometimes drawing a line in the sand is the most effective strategy. Other times, you have to master the ill-fitting equipment before you get a chance to redesign it. Qualifying in the too-large suit was my only option if I wanted to fly on the International Space Station, since every flight to the ISS needed two spacewalkers and a backup spacewalker—and there were only three seats in the space capsule. The alternative would have been waiting at least 11 years for the newer spacecraft, which would have a fourth seat. I had to play by the unfair rules in order to get to a point where I could change those rules.

With grit and a lot of support from others, I did end up qualifying in the medium suit. And in 2010, I set off for the International Space Station, serving as the lead robotics and science officer for Expedition 26/27 as I traveled 63,345,600 miles in 2,544 orbits over 159 days in space.

Coleman conducts the Capillary Flow Experiment on the International Space Station to study the behavior of liquids at interfaces in microgravity. NASA/PAOLO NESPOLI

Today, efforts are underway to redesign NASA’s space suits to fit the full range of sizes represented in the astronaut corps. Because of the work I put in to make it possible for a wider range of people to excel as spacewalkers, NASA hung a portrait of me in the row of space-suit photos outside the women’s locker room. And I’m proud to know that my colleagues—women and men—are continuing the work of making change at NASA. Every change has been hard won. The numbers matter. The astronaut corps is now 40% women. Given that, it is harder to make decisions with the potential to leave women out. When a female NASA astronaut walks on the moon for the first time, she will do so in a redesigned space suit. I hope it fits her like a glove.

The crew of spaceship Earth

Contributing to an important mission is a privilege. But who gets to contribute is as important to mission success as it is to the individuals who want to play a part. I can’t emphasize enough how much our incredibly complex NASA missions have benefited from the broad range of people involved. Bringing together people of different backgrounds and skills, with different ways of seeing the world and unique perspectives on opportunities and problems, is what makes space exploration possible.
At the White House Science Fair in 2016, Coleman sits with the “Supergirls” Junior FIRST Lego League Team from Girl Scout Troop 411 in Tulsa, Oklahoma, as they await the arrival of President Barack Obama. NASA/JOEL KOWSKY

Sharing space, to me, means including more people—both in the privilege of going to space and in so many of our endeavors here on Earth. When I applied to be an astronaut, very few women had orbited our planet. Today, that number has grown to 82 of 633 human beings in total, and newer NASA astronaut classes have averaged 50% women. Spaceflight is making progress in terms of including people with a wider range of backgrounds, fields of expertise, and physical abilities. But we have a long way to go. And the same is true in countless fields—the barriers that we struggle with in space exploration seem to be ubiquitous in the working world.

As a planet, we’re facing enormous challenges, in areas from climate change to public health to how to sustainably power our endeavors. If there’s one thing I learned above all else from my time in space, it’s that we’re all sharing Earth. No one else is coming to solve our complex problems. And we won’t find solutions with teams of people who share too much in common. We need everyone to contribute where they can, and so we need to create systems, environments, and equipment that make that possible. And we need to be sure that those making contributions are visible, so they can serve as models for future generations. Our job now is to make sure everyone gets enough support to acquire the skills that we—all of us—need to build collaborative teams and solve problems both on Earth and in space.

It’s worth repeating: We’re all sharing Earth. Looking down from space, you see very few borders separating humans from one another. You understand—not as an abstract ideal but as a visceral, obvious reality—that we are one family sharing a precious, life-supporting home. It’s so clear from up there that we are all the crew of “Spaceship Earth.” I believe that sharing that perspective, bringing it to life, will help more people see that our differences matter less than what binds us together, and motivate us to combine our efforts to tackle the challenges affecting all of us.

In her 24 years at NASA, Cady Coleman ’83, a scientist, musician, and mother of two, flew on two space shuttle missions and began her 159-day mission aboard the International Space Station the day after turning 50. Today, as a public speaker and consultant, she shares her insights on leadership and teamwork gleaned from the high-stakes world of space exploration.

What if computer history were a romantic comedy?
The computer first appeared on the Broadway stage in 1955 in a romantic comedy—William Marchant’s The Desk Set. The play centers on four women who conduct research on behalf of the fictional International Broadcasting Company. Early in the first act, a young engineer named Richard Sumner arrives in the offices of the research department without explaining who he is or why he is studying the behavior of the workers. Bunny Watson, the head of the department, discovers that the engineer plans to install an “electronic brain” called Emmarac, which Sumner affectionately refers to as “Emmy” and describes as “the machine that takes the pause quotient out of the work–man-hour relationship.” What Sumner calls the “pause quotient” is jargon for the everyday activities and mundane interactions that make human beings less efficient than machines. Emmarac would eliminate inefficiencies, such as walking to a bookshelf or talking with a coworker about weekend plans. Bunny Watson comes to believe that the computing machine will eliminate not only inefficiencies in the workplace but also the need for human workers in her department. Sumner, the engineer, presents the computer as a technology of efficiency, but Watson, the department head, views it as a technology of displacement.

Bunny Watson’s view was not uncommon during the first decade of computing technology. Thomas Watson Sr., president of IBM, insisted that one of his firm’s first machines be called a “calculator” instead of a “computer” because “he was concerned that the latter term, which had always referred to a human being, would raise the specter of technological unemployment,” according to historians Martin Campbell-Kelly and William Aspray. In keeping with the worry of both Watsons, the computer takes the stage on Broadway as a threat to white-collar work. The women in Marchant’s play fight against the threat of unemployment as soon as they learn why Sumner has arrived. The play thus attests to the fact that the very benefits of speed, accuracy, and information processing that made the computer useful for business also caused it to be perceived as a threat to the professional-managerial class.

This threat was somewhat offset by the fact that for most of the 1950s, the computing industry was not profitable in the United States. Manufacturers produced and sold or leased the machines at steep losses, primarily to preserve a speculative market position and to bolster their image as technologically innovative. For many such firms, neglecting to compete in the emerging market for computers would have risked the perception that they were falling behind. They hoped computing would eventually become profitable as the technology improved, but even by the middle of the decade, it was not obvious to industry insiders when this would be the case. Even if the computer seemed to promise a new world of “lightning speed” efficiency and information management, committing resources to this promise was almost prohibitively costly.
While firms weighed the financial costs of computing, the growing interest in this new technology was initially perceived by white-collar workers as a threat to the nature of managerial expertise. Large corporations dominated American enterprise after the Second World War, and what historian Alfred Chandler called the “visible hand” of managerial professionals exerted considerable influence over the economy. Many observers wondered if computing machines would lead to a “revolution” in professional-managerial tasks. Some even speculated that “electronic brains” would soon coordinate the economy, thus replacing the bureaucratic oversight of most forms of labor. Howard Gammon, an official with the US Bureau of the Budget, explained in a 1954 essay that “electronic information processing machines” could “make substantial savings and render better service” if managers were to accept the technology. Gammon advocated for the automation of office work in areas like “stock control, handling orders, processing mailing lists, or a hundred and one other activities requiring the accumulating and sorting of information.” He even anticipated the development of tools for “erect[ing] a consistent system of decisions in areas where ‘judgment’ can be reduced to sets of clear-cut rules such as (1) ‘purchase at the lowest price,’ or (2) ‘never let the supply of bolts fall below the estimated one-week requirement for any size or type.’”
Gammon’s essay illustrates how many administrative thinkers hoped that computers would allow upper-level managers to oversee industrial production through a series of unambiguous rules that would no longer require midlevel workers for their enactment. This fantasy was impossible in the 1950s for so many reasons, the most obvious being that only a limited number of executable processes in postwar managerial capitalism could be automated through extant technology, and even fewer areas of “judgment,” as Gammon called them, can be reduced to sets of clear-cut rules.

Still, this fantasy was part of the cultural milieu when Marchant’s play premiered on Broadway, one year after Gammon’s report and just a few months after IBM had announced the advance in memory storage technology behind its new 705 Model II, the first successful commercial data-processing machine. IBM received 100 orders for the 705, a commercial viability that seemed to signal the beginning of a new age in American corporate life. It soon became clear, however, that this new age was not the one that Gammon imagined. Rather than causing widespread unemployment or the total automation of the visible hand, the computer would transform the character of work itself.

Marchant’s play certainly invokes the possibility of unemployment, but its posture toward the computer shifts toward a more accommodative view of what later scholars would call the “computerization of work.” For example, early in the play, Richard Sumner conjures the specter of the machine as a threat when he asks Bunny Watson if the new electronic brains “give you the feeling that maybe—just maybe—that people are a little bit outmoded.” Similarly, at the beginning of the second act, a researcher named Peg remarks, “I understand thousands of people are being thrown out of work because of these electronic brains.” The play seems to affirm Sumner’s sentiment and Peg’s implicit worry about her own unemployment once the computer, Emmarac, has been installed in the third act. After the installation, Sumner and Watson give the machine a research problem that previously took Peg several days to complete. Watson expects the task to stump Emmarac, but the machine takes only a few seconds to produce the same answer.

While such moments conjure the specter of “technological unemployment,” the play juxtaposes Emmarac’s feats with Watson’s wit and spontaneity. For instance, after Sumner suggests people may be “outmoded,” Watson responds, “Yes, I wouldn’t be a bit surprised if they stopped making them.” Sumner gets the joke but doesn’t find it funny: “Miss Watson, Emmarac is not a subject for levity.” The staging of the play contradicts Sumner’s assertion. Emmarac occasions all manner of levity in The Desk Set, ranging from Watson’s joke to Emmarac’s absurd firing of every member of the International Broadcasting Company, including its president, later in the play. This shifting portrayal of Emmarac follows a much older pattern in dramatic comedy. As literary critic Northrop Frye explains, many forms of comedy follow an “argument” in which a “new world” appears on the stage and transforms the society entrenched at the beginning of the play.
The movement away from established society hinges on a “principle of conversion” that “include[s] as many people as possible in its final society: the blocking characters are more often reconciled or converted than simply repudiated.” We see a similar dynamic in how Marchant’s play portrays the efficiency expert as brusque, rational, and incapable of empathy or romantic interests. After his arrival in the office, a researcher named Sadel says, “You notice he never takes his coat off? Do you think maybe he’s a robot?” Another researcher, Ruthie Saylor, later kisses Sumner on the cheek and invites him to a party. He says, “Sorry, I’ve got work to do,” to which Ruthie responds, “Sadel’s right—you are a robot!” Even as Sumner’s robotic behavior portrays him as antisocial, Emmarac further isolates him from the office by posing a threat to the workers.

The play accentuates this blocking function by assigning Emmarac a personality and gender: Sumner calls the machine “Emmy,” and its operator, a woman named Miss Warriner, describes the machine as a “good girl.” By taking its place in the office, Emmarac effectively moves into the same space of labor and economic power as Bunny Watson, who had previously overseen the researchers and their activities. After being installed in the office, the large mainframe computer begins to coordinate this knowledge work. The gendering of the computer thus presents Emmarac as a newer model of the so-called New Woman, as if the computer imperils the feminist ideal that Bunny Watson clearly embodies. By directly challenging Watson’s socioeconomic independence and professional identity, the computer’s arrival in the workplace threatens to make the New Woman obsolete.

Yet much like Frye’s claims about the “argument” of comedy, the conflict between Emmarac and Watson resolves as the machine transforms from a direct competitor into a collaborator. We see this shift during a final competition between Emmarac and the research department. The women have been notified that their positions have been terminated, and they begin packing up their belongings. Two requests for information suddenly arrive, but Watson and her fellow researchers refuse to process them because of their dismissal, so Warriner and Sumner attempt to field the requests. The research tasks are complicated, and Warriner mistakenly directs Emmarac to print a long, irrelevant answer. The machine inflexibly continues although the other inquiry needs to be addressed. Sumner and Warriner try to stop the machine, but this countermanding order causes the machine’s “magnetic circuit” to emit smoke and a loud noise. Sumner yells at Warriner, who runs offstage, and the efficiency expert is now the only one to field the requests and salvage the machine. However, he doesn’t know how to stop Emmarac from malfunctioning. Marchant’s stage directions here say that Watson, who has studied the machine’s maintenance and operation, “takes a hairpin from her hair and manipulates a knob on Emmarac—the NOISE obligingly stops.” Watson then explains, “You forget, I know something about one of these. All that research, remember?”
The madcap quality of this scene continues after Sumner discovers that Emmarac’s “little sister” in the payroll office has sent pink slips to every employee at the broadcasting firm. Sumner then receives a letter containing his own pink slip, which prompts Watson to quote Horatio’s lament as Hamlet dies: “Good night, sweet prince.” The turn of events poses as tragedy, but of course it leads to the play’s comic resolution. Once Sumner discovers that the payroll computer has erred—or, at least, that someone improperly programmed it—he explains that the women in the research department haven’t been fired. Emmarac, he says, “was not meant to replace you. It was never intended to take over. It was installed to free your time for research—to do the daily mechanical routine.”

Even as Watson “fixes” the machine, the play fixes the robotic man through his professional failures. After this moment of discovery, Sumner apologizes to Watson and reconciles with the other women in the research department. He then promises to take them out to lunch and buy them “three martinis each.” Sumner exits with the women “laughing and talking,” thus reversing the antisocial role that he has occupied for most of the play.

Emmarac’s failure, too, becomes an opportunity for its conversion. It may be that a programming error led to the company-wide pink slips, but the computer’s near-breakdown results from its rigidity. In both cases, the computer fails to navigate the world of knowledge work, thus becoming less threatening and more absurd through its flashing lights, urgent noises, and smoking console. This shift in the machine’s stage presence—the fact that it becomes comic—does not lead to its banishment or dismantling. Rather, after Watson “fixes” Emmarac, she uses it to compute a final inquiry submitted to her office: “What is the total weight of the Earth?” Given a problem that a human researcher “can spend months finding out,” she chooses to collaborate. Watson types out the question and Emmarac emits “its boop-boop-a-doop noise” in response, prompting her to answer, “Boop-boop-a-doop to you.” Emmarac is no longer Watson’s automated replacement but her partner in knowledge work.

In Marchant’s play, comedy provides a template for managing the incongruity of an “electronic brain” arriving in a space oriented around human expertise and professional judgment. This template converts the automation of professional-managerial tasks from a threat into an opportunity, implying that a partnership with knowledge workers can convert the electronic brain into a machine compatible with their happiness. The computerization of work thus becomes its own kind of comic plot.

Art rhymes
As an MIT visiting scholar, rap legend Lupe Fiasco decided to go fishing for ideas on campus. In an approach he calls “ghotiing” (pronounced “fishing”), he composed nine raps inspired by works in MIT’s public art collection, writing and recording them on site. On May 2, he and the MIT Festival Jazz Ensemble debuted six of them, performing in front of a packed audience in Kresge for the final performance of the MIT Artfinity festival. The concert featured arrangements of Fiasco’s music done by Kevin Costello ’21, grad student Matthew Michalek, students in Fiasco’s Rap Theory and Practice class, and professor Evan Ziporyn. Produced in collaboration with the MIT List Visual Arts Center, Fiasco’s “Ghotiing MIT: Public Art” project also lets campus visitors scan a QR code and listen to his site-specific raps on their phones as they view the artworks in person. A virtual tour pairs seven pieces from MIT’s public art collection with the raps each inspired, and the project has been covered by WBUR, the Boston Globe, and The Guardian.

Windsurf CEO Varun Mohan throws cold water on 1-person, billion-dollar startup idea at VB Transform: ‘more people allow you to grow faster’
Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

As AI-powered tools spread through enterprise software stacks, the rapid growth of the AI coding platform Windsurf is becoming a case study of what happens when developers adopt agentic tooling at scale. In a session at today’s VB Transform 2025 conference, CEO and co-founder Varun Mohan discussed how Windsurf’s integrated development environment (IDE) surpassed one million developers within four months of launch. More notably, the platform now writes over half of the code committed by its user base.

The conversation, moderated by VentureBeat CEO Matt Marshall, opened with a brief but pointed disclaimer: Mohan could not comment on OpenAI’s widely reported potential acquisition of Windsurf. The issue has drawn attention following a Wall Street Journal report detailing a brewing standoff between OpenAI and Microsoft over the terms of that deal and broader tensions within their multi-billion-dollar partnership. According to the WSJ, OpenAI seeks to acquire Windsurf without giving Microsoft access to its intellectual property—an issue that could reshape the enterprise AI coding landscape. With that context set aside, the session focused on Windsurf’s technology, enterprise traction, and vision for agentic development.

>>See all our Transform 2025 coverage here<<

Moving past autocomplete

Windsurf’s IDE is built around what the company calls a “mind-meld” loop—a shared project state between humans and AI that enables full coding flows rather than autocomplete suggestions. With this setup, agents can perform multi-file refactors, write test suites, and even launch UI changes when a pull request is initiated. Mohan emphasized that coding assistance can’t stop at code generation. “Only about 20 to 30% of a developer’s time is spent writing code. The rest is debugging, reviewing, and testing. To truly assist, an AI
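The shape of such a loop can be pictured in a few lines. Below is a minimal, hypothetical sketch of an agentic edit-and-test cycle over a shared project state; the ProjectState structure, the propose_action stand-in for a model call, and the stubbed test runner are illustrative assumptions, not Windsurf's published architecture.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectState:
    """Hypothetical shared human/agent view of a project (the 'mind-meld' state)."""
    files: dict = field(default_factory=dict)         # path -> contents
    test_results: list = field(default_factory=list)  # output of the last test run

def propose_action(state: ProjectState) -> dict:
    """Stand-in for a model call. A real agent would send the shared state
    (or a retrieval over it) to an LLM and parse a structured action back."""
    if not state.test_results:
        return {"kind": "run_tests"}
    if any("FAIL" in r for r in state.test_results):
        # A multi-file refactor would return a list of patches instead of one.
        return {"kind": "edit", "path": "app.py", "patch": "# fix: handle empty input\n"}
    return {"kind": "done"}

def run_tests(state: ProjectState) -> list:
    """Stubbed test runner; a real loop shells out to pytest or similar."""
    fixed = "fix" in state.files.get("app.py", "")
    return ["PASS"] if fixed else ["FAIL: test_empty_input"]

def agent_loop(state: ProjectState, max_steps: int = 10) -> ProjectState:
    """Iterate propose -> act until the agent reports done or steps run out."""
    for _ in range(max_steps):
        action = propose_action(state)
        if action["kind"] == "done":
            break
        if action["kind"] == "edit":
            state.files[action["path"]] = state.files.get(action["path"], "") + action["patch"]
            state.test_results = []  # edits invalidate earlier test results
        elif action["kind"] == "run_tests":
            state.test_results = run_tests(state)
    return state

state = agent_loop(ProjectState(files={"app.py": "def handle(x): return x[0]\n"}))
print(state.test_results)  # ['PASS'] after one edit/test cycle
```

The point the sketch makes is the one Mohan's numbers imply: most of the loop is not code generation but the surrounding steps of running tests, reading results, and deciding what to touch next, which is what separates agentic flows from plain autocomplete.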

Crop signals
Bacteria can be engineered to sense a variety of molecules, such as pollutants or soil nutrients, but usually these signals must be detected microscopically. Now Christopher Voigt, head of MIT’s Department of Biological Engineering, and colleagues have triggered bacterial cells to produce signals that can be read from as far as 90 meters away. Their work could lead to the development of sensors for agricultural and other applications, which could be monitored by drones or satellites. The researchers engineered two different types of bacteria, one found in soil and one in water, so that when they encounter certain target chemicals, they produce hyperspectral reporters—molecules that absorb distinctive wavelengths of light across the visible and infrared spectra. These signatures can be detected with hyperspectral cameras, which determine how much of each color wavelength is present in any given pixel. Though the reporting molecules they developed were linked to genetic circuits that detect nearby bacteria, this approach could also be combined with sensors detecting radiation, soil nutrients, or arsenic and other contaminants. “The nice thing about this technology is that you can plug and play whichever sensor you want,” says Yonatan Chemla, an MIT postdoc who is a lead author of a paper on the work along with Itai Levin, PhD ’24. “There is no reason that any sensor would not be compatible with this technology.” The work is being commercialized through Fieldstone Bio.
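To make the detection step concrete: a hyperspectral camera yields a data cube of height x width x bands, and flagging the reporter reduces to comparing each pixel's spectrum against the reporter's known signature. Below is a minimal sketch using spectral angle mapping, a standard matched-spectrum technique; the Gaussian "signature," cube dimensions, and threshold are invented for illustration and are not the published MIT/Fieldstone pipeline.

```python
import numpy as np

def spectral_angle(cube: np.ndarray, signature: np.ndarray) -> np.ndarray:
    """Per-pixel spectral angle (radians) between an H x W x bands cube
    and a reference signature of length bands; smaller = closer match."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    cos = flat @ signature / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(signature) + 1e-12
    )
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Toy scene: 100x100 pixels, 50 bands; a 10x10 plot "expresses" a made-up
# reporter whose signature is a single absorption-like peak near band 30.
rng = np.random.default_rng(0)
bands = np.arange(50)
signature = np.exp(-0.5 * ((bands - 30) / 3.0) ** 2)
cube = rng.uniform(0.1, 0.3, size=(100, 100, 50))  # background reflectance
cube[40:50, 40:50] += 0.5 * signature              # reporter-bearing pixels

angles = spectral_angle(cube, signature)
flagged = angles < 0.8  # threshold would be calibrated per camera and altitude
print(flagged.sum())    # ~100 pixels: the engineered 10x10 plot stands out
```

Swapping in a different reporter signature is all it takes to look for a different sensor output, which mirrors the plug-and-play framing Chemla describes.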

Cancer-targeting nanoparticles are moving closer to human trials
Over the past decade, Institute Professor Paula Hammond ’84, PhD ’93, and her students have used a technique known as layer-by-layer assembly to create a variety of polymer-coated nanoparticles that can be loaded with cancer-fighting drugs. The particles, which could prevent many side effects of chemotherapy by targeting tumors directly, have proved effective in mouse studies. Now the researchers have come up with a technique that allows them to manufacture many more particles in much less time, moving them closer to human use. “There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, the senior author of a paper on the new technique along with Darrell Irvine, a professor at the Scripps Research Institute. In the original production technique, layers with different properties can be laid down by alternately exposing a particle to positively and negatively charged polymers, with extensive purification to remove excess polymer after each application. Each layer can carry therapeutics as well as molecules that help the particles find and enter cancer cells. But the process is time-consuming and would be difficult to scale up. In the new work, the researchers used a microfluidic mixing device that allows them to sequentially add layers as the particles flow through a microchannel. For each layer, they can calculate exactly how much polymer is needed, which eliminates the slow and costly purification step and saves significantly on material costs.
This microfluidic device can be used to assemble the drug delivery nanoparticles rapidly and in large quantities. GRETCHEN ERTL

This strategy also facilitates compliance with the FDA’s GMP (good manufacturing practice) requirements, which ensure that products meet safety standards and can be manufactured consistently. “There’s much less chance of any sort of operator mistake or mishaps,” says Ivan Pires, PhD ’24, a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, who is the paper’s lead author along with Ezra Gordon ’24. “We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.” In minutes, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses for certain cargos), which would have taken close to an hour with the original process. They say this means it would be realistic to produce more than enough for clinical trials and patient use.

To demonstrate the technique, the researchers created layered nanoparticles loaded with the immune molecule interleukin-12; they have previously shown that such particles can slow growth of ovarian tumors in mice. Those manufactured using the new technique performed similarly to the originals and managed to bind to cancer tissue without entering the cancer cells. This lets them serve as markers that activate the immune system in the tumor, which can delay tumor growth and even lead to cures in mouse models of ovarian cancer. The researchers have filed for a patent and are working with MIT’s Deshpande Center for Technological Innovation in hopes of forming a company to commercialize the technology, which they say could also be applied to glioblastoma and other types of cancer.

Immune molecules may affect mood
Two new studies from MIT and Harvard Medical School add to a growing body of evidence that infection-fighting molecules called cytokines also influence the brain, leading to behavioral changes during illness. By mapping the locations in the brain of receptors for different forms of IL-17, the researchers found that the cytokine acts on the somatosensory cortex to promote sociable behavior and on the amygdala to elicit anxiety. These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences and a senior author of both studies. “If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says. In the cortex, the researchers found certain receptors in a population of neurons that, when overactivated, can lead to autism-like symptoms such as reduced sociability in mice. But the researchers determined that the neurons become less excitable when a specific form of IL-17 binds to the receptors, which may shed light on why autism symptoms in children often abate when they have fevers. Choi hypothesizes that IL-17 may have evolved as a neuromodulator and was “hijacked” by the immune system only later. Meanwhile, the researchers also found two types of IL-17 receptors in a certain population of neurons in the amygdala, which plays an important role in processing emotions. When these receptors bind to two forms of IL-17, the neurons become more excitable, leading to an increase in anxiety. Eventually, findings like these may help researchers develop new treatments for conditions such as autism and depression.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on one week of news.