Your boss is watching

A full day’s work for Dora Manriquez, who drives for Uber and Lyft in the San Francisco Bay Area, includes waiting in her car for a two-digit number to appear. The apps keep sending her rides that are too cheap to pay for her time—$4 or $7 for a trip across San Francisco, $16 for a trip from the airport for which the customer is charged $100. But Manriquez can’t wait too long to accept a ride, because her acceptance rate contributes to her driving score for both companies, which can then affect the benefits and discounts she has access to. 

The systems are black boxes, and Manriquez can’t know for sure which data points affect the offers she receives or how. But what she does know is that she’s driven for ride-share companies for the last nine years, and this year, having found herself unable to score enough better-paying rides, she has to file for bankruptcy. 

Every action Manriquez takes—or doesn’t take—is logged by the apps she must use to work for these companies. (An Uber spokesperson told MIT Technology Review that acceptance rates don’t affect drivers’ fares. Lyft did not return a request for comment on the record.) But app-based employers aren’t the only ones keeping a very close eye on workers today.

A study conducted in 2021, when the covid-19 pandemic had greatly increased the number of people working from home, revealed that almost 80% of companies surveyed were monitoring their remote or hybrid workers. A New York Times investigation in 2022 found that eight of the 10 largest private companies in the US track individual worker productivity metrics, many in real time. Specialized software can now measure and log workers’ online activities, physical location, and even behaviors like which keys they tap and what tone they use in their written communications—and many workers aren’t even aware that this is happening.

What’s more, required work apps on personal devices may have access to more than just work—and as we may know from our private lives, most technology can become surveillance technology if the wrong people have access to the data. While there are some laws in this area, those that protect privacy for workers are fewer and patchier than those applying to consumers. Meanwhile, it’s predicted that the global market for employee monitoring software will reach $4.5 billion by 2026, with North America claiming the dominant share.

Working today—whether in an office, a warehouse, or your car—can mean constant electronic surveillance with little transparency, and potentially with livelihood-ending consequences if your productivity flags. What matters even more than the effects of this ubiquitous monitoring on privacy may be how all that data is shifting the relationships between workers and managers, companies and their workforce. Managers and management consultants are using worker data, individually and in the aggregate, to create black-box algorithms that determine hiring and firing, promotion and “deactivation.” And this is laying the groundwork for the automation of tasks and even whole categories of labor on an endless escalator to optimized productivity. Some human workers are already struggling to keep up with robotic ideals.

We are in the midst of a shift in work and workplace relationships as significant as the Second Industrial Revolution of the late 19th and early 20th centuries. And new policies and protections may be necessary to correct the balance of power.

Data as power

Data has been part of the story of paid work and power since the late 19th century, when manufacturing was booming in the US and a rise in immigration meant cheap and plentiful labor. The mechanical engineer Frederick Winslow Taylor, who would become one of the first management consultants, created a strategy called “scientific management” to optimize production by tracking and setting standards for worker performance.

Soon after, Henry Ford broke down the auto manufacturing process into mechanized steps to minimize the role of individual skill and maximize the number of cars that could be produced each day. But the transformation of workers into numbers has a longer history. Some researchers see a direct line between Taylor’s and Ford’s unrelenting focus on efficiency and the dehumanizing labor optimization practices carried out on slave-owning plantations. 

As manufacturers adopted Taylorism and its successors, time was replaced by productivity as the measure of work, and the power divide between owners and workers in the United States widened. But other developments soon helped rebalance the scales. In 1914, Section 6 of the Clayton Act established the federal legal right for workers to unionize and stated that “the labor of a human being is not a commodity.” In the years that followed, union membership grew, and the 40-hour work week and the minimum wage were written into US law. Though the nature of work had changed with revolutions in technology and management strategy, new frameworks and guardrails stood up to meet that change.

More than a hundred years after Taylor published his seminal book, The Principles of Scientific Management, “efficiency” is still a business buzzword, and technological developments, including new uses of data, have brought work to another turning point. But the federal minimum wage and other worker protections haven’t kept up, leaving the power divide even starker. In 2023, CEO pay was 290 times average worker pay, a disparity that’s increased more than 1,000% since 1978. Data may play the same kind of intermediary role in the boss-worker relationship that it has since the turn of the 20th century, but the scale has exploded. And the stakes can be a matter of physical health.

[Photo: A humanoid robot with folded arms looms over human workers at an Amazon warehouse.]

In 2024, a report from a Senate committee led by Bernie Sanders, based on an 18-month investigation of Amazon’s warehouse practices, found that the company had been setting the pace of work in those facilities with black-box algorithms, presumably calibrated with data collected by monitoring employees. (In California, because of a 2021 bill, Amazon is required to at least reveal the quotas and standards workers are expected to comply with; elsewhere the bar can remain a mystery to the very people struggling to meet it.) The report also found that in each of the previous seven years, Amazon workers had been almost twice as likely to be injured as other warehouse workers, with injuries ranging from concussions to torn rotator cuffs to long-term back pain.

The Sanders report found that between 2020 and 2022, two internal Amazon teams tasked with evaluating warehouse safety recommended reducing the required pace of work and giving workers more time off. One of them found that letting robots set the pace for human labor was correlated with subsequent injuries. The company rejected all the recommendations for technical or productivity reasons. But the report goes on to reveal that in 2022, another team at Amazon, called Core AI, also evaluated warehouse safety and concluded that unrealistic pacing wasn’t the reason all those workers were getting hurt on the job. Core AI said that the cause, instead, was workers’ “frailty” and “intrinsic likelihood of injury.” The issue was the limitations of the human bodies the company was measuring, not the pressures it was subjecting those bodies to. Amazon stood by this reasoning during the congressional investigation.

Amazon spokesperson Maureen Lynch Vogel told MIT Technology Review that the Sanders report is “wrong on the facts” and that the company continues to reduce incident rates for accidents. “The facts are,” she said, “our expectations for our employees are safe and reasonable—and that was validated both by a judge in Washington after a thorough hearing and by the state’s Board of Industrial Insurance Appeals.”

Yet this line of thinking is hardly unique to Amazon, although the company could be seen as a pioneer in the datafication of work. (An investigation found that over one year between 2017 and 2018, the company fired hundreds of workers at a single facility—by means of automatically generated letters—for not meeting productivity quotas.) An AI startup recently placed a series of billboards and bus signs in the Bay Area touting the benefits of its automated sales agents, which it calls “Artisans,” over human workers. “Artisans won’t complain about work-life balance,” one said. “Artisans won’t come into work hungover,” claimed another. “Stop hiring humans,” one hammered home.

The startup’s leadership took to the company blog to say that the marketing campaign was intentionally provocative and that Artisan believes in the potential of human labor. But the company also asserted that using one of its AI agents costs 96% less than hiring a human to do the same job. The campaign hit a nerve: When data is king, humans—whether warehouse laborers or knowledge workers—may not be able to outperform machines.

AI management and managing AI

Companies that use electronic employee monitoring report that they are most often looking to the technologies not only to increase productivity but also to manage risk. And software like Teramind offers tools and analysis to help with both priorities. While Teramind, a globally distributed company, keeps its list of over 10,000 client companies private, it provides resources for the financial, health-care, and customer service industries, among others—some of which have strict compliance requirements that can be tricky to keep on top of. The platform allows clients to set data-driven standards for productivity, establish thresholds for alerts about toxic communication tone or language, create tracking systems for sensitive file sharing, and more. 

[Illustration by Michael Byers: a person lying on the sidewalk next to a bus sign.]

Electronic monitoring and management are also changing existing job functions in real time. Teramind’s clients must figure out who at their company will handle and make decisions around employee data. Depending on the type of company and its needs, Osipova says, that could be HR, IT, the executive team, or another group entirely—and the definitions of those roles will change with these new responsibilities. 

Workers’ tasks, too, can shift with updated technology, sometimes without warning. In 2020, when a major hospital network piloted using robots to clean rooms and deliver food to patients, Criscitiello heard from SEIU-UHW members that they were confused about how to work alongside them. Workers certainly hadn’t received any training for that. “It’s not ‘We’re being replaced by robots,’” says Criscitiello. “It’s ‘Am I going to be responsible if somebody has a medical event because the wrong tray was delivered? I’m supervising the robot—it’s on my floor.’” 

Nurses are also seeing their jobs expand to include technology management. Carmen Comsti of National Nurses United, the largest nurses’ union in the country, says that while management isn’t explicitly saying nurses will be disciplined for errors that occur as algorithmic tools like AI transcription systems or patient triaging mechanisms are integrated into their workflows, that’s functionally how it works. “If a monitor goes off and the nurse follows the algorithm and it’s incorrect, the nurse is going to get blamed for it,” Comsti says. Nurses and their unions don’t have access to the inner workings of the algorithms, so it’s impossible to say what data these or other tools have been trained on, or whether the data on how nurses work today will be used to train future algorithmic tools. What it means to be a worker, manager, or even colleague is on shifting ground, and frontline workers don’t have insight into which way it’ll move next.

The state of the law and the path to protection

Today, there isn’t much regulation on how companies can gather and use workers’ data. While the General Data Protection Regulation (GDPR) offers some worker protections in Europe, no US federal laws consistently shield workers’ privacy from electronic monitoring or establish firm guardrails for the implementation of algorithm-driven management strategies that draw on the resulting data. (The Electronic Communications Privacy Act allows employers to monitor employees if there are legitimate business reasons and if the employee has already given consent through a contract; tracking productivity can qualify as a legitimate business reason.)

But in late 2024, the Consumer Financial Protection Bureau did issue guidance warning companies using algorithmic scores or surveillance-based reports that they must follow the Fair Credit Reporting Act—which previously applied only to consumers—by getting workers’ consent and offering transparency into what data was being collected and how it would be used. And the Biden administration’s Blueprint for an AI Bill of Rights had suggested that the enumerated rights should apply in employment contexts. But none of these are laws.

So far, binding regulation is being introduced state by state. In 2023, the California Consumer Privacy Act (CCPA) was officially extended to include workers and not just consumers in its protections, even though workers had been specifically excluded when the act was first passed. That means California workers now have the right to know what data is being collected about them and for what purpose, and they can ask to correct or delete that data. Other states are working on their own measures. But with any law or guidance, whether at the federal or state level, the reality comes down to enforcement. Criscitiello says SEIU is testing out the new CCPA protections. 

“It’s too early to tell, but my conclusion so far is that the onus is on the workers,” she says. “Unions are trying to fill this function, but there’s no organic way for a frontline worker to know how to opt out [of data collection], or how to request data about what’s being collected by their employer. There’s an education gap about that.” And while CCPA covers the privacy aspect of electronic monitoring, it says nothing about how employers can use any collected data for management purposes.

The push for new protections and guardrails is coming in large part from organized labor. Unions like National Nurses United and SEIU are working with legislators to create policies on workers’ rights in the face of algorithmic management. And app-based advocacy groups have been pushing for new minimum pay rates and against wage theft—and winning. There are other successes to be counted already, too. One has to do with electronic visit verification (EVV), a system that records information about in-home visits by health-care providers. The 21st Century Cures Act, signed into law in 2016, required all states to set up such systems for Medicaid-funded home health care. The intent was to create accountability and transparency to better serve patients, but some health-care workers in California were concerned that the monitoring would be invasive and disruptive for them and the people in their care.

Brandi Wolf, the statewide policy and research director for SEIU’s long-term-care workers, says that in collaboration with disability rights and patient advocacy groups, the union was able to get language into legislation passed in the 2017–2018 term that would take effect the next fiscal year. It indicated to the federal government that California would be complying with the requirement, but that EVV would serve mainly a timekeeping function, not a management or disciplinary one.

Today advocates say that individual efforts to push back against or evade electronic monitoring are not enough; the technology is too widespread and the stakes too high. The power imbalances and lack of transparency affect workers across industries and sectors—from contract drivers to unionized hospital staff to well-compensated knowledge workers. What’s at issue, says Minsu Longiaru, a senior staff attorney at PowerSwitch Action, a network of grassroots labor organizations, is our country’s “moral economy of work”—that is, an economy based on human values and not just capital. Longiaru believes there’s an urgent need for a wave of socially protective policies on the scale of those that emerged out of the labor movement in the early 20th century. “We’re at a crucial moment right now where as a society, we need to draw red lines in the sand where we can clearly say just because we can do something technological doesn’t mean that we should do it,” she says. 

Like so many technological advances that have come before, electronic monitoring and the algorithmic uses of the resulting data are not changing the way we work on their own. The people in power are flipping those switches. And shifting the balance back toward workers may be the key to protecting their dignity and agency as the technology speeds ahead. “When we talk about these data issues, we’re not just talking about technology,” says Longiaru. “We spend most of our lives in the workplace. This is about our human rights.” 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Cisco routers knocked out due to Cloudflare DNS change

Exposes architectural fragility Networking consultant Yvette Schmitter, CEO of the Fusion Collective consulting firm, said the Cloudflare change “exposed Cisco’s architectural fragility when [some Cisco] switches worldwide entered fatal reboot loops every 10-30 minutes.” What happened? “Cloudflare changed record ordering. Cisco’s firmware, instead of handling unexpected DNS responses gracefully, treated

Read More »

Distributed load flexibility: The overlooked relief valve for the grid

Utility leaders are managing a hard reality. Load is growing faster than the grid was built to handle, increasing the likelihood of costly upgrades and rate pressure. AI-driven data center development is making headlines as a major source of load growth. These projects are large, fast-moving and highly visible, drawing public attention and political scrutiny. Utilities and commissions are now facing the question: “How do we add capacity quickly, without overbuilding the grid and pushing costs into rates?” One of the most promising answers is leveraging distributed load flexibility. With distribution-level orchestration, this flexibility can enable utilities to serve growing loads on existing distribution infrastructure, supporting affordability for all customers. Bottom-up load growth requires bottom-up solutions Distributed load flexibility refers to the growing share of new electricity demand that is spatially distributed across the grid and can be shifted temporally. The flexibility opportunity now includes electric vehicles, batteries, heat pumps, smart water heaters and connected building systems. These grid-edge devices are distributed across homes, workplaces and commercial zones. Adoption of these resources is accelerating and many already respond to price signals and automation. Managed EV charging is a common example. A vehicle may be plugged in overnight but only needs a few hours of charging. When charging is scheduled to avoid bottlenecks, that flexibility becomes a real capacity tool for the utility. However, many programs, price signals and planning tools were designed for bulk system needs. Time-of-use rates and demand response reduce system peaks by telling every device to do the same thing: shift away from the bulk system peak. The “just shift it off-peak” solution misses a significant risk. Flexibility without orchestration simply moves the problem rather than solving it. EV charging can synchronize around the same low-cost hours. 
Batteries can do the same if they charge in identical

Read More »

What lies below? Beneath our streets lies treasure — and trouble.

For decades, municipalities and utility companies have buried water mains, gas lines, fiber optic conduits, power cables and other vital infrastructure under streets and sidewalks. Burying these systems improves safety, reliability and aesthetics, but it also creates a long-term problem: once installed, many of these utilities become practically invisible. The ground protects and hides them, but over time the memory of where they run becomes foggy. Records age, streets are widened or rerouted and even highly accurate GPS coordinates can drift or lose relevance as the surface infrastructure changes. The result? What was once a known network of pipes and cables becomes a metaphorical “lost treasure.” When crews dig — for maintenance, upgrades or new construction — that invisibility can turn into a disaster. Minor incidents are common: broken lines that cause service disruptions and headaches. But sometimes the consequences are far worse: ruptured gas mains, flooding from damaged water or sewer lines, or severed fiber cables; all capable of causing serious safety hazards, costly repairs and major public inconvenience. The limitations of “traditional” locating Historically, locating buried plant often relied on conductive materials (metal pipes, conduits or tracer wires). These can be detected with standard locating equipment. But since the 1970s, many new installations, especially gas, water and fiber optic conduits, have been built using nonconductive materials such as PVC or HDPE. When utilities are nonconductive, conventional electromagnetic locators often fail. In blind trials (McMahon, 2000) using standard survey methods, as much as half of all subsurface utilities were missed. [AC1] Additionally, PHMSA (Pipeline Hazardous Materials Safety Administration) has reported (PHMSA, 2025) an average cost of $594,218,469 over the past three years for gas utility pipeline incidents with an average of 11 deaths and 30 injuries. 
More expensive and specialized methods exist, such as ground penetrating radar (GPR), but these have

Read More »

Why robust planning is central to data center success

At one level, the challenge of meeting load growth from data centers is straightforward. After all, the task is fundamentally about quickly delivering a substantial amount of electrons to data centers to power cloud computing and artificial intelligence (AI) applications. The sheer scale of the demand for electricity is enormous, especially for a utility industry that spent decades forecasting flat or even declining load growth. For example, the U.S. Energy Information Administration’s (EIA) most recent Short-Term Energy Outlook (STEO) report forecast commercial electricity consumption to grow 3% in 2025 and 5% in 2026, a revision up from the annual increase of 2% the EIA had previously projected. “The revisions are most notable in the commercial sector, where data centers are an expanding source of demand,” the EIA wrote. A report last year by the Lawrence Berkeley National Laboratory (LBNL) reached similar conclusions, finding that the percentage of the nation’s electricity consumed by data centers could rise from 4.4% in 2023 to as much as 12% in 2028.

Complexity beyond scale

Utilities obviously want to serve data centers. However, the question for both utilities and data center developers is whether extremely large and reliable amounts of electricity can be provided within the required timeframe — speedy access to power is a key strategic imperative for data centers. But the reality is that very little about meeting the enormous electricity demand from data centers is straightforward. For example, besides their sheer size, data centers have unique load characteristics, whether they’re being used to train new models or for inference, when trained AI models produce answers to questions users ask. Additionally, some data center loads are extremely variable and rapidly changing, with power demands fluctuating quickly as compute workloads shift. This rapidly changing variability can put mechanical stress on a behind-the-meter or grid-connected but
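As a rough back-of-envelope illustration (this calculation is ours, not a figure from the LBNL report), the projected rise in the data center share from 4.4% in 2023 to as much as 12% in 2028 implies a compound annual growth rate of about 22%:

```python
# Back-of-envelope: implied compound annual growth rate (CAGR) of the
# data center share of U.S. electricity, per the LBNL figures cited above.
start_share = 4.4   # percent of U.S. electricity, 2023
end_share = 12.0    # upper-end projection, 2028
years = 2028 - 2023

cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied CAGR of the data center share: {cagr:.1%}")
```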

Read More »

OPEC Receives Updated Compensation Plans from 4 Countries

A statement posted on OPEC’s website last week announced that the OPEC Secretariat had received updated compensation plans from Iraq, the United Arab Emirates (UAE), Kazakhstan, and Oman. A table accompanying this statement showed that these compensation plans amount to a total of 267,000 barrels per day in December 2025, 415,000 barrels per day in January 2026, 708,000 barrels per day in February, 710,000 barrels per day in March, 810,000 barrels per day in April, 831,000 barrels per day in May, and 829,000 barrels per day in June. According to the table, Iraq’s compensation plans amount to 120,000 barrels per day in both December 2025 and January 2026, 115,000 barrels per day in both February and March, 101,000 barrels per day in April, and 100,000 barrels per day in both May and June. Kazakhstan’s compensation plans come in at 131,000 barrels per day in December, 279,000 barrels per day in January, 569,000 barrels per day in both February and March, 650,000 barrels per day in April, and 669,000 barrels per day in both May and June, the table showed. The UAE’s compensation plans amount to 10,000 barrels per day in both December and January, 20,000 barrels per day in both February and April, 54,000 barrels per day in both March and May, and 55,000 barrels per day in June, according to the table, which showed that Oman’s compensation plans come in at 6,000 barrels per day in both December and January, 4,000 barrels per day in February, 6,000 barrels per day in March, 5,000 barrels per day in April, 8,000 barrels per day in May, and 5,000 barrels per day in June. “As agreed during the virtual meeting held by the eight countries with additional voluntary adjustments, including Saudi Arabia, Russia, Iraq, the United Arab Emirates, Kuwait, Kazakhstan, Algeria, and Oman on
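The per-country figures can be cross-checked against the stated monthly totals. A minimal sketch in Python, covering just the December 2025 and January 2026 columns of the table described in the statement:

```python
# Reconstructing two columns of the OPEC compensation table described above
# and checking the per-country figures against the stated monthly totals
# (all figures in thousand barrels per day).
plans = {
    "Iraq":       {"Dec 2025": 120, "Jan 2026": 120},
    "Kazakhstan": {"Dec 2025": 131, "Jan 2026": 279},
    "UAE":        {"Dec 2025": 10,  "Jan 2026": 10},
    "Oman":       {"Dec 2025": 6,   "Jan 2026": 6},
}

for month, stated_total in [("Dec 2025", 267), ("Jan 2026", 415)]:
    total = sum(country[month] for country in plans.values())
    assert total == stated_total
    print(f"{month}: {total},000 bpd (matches stated total)")
```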

Read More »

Iberdrola Completes Its Biggest Transmission Line in Brazil

Iberdrola SA said it has fully energized a 1,600-kilometer (994.19 miles) transmission line between the north of Minas Gerais state and São Paulo. With six substations and 3,250 towers, the Alto Paranaíba project is the biggest transmission project undertaken by Iberdrola, through its local subsidiary Neoenergia, in Brazil and one of the country’s biggest, the Spanish power and gas utility said in a press release. The project cost BRL 4.2 billion ($782.21 million), Iberdrola said. Alto Paranaíba was completed 15 months ahead of the schedule set by Brazil’s National Electric Energy Agency, according to Iberdrola. On November 24, 2025, Iberdrola launched a takeover bid for Neoenergia after which its stake would rise from 83.8 percent to 100 percent. This follows Iberdrola’s acquisition of a 30.29 percent stake in Neoenergia held by Banco do Brasil’s pension fund, PREVI. Iberdrola announced the completion of the transaction with PREVI on October 31, 2025. “Iberdrola offers the same price paid in the recent acquisition of Caixa de Previdência dos Funcionários do Banco do Brasil (PREVI) stake corresponding to 30.29 percent of the capital – 32.5 Brazilian reais per share – updated by the official Brazilian interest rate, called SELIC”, Iberdrola said in announcing the takeover bid. “A total disbursement (before its update according to the evolution of the SELIC rate and assuming that Neoenergia does not pay any intermediate dividend) of around EUR 1.03 billion is expected. “The transaction will simplify Neoenergia’s structure, providing its operations and financing with greater flexibility and reducing costs arising from maintaining the trading of shares on the market. “With this transaction, Iberdrola reaffirms its commitment to Brazil and to a growth model based on electricity grids, which account for 90 percent of Neoenergia’s business”. 
According to the November statement, Neoenergia had nearly 40 million electricity customers served through five distribution units in the states of Bahia, Brasilia, Mato Grosso

Read More »

TotalEnergies Gets New Block around Lebanon-Israel Border

TotalEnergies SE and its partners in Block 9 have signed an agreement with the Lebanese government to enter the adjacent Block 8 around the Israeli-Lebanese maritime border. “Although the drilling of the Qana well on Block 9 did not give positive results, we remain committed to pursue our exploration activities in Lebanon”, TotalEnergies chair and chief executive Patrick Pouyanné said in a statement on the company’s website. France’s TotalEnergies is to own 35 percent in Block 8 as operator. Italy’s state-backed Eni SpA would get 35 percent. QatarEnergy would hold 30 percent. “The consortium’s initial work program on Block 8 consists of the acquisition of a 1,200 km2 [463.32 square miles] 3D seismic survey, in order to further assess the area’s exploration potential”, TotalEnergies said. Block 8 sits about 70 kilometers (43.5 miles) off the southern coast of Lebanon in waters about 1,700-2,100 meters (5,577-6,890 feet) deep, according to a separate statement by QatarEnergy. About three years ago TotalEnergies contracted Transocean Ltd to start drilling in Block 9, as announced by the block operator on May 2, 2023. TotalEnergies launched exploration after Lebanon and Israel agreed to delineate their maritime border, around which Block 9 lies. Under the treaty brokered by the United States and signed by the Mediterranean neighbors in October 2022, Israel agreed not to develop hydrocarbon deposits in Block 9 in exchange for remuneration by developers. The treaty stated no Israeli or Lebanese corporation shall hold exploration and exploitation rights in Block 9. “The parties understand that there is a hydrocarbon prospect of currently unknown commercial viability that exists at least partially in the area the parties understand to be Lebanon’s Block 9, and at least partially in the area the parties understand to be Israel’s Block 72”, read the agreement, as shared by the websites of the Israeli parliament

Read More »

AI, edge, and security: Shaping the need for modern infrastructure management

The rapidly evolving IT landscape, driven by artificial intelligence (AI), edge computing, and rising security threats, presents unprecedented challenges in managing compute infrastructure. Traditional management tools struggle to provide the necessary scalability, visibility, and automation to keep up with business demand, leading to inefficiencies and increased business risk. Yet organizations need their IT departments to be strategic business partners that enable innovation and drive growth. To realize that goal, IT leaders should rethink the status quo and free up their teams’ time by adopting a unified approach to managing infrastructure that supports both traditional and AI workloads. It’s a strategy that enables companies to simplify IT operations and improve IT job satisfaction.

5 IT management challenges of the AI era

Cisco recently commissioned Forrester Consulting to conduct a Total Economic Impact™ analysis of Cisco Intersight. This IT operations platform provides visibility, control, and automation capabilities for the Cisco Unified Computing System (Cisco UCS), including Cisco converged, hyperconverged, and AI-ready infrastructure solutions across data centers, colocation facilities, and edge environments. Intersight uses a unified policy-driven approach to infrastructure management and integrates with leading operating systems, storage providers, hypervisors, and third-party IT service management and security tools. The Forrester study first uncovered the issues IT groups are facing:

Difficulty scaling: Manual, repetitive processes cause lengthy IT compute infrastructure build and deployment times. This challenge is particularly acute for organizations that need to evolve infrastructure to support traditional and AI workloads across data centers and distributed edge environments.

Architectural specialization and AI workloads: AI is altering infrastructure requirements, Forrester found.
Companies design systems to support specific AI workloads — such as data preparation, model training, and inferencing — and each demands specialized compute, storage, and networking capabilities. Some require custom chip sets and purpose-built infrastructure, such as for edge computing and low-latency applications.

Read More »

DCF Poll: Analyzing AI Data Center Growth

Coming out of 2025, AI data center development remains defined by momentum. But momentum is not the same as certainty. Behind the headlines, operators, investors, utilities, and policymakers are all testing the assumptions that carried projects forward over the past two years, from power availability and capital conditions to architecture choices and community response. Some will hold. Others may not. To open our 2026 industry polling, we’re taking a closer look at which pillars of AI data center growth are under the most pressure. What assumption about AI data center growth feels most fragile right now?

Read More »

JLL’s 2026 Global Data Center Outlook: Navigating the AI Supercycle, Power Scarcity and Structural Market Transformation

Sovereign AI and National Infrastructure Policy

JLL frames artificial intelligence infrastructure as an emerging national strategic asset, with sovereign AI initiatives representing an estimated $8 billion in cumulative capital expenditure by 2030. While modest relative to hyperscale investment totals, this segment carries outsized strategic importance. Data localization mandates, evolving AI regulation, and national security considerations are increasingly driving governments to prioritize domestic compute capacity, often with pricing premiums reaching as high as 60%. Examples cited across Europe, the Middle East, North America, and Asia underscore a consistent pattern: digital sovereignty is no longer an abstract policy goal, but a concrete driver of data center siting, ownership structures, and financing models. In practice, sovereign AI initiatives are accelerating demand for locally controlled infrastructure, influencing where capital is deployed and how assets are underwritten. For developers and investors, this shift introduces a distinct set of considerations. Sovereign projects tend to favor jurisdictional alignment, long-term tenancy, and enhanced security requirements, while also benefiting from regulatory tailwinds and, in some cases, direct state involvement. As AI capabilities become more tightly linked to economic competitiveness and national resilience, policy-driven demand is likely to remain a durable (if specialized) component of global data center growth.

Energy and Sustainability as the Central Constraint

Energy availability emerges as the report’s dominant structural constraint. In many major markets, average grid interconnection timelines now extend beyond four years, effectively decoupling data center development schedules from traditional utility planning cycles. 
As a result, operators are increasingly pursuing alternative energy strategies to maintain project momentum, including:

Behind-the-meter generation
Expanded use of natural gas, particularly in the United States
Private-wire renewable energy projects
Battery energy storage systems (BESS)

JLL points to declining battery costs, seen falling below $90 per kilowatt-hour in select deployments, as a meaningful enabler of grid flexibility, renewable firming, and

Read More »

SoftBank, DigitalBridge, and Stargate: The Next Phase of OpenAI’s Infrastructure Strategy

OpenAI framed Stargate as an AI infrastructure platform; a mechanism to secure long-duration, frontier-scale compute across both training and inference by coordinating capital, land, power, and supply chain with major partners. When OpenAI announced Stargate in January 2025, the headline commitment was explicit: an intention to invest up to $500 billion over four to five years to build new AI infrastructure in the U.S., with $100 billion targeted for near-term deployment. The strategic backdrop in 2025 was straightforward. OpenAI’s model roadmap — larger models, more agents, expanded multimodality, and rising enterprise workloads — was driving a compute curve increasingly difficult to satisfy through conventional cloud procurement alone. Stargate emerged as a form of “control plane” for:

Capacity ownership and priority access, rather than simply renting GPUs.
Power-first site selection, encompassing grid interconnects, generation, water access, and permitting.
A broader partner ecosystem beyond Microsoft, while still maintaining a working relationship with Microsoft for cloud capacity where appropriate.

2025 Progress: From Launch to Portfolio Buildout

January 2025: Stargate Launches as a National-Scale Initiative

OpenAI publicly launched Project Stargate on Jan. 21, 2025, positioning it as a national-scale AI infrastructure initiative. At this early stage, the work was less about construction and more about establishing governance, aligning partners, and shaping a public narrative in which compute was framed as “industrial policy meets real estate meets energy,” rather than simply an exercise in buying more GPUs.

July 2025: Oracle Partnership Anchors a 4.5-GW Capacity Step

On July 22, 2025, OpenAI announced that Stargate had advanced through a partnership with Oracle to develop 4.5 gigawatts of additional U.S. data center capacity. The scale of the commitment marked a clear transition from conceptual ambition to site- and megawatt-level planning. 
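For a sense of scale, a back-of-envelope sketch (the comparison figure is an assumption of ours, not from OpenAI’s announcement): 4.5 GW running continuously corresponds to roughly 39 TWh per year, on the order of 1% of an assumed 4,000 TWh of annual U.S. electricity consumption.

```python
# Back-of-envelope: annual energy implied by 4.5 GW of continuous data
# center load, compared against an assumed ~4,000 TWh/year of total
# U.S. electricity consumption (a round figure, not from the article).
capacity_gw = 4.5
hours_per_year = 8760
annual_twh = capacity_gw * hours_per_year / 1000  # GWh -> TWh
us_total_twh = 4000  # assumed round figure

print(f"{annual_twh:.1f} TWh/year "
      f"(~{annual_twh / us_total_twh:.1%} of assumed U.S. consumption)")
```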
A figure of this magnitude reshaped the narrative. At 4.5 GW, Stargate forced alignment across transformers, transmission upgrades, switchgear, long-lead cooling

Read More »

Lenovo unveils purpose-built AI inferencing servers

There is also the Lenovo ThinkSystem SR650i, which offers high-density GPU computing power for faster AI inference and is intended for easy installation in existing data centers to work with existing systems. Finally, there is the Lenovo ThinkEdge SE455i for smaller edge locations such as retail outlets, telecom sites, and industrial facilities. Its compact design allows for low-latency AI inference close to where data is generated, and it is rugged enough to operate in temperatures ranging from -5°C to 55°C. All of the servers include Lenovo’s Neptune air- and liquid-cooling technology and are available through the TruScale pay-as-you-go pricing model. In addition to the new hardware, Lenovo introduced new AI Advisory Services with AI Factory Integration, which provide access to professionals who identify, deploy, and manage best-fit AI inferencing servers. It also launched Premier Support Plus, a service that provides professional assistance with data center management, freeing up IT resources for more important projects.

Read More »

Samsung warns of memory shortages driving industry-wide price surge in 2026

SK Hynix reported during its October earnings call that its HBM, DRAM, and NAND capacity is “essentially sold out” for 2026, while Micron recently exited the consumer memory market entirely to focus on enterprise and AI customers.

Enterprise hardware costs surge

The supply constraints have translated directly into sharp price increases across enterprise hardware. Samsung raised prices for 32GB DDR5 modules to $239 from $149 in September, a 60% increase, while contract pricing for DDR5 has surged more than 100%, reaching $19.50 per unit compared to around $7 earlier in 2025. DRAM prices have already risen approximately 50% year to date and are expected to climb another 30% in Q4 2025, followed by an additional 20% in early 2026, according to Counterpoint Research. The firm projected that DDR5 64GB RDIMM modules, widely used in enterprise data centers, could cost twice as much by the end of 2026 as they did in early 2025. Gartner forecast DRAM prices to increase by 47% in 2026 due to significant undersupply in both traditional and legacy DRAM markets, Chauhan said.

Procurement leverage shifts to hyperscalers

The pricing pressures and supply constraints are reshaping the power dynamics in enterprise procurement. For enterprise procurement, supplier size no longer guarantees stability. “As supply becomes more contested in 2026, procurement leverage will hinge less on volume and more on strategic alignment,” Rawat said. Hyperscale cloud providers secure supply through long-term commitments, capacity reservations, and direct fab investments, obtaining lower costs and assured availability. Mid-market firms rely on shorter contracts and spot sourcing, competing for residual capacity after large buyers claim priority supply.
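The percentage moves quoted above follow directly from the prices cited; a quick arithmetic check:

```python
# Sanity-checking the price moves quoted above.
def pct_increase(old: float, new: float) -> float:
    """Percent increase from an old price to a new price."""
    return (new - old) / old * 100

# Samsung 32GB DDR5 module: $149 -> $239 (reported as a 60% increase)
print(f"32GB DDR5 module: +{pct_increase(149, 239):.0f}%")
# DDR5 contract pricing: ~$7 -> $19.50 (reported as "more than 100%")
print(f"DDR5 contract price: +{pct_increase(7, 19.50):.0f}%")
```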

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
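The “LLM as a judge” idea mentioned above can be sketched in a few lines. This is an illustrative outline only: `call_model` is a stub standing in for a real model API, and the model names and 1-10 scoring scale are assumptions, not any vendor’s SDK.

```python
# Illustrative sketch of the "LLM as a judge" pattern: one model produces
# an answer, and several (possibly cheaper) models score it. `call_model`
# is a stub; a real system would call a model provider's API.
def call_model(model: str, prompt: str) -> str:
    canned = {
        "worker": "Paris is the capital of France.",
        "judge-a": "9",
        "judge-b": "8",
        "judge-c": "9",
    }
    return canned[model]

def judge_answer(question: str, answer: str, judges: list[str]) -> float:
    """Average a 1-10 quality score across several judge models."""
    prompt = (f"Rate this answer from 1 to 10.\n"
              f"Question: {question}\nAnswer: {answer}\nScore:")
    scores = [int(call_model(j, prompt)) for j in judges]
    return sum(scores) / len(scores)

question = "What is the capital of France?"
answer = call_model("worker", question)
score = judge_answer(question, answer, ["judge-a", "judge-b", "judge-c"])
print(f"Average judge score: {score:.2f}")
```

Averaging across three or more judges, as the article suggests, smooths out any single model’s scoring quirks.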

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which had all released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
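The automated framework’s core idea, rewarding attacks that are both successful and novel, can be caricatured in a short loop. To be clear, this is a toy sketch, not OpenAI’s method: the paper uses multi-step reinforcement learning with auto-generated rewards, while this sketch merely samples candidates and keeps the ones the reward favors, and every function here is a stub.

```python
import random

# Toy sketch of automated red teaming with a success-plus-novelty reward.
# All functions are stubs standing in for real models and classifiers.
random.seed(0)

def generate_attack() -> str:
    # Stub: a real system would sample from an attacker model.
    return f"attack-variant-{random.randint(0, 9)}"

def attack_succeeds(attack: str) -> bool:
    # Stub: a real system would run the attack against the target model
    # and classify the response.
    return attack.endswith(("3", "7"))

def novelty(attack: str, found: list[str]) -> float:
    # Reward unseen attacks (real systems use embedding distance
    # rather than exact string matching).
    return 0.0 if attack in found else 1.0

found: list[str] = []
for _ in range(50):
    candidate = generate_attack()
    reward = (1.0 if attack_succeeds(candidate) else 0.0) + novelty(candidate, found)
    if reward > 1.0:  # keep only attacks that are successful AND novel
        found.append(candidate)

print(f"Distinct successful attacks found: {sorted(found)}")
```

The novelty term is what pushes the search toward a broad spectrum of attacks rather than rediscovering the same exploit repeatedly.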
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »