Your boss is watching

A full day’s work for Dora Manriquez, who drives for Uber and Lyft in the San Francisco Bay Area, includes waiting in her car for a two-digit number to appear. The apps keep sending her rides that are too cheap to pay for her time—$4 or $7 for a trip across San Francisco, $16 for a trip from the airport for which the customer is charged $100. But Manriquez can’t wait too long to accept a ride, because her acceptance rate contributes to her driving score for both companies, which can then affect the benefits and discounts she has access to. 

The systems are black boxes, and Manriquez can’t know for sure which data points affect the offers she receives or how. But what she does know is that she’s driven for ride-share companies for the last nine years, and this year, having found herself unable to score enough better-paying rides, she has to file for bankruptcy.

Every action Manriquez takes—or doesn’t take—is logged by the apps she must use to work for these companies. (An Uber spokesperson told MIT Technology Review that acceptance rates don’t affect drivers’ fares. Lyft did not return a request for comment on the record.) But app-based employers aren’t the only ones keeping a very close eye on workers today.

A study conducted in 2021, when the covid-19 pandemic had greatly increased the number of people working from home, revealed that almost 80% of companies surveyed were monitoring their remote or hybrid workers. A New York Times investigation in 2022 found that eight of the 10 largest private companies in the US track individual worker productivity metrics, many in real time. Specialized software can now measure and log workers’ online activities, physical location, and even behaviors like which keys they tap and what tone they use in their written communications—and many workers aren’t even aware that this is happening.

What’s more, required work apps on personal devices may have access to more than just work—and as we may know from our private lives, most technology can become surveillance technology if the wrong people have access to the data. While there are some laws in this area, those that protect privacy for workers are fewer and patchier than those applying to consumers. Meanwhile, it’s predicted that the global market for employee monitoring software will reach $4.5 billion by 2026, with North America claiming the dominant share.

Working today—whether in an office, a warehouse, or your car—can mean constant electronic surveillance with little transparency, and potentially with livelihood-ending consequences if your productivity flags. What matters even more than the effects of this ubiquitous monitoring on privacy may be how all that data is shifting the relationships between workers and managers, companies and their workforce. Managers and management consultants are using worker data, individually and in the aggregate, to create black-box algorithms that determine hiring and firing, promotion and “deactivation.” And this is laying the groundwork for the automation of tasks and even whole categories of labor on an endless escalator to optimized productivity. Some human workers are already struggling to keep up with robotic ideals.

We are in the midst of a shift in work and workplace relationships as significant as the Second Industrial Revolution of the late 19th and early 20th centuries. And new policies and protections may be necessary to correct the balance of power.

Data as power

Data has been part of the story of paid work and power since the late 19th century, when manufacturing was booming in the US and a rise in immigration meant cheap and plentiful labor. The mechanical engineer Frederick Winslow Taylor, who would become one of the first management consultants, created a strategy called “scientific management” to optimize production by tracking and setting standards for worker performance.

Soon after, Henry Ford broke down the auto manufacturing process into mechanized steps to minimize the role of individual skill and maximize the number of cars that could be produced each day. But the transformation of workers into numbers has a longer history. Some researchers see a direct line between Taylor’s and Ford’s unrelenting focus on efficiency and the dehumanizing labor optimization practices carried out on slave-owning plantations. 

As manufacturers adopted Taylorism and its successors, time was replaced by productivity as the measure of work, and the power divide between owners and workers in the United States widened. But other developments soon helped rebalance the scales. In 1914, Section 6 of the Clayton Act established the federal legal right for workers to unionize and stated that “the labor of a human being is not a commodity.” In the years that followed, union membership grew, and the 40-hour work week and the minimum wage were written into US law. Though the nature of work had changed with revolutions in technology and management strategy, new frameworks and guardrails stood up to meet that change.

More than a hundred years after Taylor published his seminal book, The Principles of Scientific Management, “efficiency” is still a business buzzword, and technological developments, including new uses of data, have brought work to another turning point. But the federal minimum wage and other worker protections haven’t kept up, leaving the power divide even starker. In 2023, CEO pay was 290 times average worker pay, a disparity that’s increased more than 1,000% since 1978. Data may play the same kind of intermediary role in the boss-worker relationship that it has since the turn of the 20th century, but the scale has exploded. And the stakes can be a matter of physical health.

A humanoid robot with folded arms looms over human workers at an Amazon warehouse.

In 2024, a report from a Senate committee led by Bernie Sanders, based on an 18-month investigation of Amazon’s warehouse practices, found that the company had been setting the pace of work in those facilities with black-box algorithms, presumably calibrated with data collected by monitoring employees. (In California, because of a 2021 bill, Amazon is required to at least reveal the quotas and standards workers are expected to comply with; elsewhere the bar can remain a mystery to the very people struggling to meet it.) The report also found that in each of the previous seven years, Amazon workers had been almost twice as likely to be injured as other warehouse workers, with injuries ranging from concussions to torn rotator cuffs to long-term back pain.

An internal team tasked with evaluating Amazon warehouse safety found that letting robots set the pace for human labor was correlated with subsequent injuries.

The Sanders report found that between 2020 and 2022, two internal Amazon teams tasked with evaluating warehouse safety recommended reducing the required pace of work and giving workers more time off; another found that letting robots set the pace for human labor was correlated with subsequent injuries. The company rejected all the recommendations for technical or productivity reasons. But the report goes on to reveal that in 2022, yet another team at Amazon, called Core AI, also evaluated warehouse safety and concluded that unrealistic pacing wasn’t the reason all those workers were getting hurt on the job. Core AI said that the cause, instead, was workers’ “frailty” and “intrinsic likelihood of injury.” The issue was the limitations of the human bodies the company was measuring, not the pressures it was subjecting those bodies to. Amazon stood by this reasoning during the congressional investigation.

Amazon spokesperson Maureen Lynch Vogel told MIT Technology Review that the Sanders report is “wrong on the facts” and that the company continues to reduce incident rates for accidents. “The facts are,” she said, “our expectations for our employees are safe and reasonable—and that was validated both by a judge in Washington after a thorough hearing and by the state’s Board of Industrial Insurance Appeals.”

A study conducted in 2021 revealed that almost 80% of companies surveyed were monitoring their remote or hybrid workers.

Yet this line of thinking is hardly unique to Amazon, although the company could be seen as a pioneer in the datafication of work. (An investigation found that over one year between 2017 and 2018, the company fired hundreds of workers at a single facility—by means of automatically generated letters—for not meeting productivity quotas.) An AI startup recently placed a series of billboards and bus signs in the Bay Area touting the benefits of its automated sales agents, which it calls “Artisans,” over human workers. “Artisans won’t complain about work-life balance,” one said. “Artisans won’t come into work hungover,” claimed another. “Stop hiring humans,” one hammered home.

The startup’s leadership took to the company blog to say that the marketing campaign was intentionally provocative and that Artisan believes in the potential of human labor. But the company also asserted that using one of its AI agents costs 96% less than hiring a human to do the same job. The campaign hit a nerve: When data is king, humans—whether warehouse laborers or knowledge workers—may not be able to outperform machines.

AI management and managing AI

Companies that use electronic employee monitoring report that they are most often looking to the technologies not only to increase productivity but also to manage risk. And software like Teramind offers tools and analysis to help with both priorities. While Teramind, a globally distributed company, keeps its list of over 10,000 client companies private, it provides resources for the financial, health-care, and customer service industries, among others—some of which have strict compliance requirements that can be tricky to keep on top of. The platform allows clients to set data-driven standards for productivity, establish thresholds for alerts about toxic communication tone or language, create tracking systems for sensitive file sharing, and more. 
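
The mechanics behind this kind of alerting are often simpler than the products' marketing suggests: threshold checks run over logged activity. The sketch below is purely illustrative and is not Teramind's actual implementation (whose internals are not public); the thresholds, the `ActivityLog` record, and the keyword stand-in for a tone classifier are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical admin-configured thresholds, not any vendor's real defaults.
PRODUCTIVITY_FLOOR = 0.6           # minimum fraction of tracked time in "productive" apps
BLOCKED_TERMS = {"idiot", "useless"}  # crude stand-in for a tone/language model

@dataclass
class ActivityLog:
    """One worker's monitored activity for a day (hypothetical schema)."""
    worker_id: str
    productive_seconds: int
    tracked_seconds: int
    messages: list = field(default_factory=list)

def generate_alerts(log: ActivityLog) -> list:
    """Return human-readable alerts when a log crosses a configured threshold."""
    alerts = []
    if log.tracked_seconds > 0:
        ratio = log.productive_seconds / log.tracked_seconds
        if ratio < PRODUCTIVITY_FLOOR:
            alerts.append(f"{log.worker_id}: productivity {ratio:.0%} below floor")
    flagged = [m for m in log.messages
               if any(term in m.lower() for term in BLOCKED_TERMS)]
    if flagged:
        alerts.append(f"{log.worker_id}: {len(flagged)} message(s) flagged for tone")
    return alerts

# Example: 2.5 productive hours out of an 8-hour tracked day, one flagged message.
log = ActivityLog("w-042", productive_seconds=9_000, tracked_seconds=28_800,
                  messages=["This plan is useless."])
print(generate_alerts(log))
```

The point of the sketch is how little judgment is involved: once a manager sets the numbers, every alert downstream looks objective, even though the floor itself and the blocked-terms list are policy choices.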

Illustration by Michael Byers: a person lying on the sidewalk next to a bus sign.

Electronic monitoring and management are also changing existing job functions in real time. Teramind’s clients must figure out who at their company will handle and make decisions around employee data. Depending on the type of company and its needs, Osipova says, that could be HR, IT, the executive team, or another group entirely—and the definitions of those roles will change with these new responsibilities. 

Workers’ tasks, too, can shift with updated technology, sometimes without warning. In 2020, when a major hospital network piloted the use of robots to clean rooms and deliver food to patients, Criscitiello heard from SEIU-UHW members that they were confused about how to work alongside the machines. Workers certainly hadn’t received any training for that. “It’s not ‘We’re being replaced by robots,’” says Criscitiello. “It’s ‘Am I going to be responsible if somebody has a medical event because the wrong tray was delivered? I’m supervising the robot—it’s on my floor.’”

A New York Times investigation in 2022 found that eight of the 10 largest US private companies track individual worker productivity metrics, often in real time.

Nurses are also seeing their jobs expand to include technology management. Carmen Comsti of National Nurses United, the largest nurses’ union in the country, says that while management isn’t explicitly saying nurses will be disciplined for errors that occur as algorithmic tools like AI transcription systems or patient triaging mechanisms are integrated into their workflows, that’s functionally how it works. “If a monitor goes off and the nurse follows the algorithm and it’s incorrect, the nurse is going to get blamed for it,” Comsti says. Nurses and their unions don’t have access to the inner workings of the algorithms, so it’s impossible to say what data these or other tools have been trained on, or whether the data on how nurses work today will be used to train future algorithmic tools. What it means to be a worker, manager, or even colleague is on shifting ground, and frontline workers don’t have insight into which way it’ll move next.

The state of the law and the path to protection

Today, there isn’t much regulation on how companies can gather and use workers’ data. While the General Data Protection Regulation (GDPR) offers some worker protections in Europe, no US federal laws consistently shield workers’ privacy from electronic monitoring or establish firm guardrails for the implementation of algorithm-driven management strategies that draw on the resulting data. (The Electronic Communications Privacy Act allows employers to monitor employees if there is a legitimate business reason or if the employee has already given consent through a contract; tracking productivity can qualify as a legitimate business reason.)

But in late 2024, the Consumer Financial Protection Bureau did issue guidance warning companies using algorithmic scores or surveillance-based reports that they must follow the Fair Credit Reporting Act—which previously applied only to consumers—by getting workers’ consent and offering transparency into what data was being collected and how it would be used. And the Biden administration’s Blueprint for an AI Bill of Rights had suggested that the enumerated rights should apply in employment contexts. But none of these are laws.

So far, binding regulation is being introduced state by state. In 2023, the California Consumer Privacy Act (CCPA) was officially extended to include workers and not just consumers in its protections, even though workers had been specifically excluded when the act was first passed. That means California workers now have the right to know what data is being collected about them and for what purpose, and they can ask to correct or delete that data. Other states are working on their own measures. But with any law or guidance, whether at the federal or state level, the reality comes down to enforcement. Criscitiello says SEIU is testing out the new CCPA protections. 

“It’s too early to tell, but my conclusion so far is that the onus is on the workers,” she says. “Unions are trying to fill this function, but there’s no organic way for a frontline worker to know how to opt out [of data collection], or how to request data about what’s being collected by their employer. There’s an education gap about that.” And while CCPA covers the privacy aspect of electronic monitoring, it says nothing about how employers can use any collected data for management purposes.

The push for new protections and guardrails is coming in large part from organized labor. Unions like National Nurses United and SEIU are working with legislators to create policies on workers’ rights in the face of algorithmic management. And app-based advocacy groups have been pushing for new minimum pay rates and against wage theft—and winning. There are other successes to be counted already, too. One has to do with electronic visit verification (EVV), a system that records information about in-home visits by health-care providers. The 21st Century Cures Act, signed into law in 2016, required all states to set up such systems for Medicaid-funded home health care. The intent was to create accountability and transparency to better serve patients, but some health-care workers in California were concerned that the monitoring would be invasive and disruptive for them and the people in their care.

Brandi Wolf, the statewide policy and research director for SEIU’s long-term-care workers, says that in collaboration with disability rights and patient advocacy groups, the union was able to get language into legislation passed in the 2017–2018 term that would take effect the next fiscal year. It indicated to the federal government that California would be complying with the requirement, but that EVV would serve mainly a timekeeping function, not a management or disciplinary one.

Today advocates say that individual efforts to push back against or evade electronic monitoring are not enough; the technology is too widespread and the stakes too high. The power imbalances and lack of transparency affect workers across industries and sectors—from contract drivers to unionized hospital staff to well-compensated knowledge workers. What’s at issue, says Minsu Longiaru, a senior staff attorney at PowerSwitch Action, a network of grassroots labor organizations, is our country’s “moral economy of work”—that is, an economy based on human values and not just capital. Longiaru believes there’s an urgent need for a wave of socially protective policies on the scale of those that emerged out of the labor movement in the early 20th century. “We’re at a crucial moment right now where as a society, we need to draw red lines in the sand where we can clearly say just because we can do something technological doesn’t mean that we should do it,” she says. 

Like so many technological advances that have come before, electronic monitoring and the algorithmic uses of the resulting data are not changing the way we work on their own. The people in power are flipping those switches. And shifting the balance back toward workers may be the key to protecting their dignity and agency as the technology speeds ahead. “When we talk about these data issues, we’re not just talking about technology,” says Longiaru. “We spend most of our lives in the workplace. This is about our human rights.” 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.


Read More »

VEN Plans to Grant More Oil Blocks to Chevron and Repsol

Venezuela plans to grant more oil-production land to Chevron Corp. and Spain’s Repsol SA as the Trump administration pushes for private companies to rebuild the nation’s energy sector, according to people with knowledge of the matter. Officials in Caracas are poised to award the exploration and production blocks as soon as this week, the people said. Giving US and European companies more access to Venezuela’s oil-rich territory is a key piece of US President Donald Trump’s push to revive the nation’s dilapidated energy sector while eroding China and Russia’s local influence.  On Thursday, US Energy Secretary Chris Wright toured a project operated by Chevron in Venezuela’s Orinoco oil belt and told reporters that the opportunity for cooperation between the US and the South American nation is immense following the capture of former Venezuela President Nicolás Maduro.  In an interview with Bloomberg TV, Wright said the US would release additional licenses “soon,” with companies like Chevron seeing benefits from an increase of as much as 30% in production in the next 18 to 24 months.  “Chevron is being enabled to massively grow their business here. They’re the largest producer in Venezuela today, and they’re going to be able to both expand the reserves they have and expand their operations,” Wright said. “They’re just one of many, but they’re going to be a big one,” he added. Repsol declined to comment. Chevron didn’t immediately respond to a request for comment. The Trump administration is expected to issue a general license to allow international oil companies to explore and produce in Venezuela without violating US sanctions, Bloomberg reported earlier this month. It would be the latest in a string of authorizations from the Treasury Department to open up the nation’s oil sector since US forces captured Venezuela’s former President Nicolás Maduro on Jan.

Read More »

Oil Posts Second Straight Weekly Drop

Oil notched its first back-to-back weekly drop this year as traders weighed the prospect of expanded OPEC+ supplies against US-Iran nuclear talks and recent weakness in wider markets. West Texas Intermediate fell 1% for the week and ended the day little changed on Friday. President Donald Trump said the US deployed an additional aircraft carrier to the Middle East in case a nuclear deal is not reached with Iran. “If we don’t have a deal, we’ll need it,” Trump said at the White House. He added he thinks negotiations will ultimately be successful. Traders have been watching for any uptick in tensions between Washington and Tehran that could pose a threat to supply from the Middle East. The commodity was down earlier as OPEC+ members see scope for output increases to resume in April, believing concerns about a glut are overblown, delegates said. The group has not yet committed to any course of action or begun formal discussions for a March 1 meeting, they added. A second weekly decline in the futures market stands to snap a long run of gains for early 2026, when recurrent bouts of geopolitical tension including the US stand-off with Iran supported oil prices. At an energy conference in London this week, attendees flagged that they expect worldwide supplies to top demand this year, potentially feeding into higher inventories in the Atlantic basin, the region where global prices are set. Still, a pile-up of sanctioned oil coupled with supply disruptions in various nations has limited the impact thus far. Trading may be thinner ahead of the Presidents’ Day holiday in the US, contributing to exaggerated price swings. Oil Prices: WTI for March delivery settled up 0.1% at $62.89 a barrel in New York. Brent for April settlement edged 0.3% higher to $67.75 a barrel. What
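As a rough sketch of what the percentage moves above imply, the prior-week close can be backed out of the Friday settle (our own back-of-the-envelope arithmetic, not figures from the article):

```python
# Back out the prior-week close implied by the weekly move quoted above.
wti_friday_settle = 62.89   # March WTI settle on Friday, USD/bbl
weekly_change = -0.01       # "fell 1% for the week"

implied_prior_close = wti_friday_settle / (1 + weekly_change)
print(f"Implied WTI close a week earlier: ${implied_prior_close:.2f}")  # about $63.53
```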

Read More »

Reliance Gets USA License to Directly Buy VEN Crude

Indian refiner Reliance Industries Ltd. has received a general license from the US government that will allow it to purchase Venezuelan oil directly, according to a person familiar with the matter.  Reliance, owned by billionaire Mukesh Ambani, applied for the permit last month and received it from the Treasury Department a few days ago, the person said, asking not to be named as the matter is not public. The move comes immediately on the heels of a trade deal with the US that slashes punitive tariffs for Indian exports but demands that the country stop importing discounted Russian oil. The Indian government has asked state-owned refiners to consider buying more Venezuelan crude, as well as oil from the US.  Venezuela is unlikely to produce large volumes of crude anytime soon, but even limited supplies provide a fallback option for India’s largest refiner. The US — which has stepped up involvement in Venezuela’s oil sector after capturing the country’s president last month — has been considering general licenses to permit purchases, trading and investment in a sprawling but threadbare industry. Reliance is the first Indian refiner to receive clearance in the current push.  Reliance has historically been an important consumer of the country’s heavy crude, having struck a term deal to secure as much as 400,000 barrels a day from Petroleos de Venezuela SA in 2012. It is among only a handful of refiners in India that have the capacity to process the high-viscosity, sour oil, which is difficult to extract and refine without diluent.  The Indian refining giant took about 25% of Venezuela’s exports in 2019, before its term deal was suspended that year due to US sanctions. It last received a general license in 2024 and took crude until that license expired last year; the license was not renewed. Reuters first reported the issuance of

Read More »

Arista laments ‘horrendous’ memory situation

Digging in on campus: Arista has been clear about its plans to grow its presence in campus networking environments. Last fall, Ullal said she expects Arista’s campus and WAN business would grow from the current $750 million-$800 million run rate to $1.25 billion, representing a 60% growth opportunity for the company. “We are committed to our aggressive goal of $1.25 billion for ’26 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core spine and peering use cases,” Ullal said. “In Q4 2025, Arista launched our flagship 7800 R4 spine for many routing use cases, including DCI, AI spines with that massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue.” Ethernet leads the way: “In terms of annual 2025 product lines, our core cloud, AI and data center products built upon our highly differentiated Arista EOS stack is successfully deployed across 10 gig to 800 gigabit Ethernet speeds with 1.6 terabit migration imminent,” Ullal said. “This includes our portfolio of EtherLink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation, agility for both the front and back-end compute, storage and all of the interconnect zones.” Ullal said she expects Ethernet will get even more of a boost later this year when the multivendor Ethernet for Scale-Up Networking (ESUN) specification is released.  “We have consistently described that today’s configurations are mostly a combination of scale out and scale up were largely based on 800G and smaller ratings. Now that the ESUN specification is well underway, we need a good solid spec. Otherwise, we’ll be shipping proprietary products like some people in the world do today. And so we will tie our

Read More »

From NIMBY to YIMBY: A Playbook for Data Center Community Acceptance

Across many conversations at the start of this year, at PTC and other conferences alike, the word on everyone’s lips seems to be “community.” For the data center industry, that single word now captures a turning point from just a few short years ago: we are no longer a niche, back‑of‑house utility, but a front‑page presence in local politics, school board budgets, and town hall debates. That visibility is forcing a choice in how we tell our story—either accept a permanent NIMBY-reactive framework, or actively build a YIMBY narrative that portrays the real value digital infrastructure brings to the markets and surrounding communities that host it. From regular conversations with Ilissa Miller, CEO of iMiller Public Relations, about this topic, it is clear there is work to be done across the ecosystem to build communications. Miller recently reflected: “What we’re seeing in communities isn’t a rejection of digital infrastructure, it’s a rejection of uncertainty driven by anxiety and fear. Most local leaders have never been given a framework to evaluate digital infrastructure developments the way they evaluate roads, water systems, or industrial parks. When there’s no shared planning language, ‘no’ becomes the safest answer.” A Brief History of “No”: Community pushback against data centers is no longer episodic; it has become organized, media‑savvy, and politically influential in key markets. In Northern Virginia, resident groups and environmental organizations have mobilized against large‑scale campuses, pressing counties like Loudoun and Prince William to tighten zoning, question incentives, and delay or reshape projects. Loudoun County’s move in 2025 to end by‑right approvals for new facilities, requiring public hearings and board votes, marked a watershed moment as the world’s densest data center market signaled that communities now expect more say over where and how these campuses are built. Prince William County’s decision to sharply increase its tax rate on

Read More »

Nomads at the Frontier: PTC 2026 Signals the Digital Infrastructure Industry’s Moment of Execution

Each January, the Pacific Telecommunications Council conference serves as a barometer for where digital infrastructure is headed next. And according to Nomad Futurist founders Nabeel Mahmood and Phillip Koblence, the message from PTC 2026 was unmistakable: The industry has moved beyond hype. The hard work has begun. In the latest episode of The DCF Show Podcast, part of our ongoing ‘Nomads at the Frontier’ series, Mahmood and Koblence joined Data Center Frontier to unpack the tone shift emerging across the AI and data center ecosystem. Attendance continues to grow year over year. Conversations remain energetic. But the character of those conversations has changed. As Mahmood put it: “The hype that the market started to see is actually resulting a bit more into actions now, and those conversations are resulting into some good progress.” The difference from prior years? Less speculation. More execution. From Data Center Cowboys to Real Deployments: Koblence offered perhaps the sharpest contrast between PTC conversations in 2024 and those in 2026. Two years ago, many projects felt speculative. Today, developers are arriving with secured power, customers, and construction underway. “If 2024’s PTC was data center cowboys — sites that in someone’s mind could be a data center — this year was: show me the money, show me the power, give me accurate timelines.” In other words, the market is no longer rewarding hypothetical capacity. It is demanding delivered capacity. Operators now speak in terms of deployments already underway, not aspirational campuses still waiting on permits and power commitments. And behind nearly every conversation sits the same gating factor. Power. Power Has Become the Industry’s Defining Constraint: Whether discussions centered on AI factories, investment capital, or campus expansion, Mahmood and Koblence noted that every conversation eventually returned to energy availability. “All of those questions are power,” Koblence said.

Read More »

Cooling Consolidation Hits AI Scale: LiquidStack, Submer, and the Future of Data Center Thermal Strategy

As AI infrastructure scales toward ever-higher rack densities and gigawatt-class campuses, cooling has moved from a technical subsystem to a defining strategic issue for the data center industry. A trio of announcements in early February highlights how rapidly the cooling and AI infrastructure stack is consolidating and evolving: Trane Technologies’ acquisition of LiquidStack; Submer’s acquisition of Radian Arc, extending its reach from core data centers into telco edge environments; and Submer’s partnership with Anant Raj to accelerate sovereign AI infrastructure deployment across India. Layered atop these developments is fresh guidance from Oracle Cloud Infrastructure explaining why closed-loop, direct-to-chip cooling is becoming central to next-generation facility design, particularly in regions where water use has become a flashpoint in community discussions around data center growth. Taken together, these developments show how the industry is moving beyond point solutions toward integrated, scalable AI infrastructure ecosystems, where cooling, compute, and deployment models must work together across hyperscale campuses and distributed edge environments alike. Trane Moves to Own the Cooling Stack: The most consequential development comes from Trane Technologies, which on February 10 announced it has entered into a definitive agreement to acquire LiquidStack, one of the pioneers and leading innovators in data center liquid cooling. The acquisition significantly strengthens Trane’s ambition to become a full-service thermal partner for data center operators, extending its reach from plant-level systems all the way down to the chip itself. LiquidStack, headquartered in Carrollton, Texas, built its reputation on immersion cooling and advanced direct-to-chip liquid solutions supporting high-density deployments across hyperscale, enterprise, colocation, edge, and blockchain environments.
Under Trane, those technologies will now be scaled globally and integrated into a broader thermal portfolio. In practical terms, Trane is positioning itself to deliver cooling across the full thermal chain, including:
• Central plant equipment and chillers
• Heat rejection and controls

Read More »

Infrastructure Maturity Defines the Next Phase of AI Deployment

The State of Data Infrastructure Global Report 2025 from Hitachi Vantara arrives at a moment when the data center industry is undergoing one of the most profound structural shifts in its history. The transition from enterprise IT to AI-first infrastructure has moved from aspiration to inevitability, forcing operators, developers, and investors to confront uncomfortable truths about readiness, resilience, and risk. Although framed around “AI readiness,” the report ultimately tells an infrastructure story: one that maps directly onto how data centers are designed, operated, secured, and justified economically. Drawing on a global survey of more than 1,200 IT leaders, the report introduces a proprietary maturity model that evaluates organizations across six dimensions: scalability, reliability, security, governance, sovereignty, and sustainability. Respondents are then grouped into three categories—Emerging, Defined, and Optimized—revealing a stark conclusion: most organizations are not constrained by access to AI models or capital, but by the fragility of the infrastructure supporting their data pipelines. For the data center industry, the implications are immediate, shaping everything from availability design and automation strategies to sustainability planning and evolving customer expectations. In short, extracting value from AI now depends less on experimentation and more on the strength and resilience of the underlying infrastructure. The Focus of the Survey: Infrastructure, Not Algorithms. Although the report is positioned as a study of AI readiness, its primary focus is not models, training approaches, or application development, but rather the infrastructure foundations required to operate AI reliably at scale.
Drawing on responses from more than 1,200 organizations, Hitachi Vantara evaluates how enterprises are positioned to support production AI workloads across six dimensions as stated above: scalability, reliability, security, governance, sovereignty, and sustainability. These factors closely reflect the operational realities shaping modern data center design and management. The survey’s central argument is that AI success is no longer

Read More »

AI’s New Land Grab: Meta’s Indiana Megaproject and the Rise of Europe’s Neocloud Challengers

While Meta’s Indiana campus anchors hyperscale expansion in the United States, Europe recorded its own major infrastructure milestone this week as Amsterdam-based AI infrastructure provider Nebius unveiled plans for a 240-megawatt data center campus in Béthune, France, near Lille in the country’s northern industrial corridor. When completed, the campus will rank among Europe’s largest AI-focused data center facilities and will position northern France as a growing node in the continent’s expanding AI infrastructure map. The development repurposes a former Bridgestone tire manufacturing site, reflecting a broader trend across Europe in which legacy industrial properties, already equipped with heavy power access, transport links, and industrial zoning, are being converted into large-scale digital infrastructure hubs. Located within reach of connectivity and enterprise corridors linking Paris, Brussels, London, and Amsterdam, the site allows Nebius to serve major European markets while avoiding the congestion and power constraints increasingly shaping Tier 1 data center hubs. Industrial Infrastructure Becomes Digital Infrastructure: Developers increasingly view former industrial sites as ideal for AI campuses because they often provide:
• Existing grid interconnection capacity built for heavy industry
• Transport and logistics infrastructure already in place
• Industrial zoning that reduces permitting friction
• Large contiguous parcels suited to phased campus expansion
For regions like Hauts-de-France, redevelopment projects also offer economic transition opportunities, replacing legacy manufacturing capacity with next-generation digital infrastructure investment. Local officials have positioned the project as part of broader efforts to reposition northern France as a logistics and technology hub within Europe.
The Neocloud Model Gains Ground: Beyond the site itself, Nebius’ expansion illustrates the rapid emergence of neocloud infrastructure providers, companies building GPU-intensive AI capacity without operating full hyperscale cloud ecosystems. These firms increasingly occupy a strategic middle ground: supplying AI compute capacity to enterprises, startups, and even hyperscalers facing short-term infrastructure constraints. Nebius’ rise over the past year

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
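To put those figures in proportion, a quick back-of-the-envelope calculation (ours, using only the numbers quoted above) shows the implied growth rates:

```python
# Bloomberg Intelligence aggregate capex estimates (USD billions)
capex_2023 = 110
capex_2025 = 200
growth = (capex_2025 - capex_2023) / capex_2023
print(f"Implied two-year growth in combined capex: {growth:.0%}")  # about 82%

# Microsoft's stated fiscal-2025 plan vs. its 2020 capital expenditure
msft_fy2025_plan = 80.0
msft_2020_capex = 17.6
print(f"Microsoft FY2025 plan vs. 2020: {msft_fy2025_plan / msft_2020_capex:.1f}x")  # about 4.5x
```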

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet as a non-tech company it has been a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends: It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the US National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »