
How AI is used to surveil workers


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Opaque algorithms meant to analyze worker productivity have been rapidly spreading through our workplaces, as detailed in a new must-read piece by Rebecca Ackermann, published Monday in MIT Technology Review.

Since the pandemic, lots of companies have adopted software to analyze keystrokes or detect how much time workers are spending at their computers. The trend is driven by a suspicion that remote workers are less productive, though that’s not broadly supported by economic research. Still, that belief is behind the efforts of Elon Musk, DOGE, and the Office of Personnel Management to roll back remote work for US federal employees. 

The focus on remote workers, though, misses another big part of the story: algorithmic decision-making in industries where people don’t work at home. Gig workers like ride-share drivers might be kicked off their platforms by an algorithm, with no way to appeal. Productivity systems at Amazon warehouses dictated a pace of work that Amazon’s internal teams found would lead to more injuries, but the company implemented them anyway, according to a 2024 congressional report.

Ackermann posits that these algorithmic tools are less about efficiency and more about control, which workers have less and less of. There are few laws requiring companies to offer transparency about what data is going into their productivity models and how decisions are made. “Advocates say that individual efforts to push back against or evade electronic monitoring are not enough,” she writes. “The technology is too widespread and the stakes too high.”

Productivity tools don’t just track work, Ackermann writes. They reshape the relationship between workers and those in power. Labor groups are pushing back against that shift in power by seeking to make the algorithms that fuel management decisions more transparent. 

The full piece contains so much that surprised me about the widening scope of productivity tools and the very limited means that workers have to understand what goes into them. As the pursuit of efficiency gains political influence in the US, the attitudes and technologies that transformed the private sector may now be extending to the public sector. Federal workers are already preparing for that shift, according to a new story in Wired. For some clues as to what that might mean, read Rebecca Ackermann’s full story.


Deeper Learning

Microsoft announced last week that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Why it matters: Quantum computers promise to crunch computations faster than any conventional computer humans could ever build, which could mean faster discovery of new drugs and scientific breakthroughs. The problem is that qubits—the units of information in quantum computing, which take the place of the typical 1s and 0s—are very, very finicky. Microsoft’s new type of qubit is supposed to make fragile quantum states easier to maintain, but scientists outside the project say there’s a long way to go before the technology can be proved to work as intended. And on top of that, some experts are asking whether rapid advances in applying AI to scientific problems could negate any real need for quantum computers at all. Read more from Rachel Courtland.

Bits and Bytes

X’s AI model appears to have briefly censored unflattering mentions of Trump and Musk

Elon Musk has long alleged that AI models suppress conservative speech. In response, he promised that Grok, the AI model from his company xAI, would be “maximally truth-seeking” (though, as we’ve pointed out previously, making things up is just what AI does). Over the weekend, users noticed that when they asked Grok who the biggest spreader of misinformation is, the model said it had been explicitly instructed not to mention Donald Trump or Elon Musk. An engineering lead at xAI said an unnamed employee had made the change, which has since been reversed. (TechCrunch)

Figure demoed humanoid robots that can work together to put your groceries away

Humanoid robots aren’t typically very good at working with one another. But the robotics company Figure showed off two humanoids helping each other put groceries away, another sign that general-purpose AI models for robotics are helping robots learn faster than ever before. However, we’ve written about how videos featuring humanoid robots can be misleading, so take these developments with a grain of salt. (The Robot Report)

OpenAI is shifting its allegiance from Microsoft to SoftBank

In calls with its investors, OpenAI has signaled that it’s weakening its ties to Microsoft—its largest investor—and partnering more closely with SoftBank. The latter is now working on the Stargate project, a $500 billion effort to build data centers that will support the bulk of the computing power needed for OpenAI’s ambitious AI plans. (The Information)

Humane is shutting down the AI Pin and selling its remnants to HP

One big debate in AI is whether the technology will require its own piece of hardware. Rather than just conversing with AI on our phones, will we need some sort of dedicated device to talk to? Humane got investments from Sam Altman and others to build just that, in the form of a badge worn on your chest. But after poor reviews and sluggish sales, last week the company announced it would shut down. (The Verge)

Schools are replacing counselors with chatbots

School districts, dealing with a shortage of counselors, are rolling out AI-powered “well-being companions” for students to text with. But experts have pointed out the dangers of relying on these tools and say the companies that make them often misrepresent their capabilities and effectiveness. (The Wall Street Journal)

What dismantling America’s leadership in scientific research will mean

Federal workers spoke to MIT Technology Review about the efforts by DOGE and others to slash funding for scientific research. They say the cuts could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. (MIT Technology Review)

Your most important customer may be AI

People are relying more and more on AI models like ChatGPT for recommendations, which means brands are realizing they have to figure out how to rank higher, much as they do with traditional search results. Doing so is a challenge, since AI model makers offer few insights into how they form recommendations. (MIT Technology Review)
