
Scottish Government counters Kintore to Tealing power line criticism


Shadow energy secretary Douglas Lumsden claims Scottish ministers have failed to meet with concerned citizens regarding the proposed Kintore to Tealing power line.

However, a Scottish Government spokesperson told Energy Voice that a consenting application has not yet been submitted by SSEN Transmission.

A Scottish Government spokesperson said: “When an application is received, a full public consultation is carried out, and Scottish Ministers invite representations from members of the public and consult the appropriate community councils, alongside other public bodies.”

SSEN Transmission plans to build a 400kV power line from Kintore to Tealing, which is part of the firm’s planned upgrades to the electricity transmission network across Argyll and Kintyre from 132kV to 275kV.

Lumsden lambasted acting cabinet secretary for net zero and energy Gillian Martin for not engaging with locals and “hiding behind her job title”.

Last year, First Minister John Swinney said he was “sure ministers would be happy to meet campaigners”. However, he explained that politicians would need to observe the ministerial code when engaging with projects that the Scottish Government is assessing.

Lumsden added: “The ministerial code means she [Gillian Martin] would be careful about engagement – not running away from it.”

Lumsden objects to SSEN’s ‘unvarnished plan’

SSEN Transmission refined the planning route for the project in August last year.

The paths for six sections of the power line were debated in a series of consultations with local communities and stakeholders between March and April 2024.

A company spokesperson explained: “We have consulted extensively with local communities in relation to the Kintore to Tealing project, resulting in significant changes to our proposals, including alternative overhead line routes and the relocation of the previously proposed new substation at Fiddes to a new proposed site in Fetteresso Forest.”

However, John Rahtz, chairman of anti-pylon group Deeside Against Pylons, said at the time: “My concerns are about the basic technical solution they’ve chosen, as opposed to just the route.”

North-east MSP Douglas Lumsden.

He argued that there are no refinements to the overhead design chosen by SSEN Transmission that will reduce its impact.

Lumsden raised the same concern, arguing for transmission lines to be buried or routed offshore.

He said: “It is now much easier and less expensive to underground lines or have them out at sea. That should be part of the offer on the table from SSEN.

“But these communities feel as if the original, unvarnished plan is being railroaded through.”

Villagers in Dalmally are appealing against plans for more pylons by SSEN.

However, the choice to offshore transmission cables or bury them comes with its own issues.

SSEN said: “Our extensive consultation with communities comes as part of a £20bn+ investment to upgrade the electricity network across the north of Scotland, a substantial part of which is in subsea transmission links such as Eastern Green Link 2.

“However, technical challenges and geographical constraints limit the use of only offshore or underground solutions, while the high cost of this technology – underground cables at 400kV are estimated to be between 5 and 10 times more expensive than overhead lines – must be considered to limit the cost to energy bill payers.

“Overhead lines can carry substantially more power than subsea or underground cables, with onshore reinforcements supporting the Scottish Government’s target of achieving an additional 8-12GW of onshore wind by 2030, while helping meet local electricity needs and improving network reliability.”
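SSEN's 5–10x cost multiplier for 400kV underground cables can be put into rough perspective with a back-of-envelope calculation. The per-kilometre baseline and route length below are illustrative assumptions, not figures quoted by SSEN:

```python
# Back-of-envelope comparison of overhead vs underground transmission
# costs, using the 5-10x multiplier SSEN cites for 400kV underground
# cables. Baseline cost and route length are assumed for illustration.

OVERHEAD_COST_PER_KM = 2.0  # GBP millions per km (assumed baseline)
ROUTE_LENGTH_KM = 100       # assumed round figure for a long route

def route_cost(multiplier: float) -> float:
    """Total route cost in GBP millions for a given cost multiplier."""
    return OVERHEAD_COST_PER_KM * multiplier * ROUTE_LENGTH_KM

print(f"Overhead:          £{route_cost(1):.0f}m")
print(f"Underground (5x):  £{route_cost(5):.0f}m")
print(f"Underground (10x): £{route_cost(10):.0f}m")
```

On these assumed figures, undergrounding a 100km route would add several hundred million pounds to over a billion pounds against the overhead option, which is the cost-to-billpayers trade-off SSEN describes.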

Eastern Green Link 2 is a 2GW high-voltage direct current (HVDC) power line that is set to connect Peterhead in Scotland to Drax in England.

The £4.3bn project is being undertaken by National Grid and SSEN Transmission. The pair broke ground on the Eastern Green Link 2 (EGL2) subsea transmission cable in September.

UK pylon plans

A Scottish Government spokesperson explained that once SSEN submits an application, the government will follow proper procedure and engage with locals.

“Potential impacts on communities, nature, and cultural heritage, including the cumulative effects of developments, are important considerations in the decision-making process.”

Lumsden joins a list of Tory politicians objecting to the power line project. Last year, Andrew Bowie, Scottish Conservative MP for West Aberdeenshire and Kincardine, stood in opposition to SSEN’s plans.

Bowie said in October: “We want the NESO to spend the next year planning out how it could use underground cables, and undersea where appropriate, without using pylons.

“No doubt there will be physical challenges to that in some areas, but it will be substantially cheaper for the government in the longer term.

“It will also address many of the concerns in my constituency and indeed across Scotland, that the race to net zero will mean an unjust transition for those who would pay the ultimate price for giant pylons being dumped in their garden or field.”

Overhead transmission line, Kintore to Tealing.

SSEN, which is 75% owned by listed energy firm SSE (LON:SSE), has confirmed plans to invest at least £22 billion in “mission critical” grid infrastructure in Scotland by 2031. The firm said the expansion is required to meet the UK Government’s “clean power by 2030” ambitions.

Plans to build thousands of new pylons in rural areas to meet Government targets are sparking backlash in communities across the UK.

In England and Wales, energy secretary Ed Miliband has vowed to “take on the blockers, the delayers, the obstructionists” to the proposed rollout of new pylons, wind turbines and solar panels.

In a clean power “action plan” published in November, the UK’s National Energy System Operator (NESO) urged both governments to speed up planning decisions in order to build more renewables.

NESO noted that the Scottish Government’s energy strategy and just transition plan “does not go into details of the planning and consenting changes required” to deliver its aims but said “it is clear that close collaboration between the UK and devolved governments will be needed”.

How locals can voice their concerns now

Lumsden pointed to 22 separate groups that Gillian Martin has yet to meet with. However, as the government has yet to receive an application, this is not unusual.

There has been no shortage of rural opposition to overhead power lines, with locals objecting to how pylons will disrupt local landscapes.

Campaigner Rahtz previously said: “The prime objection is the visual impact, and the mess it makes of the environment. SSEN should have considered a different form of technology, which is perfectly viable everywhere else in the world.”

The Scottish Government spokesperson explained that the best way for concerned citizens to share their objections before an application is submitted is to contact the firm behind the project.

Electricity pylons with wind turbines in the background.

“The most appropriate way for members of the public and communities potentially affected to make their views known at this stage is to engage directly with SSEN who are responsible for developing their proposals before submitting an application,” the spokesperson commented.

The firm said that it has invited “300,000 people” to consultation events as part of its Pathway to 2030 programme. This has encompassed “220 events and public meetings attended by more than 10,000 people,” a spokesperson commented.

“As part of this, we have received and analysed over 12,000 written responses in what we believe is one of the biggest ever such listening exercises across the north and northeast of Scotland,” the firm added.

“This engagement is ongoing, with our most recently held public engagement events seeking views on potential overhead line alignments including community and landowner proposals around Careston, Drumoak and Echt.”

Late last year, at a Housing Challenge Summit in Aviemore, SSEN laid out plans to build more than 1,000 homes in the north of Scotland.

The energy firm aims to deliver 400 homes in the Highlands and a further 400 in Aberdeenshire to deal with housing shortages as the regions ramp up industrial development.

Recommended for you

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

OPEC Receives Updated Compensation Plans

A statement posted on OPEC’s website this week announced that the OPEC Secretariat has received updated compensation plans from Iraq, the United Arab Emirates (UAE), Kazakhstan, and Oman. A table accompanying this statement showed that these compensation plans amount to a total of 221,000 barrels per day in November, 272,000

Read More »

LogicMonitor closes Catchpoint buy, targets AI observability

The acquisition combines LogicMonitor’s observability platform with Catchpoint’s internet-level intelligence, which monitors performance from thousands of global vantage points. Once integrated, Catchpoint’s synthetic monitoring, network data, and real-user monitoring will feed directly into Edwin AI, LogicMonitor’s intelligence engine. The goal is to let enterprise customers shift from reactive alerting to

Read More »

Akamai acquires Fermyon for edge computing as WebAssembly comes of age

Spin handles compilation from source to WebAssembly bytecode and manages execution on target platforms. The runtime abstracts the underlying technology while preserving WebAssembly’s performance and security characteristics. This bet on WebAssembly standards has paid off as the technology matured.  WebAssembly has evolved significantly beyond its initial browser-focused design to support

Read More »

Winners and losers in the latest Top500 supercomputer list

Winner: Slingshot-11 Slingshot-11 is a 200G proprietary interconnect developed by HPE and its Cray supercomputer subsidiary. As the number of Cray systems increases on the list, so goes the number of Slingshot-11 based systems. The total number of Slingshot-11 systems jumped from 37 and 2024 to 52 this year. Loser:

Read More »

Turkiye Signs for 10-Year LNG Supplies from Eni, SEFE

Turkiye’s state-owned BOTAS has signed separate 10-year agreements for the supply of liquefied natural gas (LNG) from SEFE Securing Energy for Europe GmbH and Eni SpA. Germany’s state-owned SEFE will deliver about five million metric tons per annum (MMtpa) from the fourth quarter of 2028. “This long-term contract builds on the three-year deal concluded earlier this year, through which SEFE is providing over 1.5 million tons of LNG in total”, SEFE said in an online statement Wednesday. “The LNG [under the new agreement] will be delivered from SEFE’s growing global LNG portfolio, which includes a stable foundation of long-term U.S. LNG volumes”, SEFE said. Italy’s state-backed Eni will supply BOTAS around 0.4 MMtpa. The agreement is on top of an earlier one signed September under which BOTAS committed to buying 0.4 MMtpa of LNG for three years from Eni. BOTAS said September 12 it had signed agreements with Eni, SEFE, BP PLC, Cheniere Energy Inc, Equinor ASA, Hartree Partners LP, JERA Co Inc and Shell PLC for around 15 billion cubic meters (529.72 billion cubic feet) of LNG. The volumes are to be delivered to Turkiye in 2025-28.   Eni said in a press release Wednesday, “The agreement is Eni’s first long-term LNG sale to Turkiye, confirming the growing role of LNG in supporting the country’s energy needs, and is in line with Eni’s strategy to diversify its global LNG footprint, expanding its customer base in markets with high potential and growing its LNG portfolio to approximately 20 MTPA [million metric tons per annum] by 2030, leveraging its projects in Congo, Mozambique, U.S., Indonesia and other countries”. On Tuesday Eni said the second phase of Congo LNG in the Republic of the Congo has started operations. The project now has a capacity of three MMtpa or 4.5 billion cubic meters

Read More »

Eni to Acquire Acea Energia

Eni SpA’s renewables arm Plenitude has signed a binding deal to buy power and gas utility Acea Energia SpA, part of Italy’s Acea SpA. “The transaction also includes a 50 percent share in the capital of Umbria Energy SpA”, a joint statement said Wednesday. “Upon completion of the transaction, Plenitude will pay Acea EUR 460 million ($536.26 million), in addition to recognizing normalized net cash of up to EUR 127 million for a total amount of up to EUR 587 million”. “Furthermore, the agreement provides for a possible additional price component of up to EUR 100 million, which will be payable to Acea based on certain performance objectives to be reported as at 30 June 2027”, the companies added. “As a result of this acquisition, Plenitude will incorporate into its portfolio over 1.4 million retail customers in Italy, thus exceeding the total of 11 million customers in Europe and anticipating by two years the customer base target expected for 2028”, the companies said. Currently Plenitude serves 10 million customers and manages a network of over 22,000 electric vehicle charging points, according to the statement. Eni has set a target of 15 million Plenitude customers by 2030. Eni aims to reach over 5.5 gigawatts (GW) of installed renewable generation capacity this year, toward 10 GW by 2028 and 15 GW by 2030, according to a plan it announced February. As of the third quarter of 2025, it had 4.8 GW of installed renewable capacity, according to its quarterly report October 24. “For the Acea Group, the transaction allows consolidation of the growing focus on activities that have a strong connotation with infrastructure”, Wednesday’s statement said. The parties expect to complete the transaction by June 2026, subject to approval by antitrust authorities. “This transaction will allow us to reinvest in infrastructure, innovation

Read More »

Energy Department Releases National Petroleum Council Recommendations to Accelerate Permitting Reform and Strengthen U.S. Energy Infrastructure

WASHINGTON—The U.S. Department of Energy (DOE) today released key studies from the National Petroleum Council (NPC) that provide comprehensive recommendations to help modernize America’s energy infrastructure, streamline federal permitting, and remove regulatory barriers that have stalled the development of critical energy projects. The studies, one on gas-electric coordination and the other on oil and natural gas infrastructure permitting, underscore the urgent need for reforms to strengthen grid reliability and expand domestic energy production. The NPC is a federal advisory committee to the Secretary of Energy composed of leaders from oil and natural gas industries, academia, and other stakeholders. These studies were completed at the request of U.S. Secretary of Energy Chris Wright as part of a broader examination of “Future Energy Systems” and support President Trump’s agenda to unleash American energy, accelerate infrastructure build-out, and ensure affordable, reliable and secure energy for American families. “For years, the Biden Administration advanced policies that made it harder to produce American energy,” said U.S. Secretary of Energy Chris Wright. “The National Petroleum Council’s findings confirm what President Trump has said from day one: America needs more energy infrastructure, less red tape, and serious permitting reform. These recommendations will help make energy more affordable for every American household.” “The studies represent a significant collaborative effort to tackle some of the most complex challenges in our energy infrastructure,” said U.S. Department of Energy Assistant Secretary for the Hydrocarbons and Geothermal Energy Office Kyle Haustveit. 
“The National Petroleum Council recommendations will be instrumental in guiding the Department’s strategies for enhancing grid reliability and streamlining the development of essential energy projects.” The gas-electric coordination study, Reliable Energy: Delivering on the Promise of Gas-Electric Coordination, evaluates how rising natural gas and electricity demand, combined with shifting usage patterns, is straining natural gas pipelines in key regions of the United States. It

Read More »

Black Sea War Insurance Soars 250 Percent

Insurance rates for ships calling at ports in the Black Sea are surging after a series of Ukrainian attacks on vessels with links to Moscow.  The cost of covering visits to Russian ports in the Black Sea has jumped more than threefold, according to Marsh, the world’s largest insurance broker. Rates were between 0.25% and 0.3% of the value of the ship prior to the recent incidents, Marsh said.  Underwriters are now charging as much as 1% for some Ukrainian ports in the Black Sea, according to two people involved in the market, who spoke on condition of anonymity.  Ukraine has claimed attacks on two tankers from Russia’s so-called shadow fleet — vessels that operate in secrecy to skirt sanctions. There have been two other incidents also involving Moscow-linked ships since the end of last week. “For Russian port calls, underwriters are pricing in a broader range of possible strike locations and a higher likelihood of repetition,” said Munro Anderson, Head of Operations at Vessel Protect, which is part of Pen Underwriting and one of the world’s largest marine war risk insurance specialists. “As strikes escalate, so does the probability of Russian retaliation against ships connected to Ukraine.” The blasts, three of which took place in the Black Sea, come against a backdrop of strikes on wider Russian oil infrastructure that have elevated the danger of sailing in the region over the last few weeks. President Vladimir Putin said on Tuesday that Russia could retaliate.  Romania’s defense ministry said Wednesday that divers carried out a mission to neutralize a Sea Baby drone 36 miles east of the city of Constanta, underscoring the risks to shipping for Black Sea nations that aren’t Russia and Ukraine too.  Rates “have been seen to grow steadily and in direct response to further attacks which appear increasingly to

Read More »

Oil Closes Up as Peace Deal Falls Short

Oil edged up after a fresh round of US-Russia talks failed to reach a deal to end Moscow’s war in Ukraine, boosting fears that restrictions on Russian oil supply could remain in place for longer. West Texas Intermediate rose 0.5% to settle near $59, remaining within the tight range prices have been stuck in this week. The Kremlin says talks with a US delegation led by US envoys Steve Witkoff and Jared Kushner were “constructive,” but no deal was made to end the Ukraine war. The talks took place against a backdrop of recent attacks on Russia-linked tankers, with at least one ship manager saying it would stop sending vessels to the country. A deal to end Russia’s war in Ukraine could mean the end of sanctions on Russian oil in a market already staring down concerns about oversupply, providing bearish momentum for crude. Those oversupply fears weren’t heightened, however, after a US government report on Wednesday showed a 574,000 barrel build in crude stocks, smaller than an industry report showing that stockpiles increased by about 2.5 million barrels last week. Gasoline inventories rose the most since May. Geopolitical tensions are keeping the market jittery and adding a risk premium to prices, partly countering surplus concerns. That includes US rhetoric against Venezuela, a major oil producer, with US President Donald Trump suggesting the Pentagon will soon start targeting alleged drug cartels in that country with strikes on land. Oil Prices WTI for January delivery rose 0.53% to settle at $58.95 a barrel in New York. Brent for February settlement gained 0.35% to settle at $62.67 a barrel. What do you think? We’d love to hear from you, join the conversation on the Rigzone Energy Network. The Rigzone Energy Network is a new social experience created for you and all energy

Read More »

Russia Oil Revenue Falls by a Third

The Russian government’s oil proceeds shrank by almost a third in November from a year ago as weaker crude prices and a stronger currency took their toll on revenues. Oil-related taxes declined by 32% to 413.7 billion rubles ($5.3 billion) last month, according to Bloomberg calculations based on finance ministry data published Wednesday. Combined oil and gas revenue fell by 34% to 530.9 billion rubles.  Lower proceeds from those industries — which have accounted for about a quarter of Russia’s budget so far this year — will ramp up pressure on state finances, burdened by military spending on the war against Ukraine that’s well into its fourth year.  Global crude prices have drifted lower ahead of an expected supply glut, and the discount for Russian blends has gotten even steeper after US President Donald Trump blacklisted the nation’s two largest producers, Rosneft PJSC and Lukoil PJSC, to pressure his counterpart Vladimir Putin to end the war in Ukraine.  On a month-to-month basis, oil revenue almost halved, reflecting the fact that one of Russia’s main oil taxes — a profit-based levy — is paid four times a year in March, April, July and October.  Russia’s finance ministry calculated oil revenue based on the average price of Urals — its key export blend — at $53.68 a barrel in October, 17% lower than a year ago. A stronger currency also contributed to lower revenue, as it means producers receive fewer rubles for each dollar earned by selling a barrel of oil. In October, the Russian currency averaged 81.0089 rubles against the US dollar, 15% stronger than a year earlier. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be

Read More »

HPE loads up AI networking portfolio, strengthens Nvidia, AMD partnerships

On the hardware front, HPE is targeting the AI data center edge with a new MX router and the scale-out networking delivery with a new QFX switch. Juniper’s MX series is its flagship routing family aimed at carriers, large-scale enterprise data center and WAN customers, while the QFX line services data center customers anchoring spine/leaf networks as well as top-of-rack systems. The new 1U, 1.6Tbps MX301 multiservice edge router, available now, is aimed at bringing AI inferencing closer to the source of data generation and can be positioned in metro, mobile backhaul, and enterprise routing applications, Rahim said. It includes high-density support for 16 x 1/1025/50GbE, 10 x 100Gb and 4 x 400Gb interfaces. “The MX301 is essentially the on-ramp to provide high speed, secure connections from distributed inference cluster users, devices and agents from the edge all the way to the AI data center,” Rami said. “The requirements here are typically around high performance, but also very high logical skills and integrated security.” In the QFX arena, the new QFX5250 switch, available in 1Q 2026, is a fully liquid-cooled box aimed at tying together Nvidia Rubin and/or AMD MI400 GPUs for AI consumption across the data center. It is built on Broadcom Tomahawk 6 silicon and supports up to 102.4Tbps Ethernet bandwidth, Rahim said.  “The QFX5250 combines HPE liquid cooling technology with Juniper networking software (Junos) and integrated AIops intelligence to deliver a high-performance, power-efficient and simplified operations for next-generation AI inference,” Rami said. Partnership expansions Also key to HPE/Juniper’s AI networking plans are its partnerships with Nvidia and AMD. The company announced its relationship with Nvidia now includes HPE Juniper edge onramp and long-haul data center interconnect (DCI) support in its Nvidia AI Computing by HPE portfolio. This extension uses the MX and Junipers PTX hyperscaler routers to support high-scale, secure

Read More »

What is co-packaged optics? A solution for surging capacity in AI data center networks

When it announced its CPO-capable switches, Nvidia said they would improve resiliency by 10 times at scale compared to previous switch generations. Several factors contribute to this claim, including the fact that the optical switches require four times fewer lasers, Shainer says. Whereas the laser source was previously part of the transceiver, the optical engine is now incorporated onto the ASIC, allowing multiple optical channels to share a single laser. Additionally, in Nvidia’s implementation, the laser source is located outside of the switch. “We want to keep the ability to replace a laser source in case it has failed and needs to be replaced,” he says. “They are completely hot-swappable, so you don’t need to shut down the switch.” Nonetheless, you may often hear that when something fails in a CPO box, you need to replace the entire box. That may be true if it’s the photonics engine embedded in silicon inside the box. “But they shouldn’t fail that often. There are not a lot of moving parts in there,” Wilkinson says. While he understands the argument around failures, he doesn’t expect it to pan out as CPO gets deployed. “It’s a fallacy,” he says. There’s also a simple workaround to the resiliency issue, which hyperscalers are already talking about, Karavalas says: overbuild. “Have 10% more ports than you need or 5%,” he says. “If you lose a port because the optic goes bad, you just move it and plug it in somewhere else.” Which vendors are backing co-packaged optics? In terms of vendors that have or plan to have CPO offerings, the list is not long, unless you include various component players like TSMC. But in terms of major switch vendors, here’s a rundown: Broadcom has been making steady progress on CPO since 2021. It is now shipping “to

Read More »

Nvidia’s $2B Synopsys stake tests independence of open AI interconnect standard

But the concern for enterprise IT leaders is whether Nvidia’s financial stakes in UALink consortium members could influence the development of an open standard specifically designed to compete with Nvidia’s proprietary technology and to give enterprises more choices in the datacenter. Organizations planning major AI infrastructure investments view such open standards as critical to avoiding vendor lock-in and maintaining competitive pricing. “This does put more pressure on UALink since Intel is also a member and also took investment from Nvidia,” Sag said. UALink and Synopsys’s critical role UALink represents the industry’s most significant effort to prevent vendor lock-in for AI infrastructure. The consortium ratified its UALink 200G 1.0 Specification in April, defining an open standard for connecting up to 1,024 AI accelerators within computing pods at 200 Gbps per lane — directly competing with Nvidia’s NVLink for scale-up applications. Synopsys plays a critical role. The company joined UALink’s board in January and in December announced the industry’s first UALink design components, enabling chip designers to build UALink-compatible accelerators. Analysts flag governance concerns Gaurav Gupta, VP analyst at Gartner, acknowledged the tension. “The Nvidia-Synopsys deal does raise questions around the future of UALink as Synopsys is a key partner of the consortium and holds critical IP for UALink, which competes with Nvidia’s proprietary NVLink,” he said. Sanchit Vir Gogia, chief analyst at Greyhound Research, sees deeper structural concerns. “Synopsys is not a peripheral player in this standard; it is the primary supplier of UALink IP and a board member within the UALink Consortium,” he said. “Nvidia’s entry into Synopsys’ shareholder structure risks contaminating that neutrality.”

Read More »

Cooling crisis at CME: A wakeup call for modern infrastructure governance

Organizations should reassess redundancy However, he pointed out, “the deeper concern is that CME had a secondary data center ready to take the load, yet the failover threshold was set too high, and the activation sequence remained manually gated. The decision to wait for the cooling issue to self-correct rather than trigger the backup site immediately revealed a governance model that had not evolved to keep pace with the operational tempo of modern markets.” Thermal failures, he said, “do not unfold on the timelines assumed in traditional disaster recovery playbooks. They escalate within minutes and demand automated responses that do not depend on human certainty about whether a facility will recover in time.” Matt Kimball, VP and principal analyst at Moor Insights & Strategy, said that to some degree what happened in Aurora highlights an issue that may arise on occasion: “the communications gap that can exist between IT executives and data center operators. Think of ‘rack in versus rack out’ mindsets.” Often, he said, the operational elements of that data center environment, such as cooling, power, fire hazards, physical security, and so forth, fall outside the realm of an IT executive focused on delivering IT services to the business. “And even if they don’t fall outside the realm, these elements are certainly not a primary focus,” he noted. “This was certainly true when I was living in the IT world.” Additionally, said Kimball, “this highlights the need for organizations to reassess redundancy and resilience in a new light. Again, in IT, we tend to focus on resilience and redundancy at the app, server, and workload layers. Maybe even cluster level. But as we continue to place more and more of a premium on data, and the terms ‘business critical’ or ‘mission critical’ have real relevance, we have to zoom out

Read More »

Microsoft loses two senior AI infrastructure leaders as data center pressures mount

Microsoft did not immediately respond to a request for comment. Microsoft’s constraints Analysts say the twin departures mark a significant setback for Microsoft at a critical moment in the AI data center race, with pressure mounting from both OpenAI’s model demands and Google’s infrastructure scale. “Losing some of the best professionals working on this challenge could set Microsoft back,” said Neil Shah, partner and co-founder at Counterpoint Research. “Solving the energy wall is not trivial, and there may have been friction or strategic differences that contributed to their decision to move on, especially if they saw an opportunity to make a broader impact and do so more lucratively at a company like Nvidia.” Even so, Microsoft has the depth and ecosystem strength to continue doubling down on AI data centers, said Prabhu Ram, VP for industry research at Cybermedia Research. According to Sanchit Gogia, chief analyst at Greyhound Research, the departures come at a sensitive moment because Microsoft is trying to expand its AI infrastructure faster than physical constraints allow. “The executives who have left were central to GPU cluster design, data center engineering, energy procurement, and the experimental power and cooling approaches Microsoft has been pursuing to support dense AI workloads,” Gogia said. “Their exit coincides with pressures the company has already acknowledged publicly. GPUs are arriving faster than the company can energize the facilities that will house them, and power availability has overtaken chip availability as the real bottleneck.”

Read More »

What is Edge AI? When the cloud isn’t close enough

Many edge devices can periodically send summarized or selected inference output data back to a central system for model retraining or refinement. That feedback loop helps the model improve over time while still keeping most decisions local. And to run efficiently on constrained edge hardware, the AI model is often pre-processed by techniques such as quantization (which reduces precision), pruning (which removes redundant parameters), or knowledge distillation (which trains a smaller model to mimic a larger one). These optimizations reduce the model’s memory, compute, and power demands so it can run more easily on an edge device.

What technologies make edge AI possible?

The concept of the “edge” always assumes that edge devices are less computationally powerful than data centers and cloud platforms. While that remains true, overall improvements in computational hardware have made today’s edge devices much more capable than those designed just a few years ago. In fact, a whole host of technological developments have come together to make edge AI a reality.

Specialized hardware acceleration. Edge devices now ship with dedicated AI accelerators (NPUs, TPUs, GPU cores) and system-on-chip units tailored for on-device inference. For example, companies like Arm have integrated AI-acceleration libraries into standard frameworks so models can run efficiently on Arm-based CPUs.

Connectivity and data architecture. Edge AI often depends on durable, low-latency links (e.g., 5G, Wi-Fi 6, LPWAN) and architectures that move compute closer to data. Merging edge nodes, gateways, and local servers means less reliance on distant clouds. And technologies like Kubernetes can provide a consistent management plane from the data center to remote locations.

Deployment, orchestration, and model lifecycle tooling. Edge AI deployments must support model-update delivery, device and fleet monitoring, versioning, rollback and secure inference — especially when orchestrated across hundreds or thousands of locations. VMware, for instance, is offering traffic management
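As a concrete illustration of the first of those optimizations, the toy sketch below shows symmetric int8 post-training quantization of a weight vector, which cuts storage from 32 bits to 8 bits per weight at the cost of some precision. The helper names are hypothetical; production toolchains such as TensorFlow Lite or ONNX Runtime implement far more sophisticated variants.

```python
def quantize(weights):
    """Symmetric int8 quantization: w_q = round(w / scale)."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero weights
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.90, -0.07]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# each recovered weight is within one quantization step of the original
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

The same idea generalizes to per-channel scales and asymmetric ranges; the point here is only that an 8-bit code plus one shared scale factor can approximate the original weights closely enough for many inference workloads.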

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
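The LLM-as-judge pattern mentioned above can be sketched as a majority vote across several inexpensive judge models. Everything here is hypothetical scaffolding: the `judge_fns` callables stand in for real model API calls, which the article does not specify.

```python
from collections import Counter

def majority_verdict(answer, judge_fns):
    """Ask each judge model to grade the answer and return the
    majority verdict, or flag the lack of consensus."""
    votes = [judge(answer) for judge in judge_fns]  # e.g. "pass" / "fail"
    verdict, count = Counter(votes).most_common(1)[0]
    # require a strict majority before accepting the verdict
    return verdict if count > len(votes) / 2 else "no_consensus"
```

Using an odd number of judges (three or more, as the article suggests) guarantees that a strict majority always exists when every judge returns one of two labels.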

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »