Developers lose focus 1,200 times a day — how MCP could change that

Software developers spend most of their time not writing code; recent industry research found that actual coding accounts for as little as 16% of developers’ working hours, with the rest consumed by operational and supportive tasks. As engineering teams are pressured to “do more with less” and CEOs brag about how much of their codebase is written by AI, a question remains: What is being done to optimize the other 84% of engineers’ working hours?

Keep developers where they are the most productive

A major drain on developer productivity is context switching: the constant hopping between the ever-growing array of tools and platforms needed to build and ship software. A Harvard Business Review study found that the average digital worker flips between applications and websites nearly 1,200 times per day. And every interruption matters. University of California research found that it takes about 23 minutes to fully regain focus after a single interruption, and sometimes the cost is worse: nearly 30% of interrupted tasks are never resumed. Context switching also sits at the center of DORA, one of the most popular software delivery performance frameworks.

In an era when AI-driven companies are trying to empower their employees to do more with less, going beyond “just” giving them access to large language models (LLMs), some trends are emerging. Jarrod Ruhland, principal engineer at Brex, hypothesizes that “developers deliver their highest value when focused within their integrated development environment (IDE).” With that in mind, he set out to find new ways to make that happen, and Anthropic’s new protocol might be one of the keys.

MCP: A protocol to bring context to IDEs

Coding assistants, such as LLM-powered IDEs like Cursor, Copilot and Windsurf, are at the center of a developer renaissance. Their adoption speed is unprecedented: Cursor became the fastest-growing SaaS in history, reaching $100 million ARR within 12 months of launch, and 70% of Fortune 500 companies use Microsoft Copilot.


But these coding assistants were limited to codebase context, which helps developers write code faster but does nothing to reduce context switching. A new protocol is addressing this gap: the Model Context Protocol (MCP). Released in November 2024 by Anthropic, it is an open standard designed to facilitate integration between AI systems, particularly LLM-based tools, and external tools and data sources. The protocol has proven so popular that the number of new MCP servers has grown 500% in the last six months, with an estimated 7 million downloads in June.
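
To make this concrete, here is a minimal sketch of an MCP server built with the FastMCP helper from the official `mcp` Python SDK. The `get_ticket` tool and its hard-coded data are hypothetical stand-ins for a real project-tracker integration; treat it as an illustration of how a server advertises tools to an assistant, not a production integration.

```python
# Minimal MCP server sketch, assuming the official `mcp` Python SDK is installed.
# It exposes one hypothetical tool that an AI coding assistant could call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-tracker")  # the name advertised to MCP clients


@mcp.tool()
def get_ticket(ticket_id: str) -> str:
    """Return the title and description of a ticket by its ID."""
    # Placeholder data; a real server would call the project tracker's API here.
    tickets = {
        "ENG-101": "Add OAuth login: support Google and GitHub identity providers.",
    }
    return tickets.get(ticket_id, f"No ticket found for {ticket_id}")


if __name__ == "__main__":
    # Runs over stdio so an MCP-capable assistant can launch and query it locally.
    mcp.run()
```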

One of the most impactful applications of MCP is its ability to connect AI coding assistants directly to the tools developers rely on every day, streamlining workflows and dramatically reducing context switching.

Take feature development as an example. Traditionally, it involves bouncing between several systems: Reading the ticket in a project tracker, looking at a conversation with a teammate for clarification, searching documentation for API details and, finally, opening the IDE to start coding. Each step lives in a different tab, requiring mental shifts that slow developers down.

With MCP and modern AI assistants like Anthropic’s Claude, that entire process can happen inside the editor.

For example, implementing a feature entirely within a coding assistant becomes a single conversation: the assistant pulls the ticket from the project tracker, retrieves the teammate’s clarification from the team chat, looks up the API details in the documentation and drafts the code, all through MCP connections and without the developer ever leaving the editor.
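
For a rough picture of what happens under the hood when the assistant reaches out to those tools, here is a sketch of an MCP client session using the same `mcp` Python SDK. The server command and the `get_ticket` tool mirror the hypothetical server above; an IDE assistant performs the equivalent discovery and invocation steps internally.

```python
# Sketch of the client side of an MCP interaction, assuming the `mcp` Python SDK.
# An AI coding assistant does the equivalent of this when it uses an MCP tool.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the hypothetical project-tracker server from the earlier sketch.
    server = StdioServerParameters(command="python", args=["project_tracker_server.py"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server advertises (names, descriptions, schemas).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke a tool on the developer's behalf, e.g. fetching a ticket.
            result = await session.call_tool("get_ticket", {"ticket_id": "ENG-101"})
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```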

The same principle can apply to many other engineering workflows; incident response for site reliability engineers (SREs), for instance, could follow a similar in-editor flow, from pulling the alert and recent changes to drafting the status update.

Nothing new under the sun

We’ve seen this pattern before. Over the past decade, Slack has transformed workplace productivity by becoming a hub for hundreds of apps, enabling employees to manage a wide range of tasks without leaving the chat window. Slack’s platform reduced context switching in everyday workflows. 

Riot Games, for example, connected around 1,000 Slack apps, and engineers saw a 27% reduction in the time needed to test and iterate code, 22% faster identification of new bugs and a 24% increase in feature launch rate, all attributed to streamlined workflows and reduced tool-switching friction.

Now, a similar transformation is occurring in software development, with AI assistants and their MCP integrations serving as the bridge to all these external tools. In effect, the IDE could become the new all-in-one command center for engineers, much like Slack has been for general knowledge workers.

MCP may not be enterprise ready

MCP is a relatively nascent standard. Security-wise, for example, MCP has no built-in authentication or permission model, relying instead on external implementations that are still evolving. There is also ambiguity around identity and auditing: the protocol doesn’t clearly distinguish whether an action was triggered by a user or by the AI itself, making accountability and access control difficult without additional custom solutions. Lori MacVittie, distinguished engineer and chief evangelist in F5 Networks’ Office of the CTO, says that MCP is “breaking core security assumptions that we’ve held for a long time.”
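
Until the standard matures, teams typically bolt on their own guardrails outside the protocol. The sketch below shows one hypothetical approach: wrapping tool invocations in an allowlist check and an audit log that records which principal, human or agent, requested the call. The names and fields are illustrative, not part of MCP.

```python
# Hypothetical guardrail layer around MCP tool calls; not part of the protocol,
# just one way a team might add access control and auditing externally.
import json
import time
from dataclasses import dataclass


@dataclass
class Principal:
    """Who asked for the call: a human user or an autonomous agent."""
    id: str
    kind: str  # "user" or "agent"


ALLOWED_TOOLS = {
    "user": {"get_ticket", "search_docs", "create_branch"},
    "agent": {"get_ticket", "search_docs"},  # agents get a narrower set
}


def authorize_and_log(principal: Principal, tool: str, arguments: dict) -> None:
    """Raise if the principal may not call the tool; otherwise append an audit record."""
    if tool not in ALLOWED_TOOLS.get(principal.kind, set()):
        raise PermissionError(f"{principal.kind} {principal.id} may not call {tool}")

    record = {
        "ts": time.time(),
        "principal": principal.id,
        "kind": principal.kind,
        "tool": tool,
        "arguments": arguments,
    }
    with open("mcp_audit.log", "a") as fh:
        fh.write(json.dumps(record) + "\n")


# Example: an agent-initiated call is checked and logged before being forwarded
# to the MCP client session that actually executes it.
authorize_and_log(Principal(id="assistant-session-42", kind="agent"),
                  "get_ticket", {"ticket_id": "ENG-101"})
```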

Another practical limitation arises when too many MCP tools or servers are used simultaneously, for example, inside a coding assistant. Each MCP server advertises a list of tools, with descriptions and parameters, that the AI model needs to consider. Flooding the model with dozens of available tools can overwhelm its context window. Performance degrades noticeably as the tool count grows, and some IDE integrations have imposed hard limits (around 40 tools in Cursor, roughly 20 for the OpenAI agent) to prevent the prompt from bloating beyond what the model can handle.
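
A common workaround is to curate which tools are exposed to the model at any given time. The hypothetical sketch below estimates how much of the prompt the advertised tool metadata would consume and keeps only as many tools as fit within a budget; the character-based estimate and budget value are illustrative stand-ins for a real tokenizer-based count.

```python
# Hypothetical helper for curating MCP tools before they are handed to the model.
# Tool names, descriptions and schemas all count against the context window, so we
# keep only what fits a rough character budget (a stand-in for real token counting).
from dataclasses import dataclass


@dataclass
class ToolSpec:
    name: str
    description: str
    schema: str  # the tool's parameter schema, serialized as JSON


def select_tools(tools: list[ToolSpec], budget_chars: int = 8_000) -> list[ToolSpec]:
    """Greedily keep tools, in priority order, until the prompt budget is spent."""
    selected, used = [], 0
    for tool in tools:
        cost = len(tool.name) + len(tool.description) + len(tool.schema)
        if used + cost > budget_chars:
            break  # anything past this point would bloat the prompt
        selected.append(tool)
        used += cost
    return selected


# Example: with dozens of advertised tools, only the highest-priority ones survive.
catalog = [ToolSpec(f"tool_{i}", "does something useful " * 10, "{...}" * 20)
           for i in range(60)]
print(len(select_tools(catalog)))  # prints far fewer than 60
```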

Finally, there is no sophisticated way for tools to be auto-discovered or contextually suggested beyond listing them all, so developers often have to toggle them manually or curate which tools are active to keep things working smoothly. Recall the Riot Games example of roughly 1,000 Slack apps: at that scale, today’s MCP tooling would struggle in enterprise usage.

Less swivel-chair, more software

The past decade has taught us the value of bringing work to the worker, from Slack channels that pipe in updates to “inbox zero” email methodologies and unified platform engineering dashboards. Now, with AI in our toolkit, we have an opportunity to empower developers to be more productive. If Slack became the hub of business communication, coding assistants are well-positioned to become the hub of software creation: not just where code is written, but where all the context and collaborators coalesce. By keeping developers in their flow, we remove the constant mental gear-shifting that has plagued engineering productivity.

If your organization depends on software delivery, take a hard look at how your developers spend their day; you might be surprised by what you find.

Sylvain Kalache leads AI Labs at Rootly.
