
CrowdStrike’s massive cyber outage one year later: lessons enterprises can learn to improve security


As we wrote in our initial analysis of the CrowdStrike incident, the July 19, 2024, outage served as a stark reminder of the importance of cyber resilience. Now, one year later, both CrowdStrike and the industry have undergone significant transformation, catalyzed by the 78 minutes that changed everything.

“The first anniversary of July 19 marks a moment that deeply impacted our customers and partners and became one of the most defining chapters in CrowdStrike’s history,” CrowdStrike’s President Mike Sentonas wrote in a blog detailing the company’s year-long journey toward enhanced resilience.

The numbers remain sobering: A faulty Channel File 291 update, deployed at 04:09 UTC and reverted just 78 minutes later, crashed 8.5 million Windows systems worldwide. Insurance estimates put losses at $5.4 billion for the top 500 U.S. companies alone, with aviation particularly hard hit with 5,078 flights canceled globally.

Steffen Schreier, senior vice president of product and portfolio at Telesign, a Proximus Global company, captures why this incident resonates a year later: “One year later, the CrowdStrike incident isn’t just remembered, it’s impossible to forget. A routine software update, deployed with no malicious intent and rolled back in just 78 minutes, still managed to take down critical infrastructure worldwide. No breach. No attack. Just one internal failure with global consequences.”


His technical analysis reveals uncomfortable truths about modern infrastructure: “That’s the real wake-up call: even companies with strong practices, a staged rollout, fast rollback, can’t outpace the risks introduced by the very infrastructure that enables rapid, cloud-native delivery. The same velocity that empowers us to ship faster also accelerates the blast radius when something goes wrong.”

Understanding what went wrong

CrowdStrike’s root cause analysis revealed a cascade of technical failures: a mismatch between input fields in their IPC Template Type, missing runtime array bounds checks and a logic error in their Content Validator. These weren’t edge cases but fundamental quality control gaps.
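The missing runtime bounds check is the kind of guard that is simple to state but catastrophic to omit. The sketch below is illustrative only (the actual sensor code runs in kernel space and is not Python): CrowdStrike’s root cause analysis described a template type that declared 21 input fields while the deployed content supplied 20, and no runtime check caught the mismatch before the extra field was read.

```python
def parse_template_instance(fields, expected_count):
    """Validate a content-update record before any field is accessed.

    Hypothetical sketch of the guard that was missing on July 19:
    the template type expected 21 input fields, the shipped content
    provided 20, and the out-of-bounds read crashed the sensor.
    """
    if len(fields) != expected_count:
        # The bounds check that was absent: reject the update outright
        # instead of reading past the end of the supplied fields.
        raise ValueError(
            f"field count mismatch: got {len(fields)}, expected {expected_count}"
        )
    return [f.strip() for f in fields]
```

The point of failing loudly at the validation boundary is that a rejected update degrades gracefully (the sensor keeps its last-known-good content), whereas an unchecked read crashes the host.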

Merritt Baer, incoming Chief Security Officer at Enkrypt AI and advisor to companies including Andesite, provides crucial context: “CrowdStrike’s outage was humbling; it reminded us that even really big, mature shops get processes wrong sometimes. This particular outcome was a coincidence on some level, but it should have never been possible. It demonstrated that they failed to instate some basic CI/CD protocols.”

Her assessment is direct but fair: “Had CrowdStrike rolled out the update in sandboxes and only sent it in production in increments as is best practice, it would have been less catastrophic, if at all.”

Yet Baer also recognizes CrowdStrike’s response: “CrowdStrike’s comms strategy demonstrated good executive ownership. Execs should always take ownership—it’s not the intern’s fault. If your junior operator can get it wrong, it’s my fault. It’s our fault as a company.”

Leadership’s accountability

George Kurtz, CrowdStrike’s founder and CEO, exemplified this ownership principle. In a LinkedIn post reflecting on the anniversary, Kurtz wrote: “One year ago, we faced a moment that tested everything: our technology, our operations, and the trust others placed in us. As founder and CEO, I took that responsibility personally. I always have and always will.”

His perspective reveals how the company channeled crisis into transformation: “What defined us wasn’t that moment; it was everything that came next. From the start, our focus was clear: build an even stronger CrowdStrike, grounded in resilience, transparency, and relentless execution. Our North Star has always been our customers.”

CrowdStrike goes all-in on a new Resilient by Design framework

CrowdStrike’s response centered on their Resilient by Design framework, which Sentonas describes as going beyond “quick fixes or surface-level improvements.” The framework’s three pillars (Foundational, Adaptive and Continuous) represent a comprehensive rethinking of how security platforms should operate.

Key implementations include:

  • Sensor Self-Recovery: Automatically detects crash loops and transitions to safe mode
  • New Content Distribution System: Ring-based deployment with automated safeguards
  • Enhanced Customer Control: Granular update management and content pinning capabilities
  • Digital Operations Center: Purpose-built facility for global infrastructure monitoring
  • Falcon Super Lab: Testing thousands of OS, kernel and hardware combinations
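Ring-based deployment with automated safeguards, as described above, can be sketched in a few lines. This is a minimal illustration of the general technique, not CrowdStrike’s actual content distribution system; the ring names, host representation and rollback label are invented for the example.

```python
def deploy_in_rings(update, rings, health_check):
    """Roll an update out ring by ring, halting and rolling back on failure.

    Illustrative sketch: each ring (a list of hosts, smallest first)
    must pass a health check before the next, larger ring receives
    the update. A failed check triggers automatic rollback of every
    ring deployed so far.
    """
    deployed = []
    for ring in rings:
        for host in ring:
            host["version"] = update  # push the update to this ring
        deployed.append(ring)
        if not all(health_check(h) for h in ring):
            # Automated safeguard: stop the rollout and revert
            # everything that has already received the update.
            for r in deployed:
                for host in r:
                    host["version"] = "last-known-good"
            return False
    return True
```

The design choice worth noting is that the blast radius of a bad update is capped at the smallest ring that exposes the defect, rather than the entire fleet at once.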

“We didn’t just add a few content configuration options,” Sentonas emphasized in his blog. “We fundamentally rethought how customers could interact with and control enterprise security platforms.”

Industry-wide supply chain awakening

The incident forced a broader reckoning about vendor dependencies. Baer frames the lesson starkly: “One huge practical lesson was just that your vendors are part of your supply chain. So, as a CISO, you should test the risk to be aware of it, but simply speaking, this issue fell on the provider side of the shared responsibility model. A customer wouldn’t have controlled it.”

CrowdStrike’s outage has permanently altered vendor evaluation: “I see effective CISOs and CSOs taking lessons from this, around the companies they want to work with and the security they receive as a product of doing business together. I will only ever work with companies that I respect from a security posture lens. They don’t need to be perfect, but I want to know that they are doing the right processes, over time.”

Sam Curry, CISO at Zscaler, added, “What happened to CrowdStrike was unfortunate, but it could have happened to many, so perhaps we don’t put the blame on them with the benefit of hindsight. What I will say is that the world has used this to refocus and has placed more attention to resilience as a result, and that’s a win for everyone, as our collective goal is to make the internet safer and more secure for all.”

The need for a new security paradigm

Schreier’s analysis extends beyond CrowdStrike to fundamental security architecture: “Speed at scale comes at a cost. Every routine update now carries the weight of potential systemic failure. That means more than testing, it means safeguards built for resilience: layered defenses, automatic rollback paths and fail-safes that assume telemetry might disappear exactly when you need it most.”

His most critical insight addresses a scenario many hadn’t considered: “And when telemetry goes dark, you need fail-safes that assume visibility might vanish.”
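The fail-safe Schreier describes inverts the usual default: silence from a fleet is treated as failure, not as good news. A minimal sketch of that principle (hypothetical function names and thresholds, for illustration only):

```python
def watchdog_decide(last_heartbeat, now, timeout, rollback):
    """Fail-safe that assumes telemetry loss means trouble.

    Sketch of a dead-man's-switch watchdog: if no heartbeat has
    arrived within the timeout window, trigger the rollback path
    rather than waiting for explicit confirmation of failure --
    because a crashed endpoint cannot report that it crashed.
    """
    if now - last_heartbeat > timeout:
        rollback()  # e.g. revert to last-known-good content
        return "rolled-back"
    return "healthy"
```

This matters precisely in the July 19 scenario: machines stuck in a boot loop emit no telemetry at all, so any safeguard that requires an error report to fire will never fire.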

This represents a paradigm shift. As Schreier concludes: “Because security today isn’t just about keeping attackers out—it’s about making absolutely sure your own systems never become the single point of failure.”

Looking forward: AI and future challenges

Baer sees the next evolution already emerging: “Ever since cloud has enabled us to build using infrastructure as code, but especially now that AI is enabling us to do security differently, I am looking at how infrastructure decisions are layered with autonomy from humans and AI. We can and should layer on reasoning as well as effective risk mitigation for processes like forced updates, especially at high levels of privilege.”

CrowdStrike’s forward-looking initiatives include:

  • Hiring a Chief Resilience Officer reporting directly to the CEO
  • Project Ascent, exploring capabilities beyond kernel space
  • Collaboration with Microsoft on the Windows Endpoint Security Platform
  • ISO 22301 certification for business continuity management

A stronger ecosystem

One year later, the transformation is evident. Kurtz reflects: “We’re a stronger company today than we were a year ago. The work continues. The mission endures. And we’re moving forward: stronger, smarter, and even more committed than ever.”

To his credit, Kurtz also acknowledges those who stood by the company: “To every customer who stayed with us, even when it was hard, thank you for your enduring trust. To our incredible partners who stood by us and rolled up their sleeves, thank you for being our extended family.”

The incident’s legacy extends far beyond CrowdStrike. Organizations now implement staged rollouts, maintain manual override capabilities and—crucially—plan for when security tools themselves might fail. Vendor relationships are evaluated with new rigor, recognizing that in our interconnected infrastructure, every component is critical.

As Sentonas acknowledges: “This work isn’t finished and never will be. Resilience isn’t a milestone; it’s a discipline that requires continuous commitment and evolution.” The CrowdStrike incident of July 19, 2024, will be remembered not just for the disruption it caused but for catalyzing an industry-wide evolution toward true resilience.

In facing their greatest challenge, CrowdStrike and the broader security ecosystem have emerged with a deeper understanding: protecting against threats means ensuring the protectors themselves can do no harm. That lesson, learned through 78 difficult minutes and a year of transformation, may prove to be the incident’s most valuable legacy.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Microsoft will stop using Chinese workers on US DoD systems

“In response to concerns raised earlier this week about US-supervised foreign engineers, Microsoft has made changes to our support for US Government customers to assure that no China-based engineering teams are providing technical assistance for DoD Government cloud and related services. We remain committed to providing the most secure services

Read More »

US lawmakers question big tech over undersea cable safeguards

This comes after the Federal Communications Commission announced last week that it plans to introduce rules that will prevent companies from connecting undersea communication cables to the US if those systems include Chinese technology or equipment. In a statement, the House Homeland Security Committee added that the letter follows reports

Read More »

Governing AI in utilities: Insights from West Monroe’s AI summit

The rapid evolution of artificial intelligence (AI) presents both opportunities and risks for the utility sector. Infrastructure owners are no strangers to navigating emerging challenges—whether it’s environmental standards, field device modernization, or cybersecurity. But AI represents a new frontier, where the pace of technological change and regulatory complexity demands a

Read More »

Tullow Sells Kenyan Business to Gulf Energy

Tullow Oil PLC said Monday it had signed an agreement to divest its Kenyan portfolio to Gulf Energy Ltd. for at least $120 million in cash. The assets to be sold hold about 463 million barrels of 2C resources, London-based Tullow said in a statement online. The transaction involves the sale of the shares of Tullow Kenya BV by Tullow Overseas Holdings BV to Gulf Energy’s Auron Energy E&P Ltd. The sale and purchase agreement announced Monday builds on a heads of terms agreement announced April. “Tullow retains a back-in right for a 30 percent participation in potential future development phases at no historic cost”, Tullow said. “This right can be exercised if a third-party investor participates in future development phases, whether through a sale or farm-down of the purchaser’s interest in the assets”. The parties expect to complete the transaction this year, subject to approval by the Eastern African country’s Competition Authority, approval of a development plan and the fulfillment of payments. “The consideration will be split into a $40 million payment due on completion, $40 million payable at the earlier of field development plan approval or 30 June 2026, and $40 million payable over five years from the third quarter of 2028 onwards”, Tullow said. “In addition, Tullow will be entitled to royalty payments subject to certain conditions”. Tullow added, “All past and future decommissioning liabilities and all material past and future environmental liabilities will be transferred to the purchaser”. Tullow chief executive Richard Miller said, “For a total consideration of at least $120 million, the Transaction supports our strategic priority to strengthen the balance sheet, with the first two payments totaling $80 million expected before the end of the year”. “We continue to advance plans to optimize our capital structure during 2025”, Miller added. “Coupled with the

Read More »

Equinor to Supply Gas to BASF, Invests in Johan Sverdup Development

Equinor said it entered into a long-term strategic agreement to supply up to 23 terawatt-hours of natural gas, or around 2 billion cubic meters, annually to BASF SE over a 10-year period. The contract secures a substantial share of BASF’s natural gas needs in Europe, Equinor said in a news release. The gas will be sold on market terms, and deliveries will start on October 1. BASF uses natural gas both as an energy source and as a raw material in the production of basic chemicals, according to the release. The partnership will support BASF’s strategy to diversify its energy and raw materials portfolio, Equinor said. “This agreement further strengthens our partnership with BASF. Natural gas not only provides energy security to Europe but also critical feedstock to European industries. I am very happy that our gas also supports BASF’s efforts to reduce their carbon footprint. Gas from Norway comes with the lowest emissions from production and transportation,” Equinor President and CEO Anders Opedal said. “We are very happy to enter into this long-term partnership with Equinor for the reliable supply of low-carbon natural gas for BASF’s operations in Europe. Equinor is a trusted and valued partner. The supply agreement not only comes with competitive terms but also supports our sustainability targets,” BASF CFO and Chief Digital Officer Dirk Elvermann said. For the past several years, Equinor has been supplying gas and liquids to BASF, which develops a broad portfolio of solutions that are components in the manufacturing of everyday consumer goods, such as car interiors, sportswear, personal care items, and agricultural solutions, according to the release. Development Plans for Johan Sverdup Meanwhile, Equinor and its partners plan to invest $1.27 billion (NOK 13 billion) in the third phase of Johan Sverdrup oil field in the North Sea, approximately 87

Read More »

Where Will USA EPS Electricity Generation Come From in 2025?

In its latest short term energy outlook (STEO), which was released earlier this month, the U.S. Energy Information Administration (EIA) projected that total U.S. electricity generation from the electric power sector will come in at 4,244.2 billion kilowatt-hours (BK) in 2025. The EIA projected in its latest STEO that 1,696.9 BK of that total will come from natural gas, which represents around 40 percent, and 1,046.7 BK will come from renewable energy sources, which represents around 25 percent. Renewable energy sources included in the STEO comprised conventional hydropower, wind, biomass, geothermal, and solar. The solar category included generation from utility-scale (larger than one megawatt) solar photovoltaic and solar thermal power plants and excluded generation from small-scale solar photovoltaic systems, the STEO highlighted. The EIA expects wind to contribute the largest figure to the renewable energy sources category this year, at 472.8 BK, the STEO showed. Solar is expected to come second in this category, with 291.5 BK, and biomass third, with 20.5 BK, the STEO highlighted. Nuclear is projected in the STEO to be the third largest source of total U.S. electricity generation from the electric power sector, at 783.8 BK, and coal is forecast to be the fourth largest source, at 702.6 BK. Petroleum – comprising residual fuel oil, distillate fuel oil, petroleum coke, and other petroleum liquids – is expected to contribute 17.4 BK to the total, according to the STEO, which projected that other fossil gases will make up 3.0 BK and other non-renewable fuels will make up 1.4 BK. These comprise batteries, chemicals, hydrogen, pitch, purchased steam, sulfur, nonrenewable waste, and miscellaneous technologies, the STEO pointed out. The EIA highlighted in its July STEO that total U.S. electricity generation from the electric power sector came in at 4,150.9 BK in 2024. That STEO pointed out that

Read More »

Buru Tweaks Timeline for Australia’s Rafael Gas Project

Buru Energy Ltd. has adjusted the timeline for the development of the Rafael natural gas project in Western Australia’s Canning Basin but still aims for a startup late 2027. The Rafael gas and condensate field is in Exploration Permit 428, about 150 kilometers (93.21 miles) east of Broome and around 85 kilometers south of Derby in the Shire of Derby-West Kimberley, according to Buru. Rafael is the only confirmed source of conventional gas and liquids onshore Western Australia north of the North West Shelf Project, according to Buru. First drilled in 2021 and confirmed as a discovery the same year, Rafael has been assessed to hold contingent and unrisked gross recoverable volumes of 85-523 Bscf of gas and 1.8-10.6 MMstb of condensate, according to Buru. Buru eyes a 20-year production life. It expects the project to supply trucked liquefied natural gas and liquids to Pilbara and the Northern Territory. Buru plans to drill two wells, including the 2021 discovery. Under the new timeline, instead of recompleting the discovery well as a producer before drilling a second well called Rafael B, Buru will now drill and test Rafael B first. Drilling is planned to start June 2026. The change aims “to reduce risk and increase the probability of higher reserves”, West Perth-based Buru said in a regulatory filing. Chief executive Thomas Nador said, “The Rafael technical assurance process has delivered valuable information to underpin decision making on the risks and opportunities of our planned Rafael appraisal and production flow test program”. “Drilling and testing the Rafael B appraisal well next is the optimum pathway to proving up the resource and underpinning a robust Final Investment Decision, whilst maintaining our first cashflow target of late 2027”, Nador added. Earlier this month Buru said it had received government approval for a two-year extension

Read More »

Texas Industry Groups Look at June Upstream Employment

According to the Texas Independent Producers and Royalty Owners Association’s (TIPRO) analysis, direct Texas upstream employment for June totaled 205,400. That’s what TIPRO said in a statement sent to Rigzone by the TIPRO team on Friday, which cited the latest Current Employment Statistics (CES) report from the U.S. Bureau of Labor Statistics (BLS). In the statement, TIPRO noted that the June figure was a decline of 2,700 industry positions from May employment numbers, adding that this represented an increase of 200 jobs in oil and gas extraction and a decrease of 2,900 jobs in the services sector. TIPRO said in the statement that fluctuations in monthly employment are normal and subject to revisions with CES data. It also noted in the statement that “demand for talent in the Texas upstream sector remains high” and pointed out “recent policy developments that will support the continued expansion of domestic production and energy infrastructure in the coming years”. “TIPRO’s new workforce data indicated strong job postings for the Texas oil and natural gas industry,” TIPRO said in its statement, highlighting that, according to the association, “there were 8,457 active unique jobs postings for the Texas oil and natural gas industry last month, compared to 8,157 postings in May, and 3,533 new postings, compared to 3,050 in the previous month”. “In comparison, the state of Pennsylvania had 2,689 unique job postings in June, followed by California (2,555), New York (2,265) and Ohio (2,201),” TIPRO continued. “TIPRO reported a total of 51,661 unique job postings nationwide last month within the oil and natural gas sector, including 21,861 new postings,” it went on to state. The industry body noted in the statement that, among the 19 specific industry sectors it uses to define the Texas oil and natural gas industry, Support Activities for Oil and Gas Operations led in the ranking for

Read More »

European Commission Finds No Fraud in Chinese Biofuel Imports

The European Commission, acting on allegations raised by German authorities in March 2023, has failed to confirm any fraud related to the sustainability and emissions savings of biofuels imported from China. “The Commission identified some systemic weaknesses in the way certification audits have been conducted and is taking action to address these issues. Nevertheless, the information gathered did not allow confirmation of the existence of fraud”, the Commission’s Directorate-General for Energy said in a statement online. “The German authorities may perform additional verifications or investigations if they wish to do so”. The investigation was conducted under Article 30 (10) of the Renewable Energy Directive of 2018, amended October 2023. To be eligible for European Union financial support and to count toward the fulfilment of renewable energy targets, biofuels must meet certain criteria that protect biodiversity and soil and prevent deforestation. The amount of greenhouse gas emissions avoided by using biofuels must also meet certain thresholds. On the lower end, emission savings must be at least 50 percent for biofuels consumed in the transport sector. On the upper end, as updated in the 2023 directive, savings must be at least 80 percent for electricity, heating and cooling production from biomass fuels. “In close cooperation with the German authorities, it [the Commission] collected input from numerous stakeholders and reviewed audit reports from the voluntary certification scheme that certified the economic operators concerned”, the statement said. It added, “To tackle the risk of fraud in the biofuels market, the Commission is undertaking a range of actions in the short and medium term, in particular in areas where the Implementing Regulation on sustainability certification (EU/2022/996) can be further strengthened”. 
The Commission has formed a working group with EU states under the Committee on the Sustainability of Biofuels, Bioliquids and Biomass Fuels to review the certification law. The Commission

Read More »

‘Significant’ outage at Alaska Airlines not a security incident, but a hardware breakdown

The airline told Network World that when the critical piece of what it described as “third-party multi-redundant hardware” failed unexpectedly, “it impacted several of our key systems that enable us to run various operations.” The company is currently working with its vendor to replace the faulty equipment at the data center. The airline has cancelled more than 150 flights since Sunday evening, including 64 on Monday. The company said additional flight disruptions are likely as it repositions aircraft and crews throughout its network. Alaska Airlines emphasized that the safety of its flights was never compromised, and that “the IT outage is not related to any other current events, and it’s not connected to the recent cybersecurity incident at Hawaiian Airlines.” The airline did not provide additional information to Network World about the specifics of the outage. “There are many redundant components that can fail,” said Roberts, noting that it could have been something as simple as a RAID array (which combines multiple physical data storage components into one or more logical units). Or, on the network side, it could have been the failure of a pair of load balancers. “It’s interesting that redundancy didn’t save them,” said Roberts. “Perhaps multiple pieces of hardware were impacted by the same issue, like a firmware update. Or, maybe they’re just really unlucky.”

Read More »

Cisco upgrades 400G optical receiver to boost AI infrastructure throughput

“In the data center, what’s really changed in the last year or so is that with AI buildouts, there’s much, much more optics that are part of 400G and 800G. It’s not so much using 10G and 25G optics, which we still sell a ton of, for campus applications. But for AI infrastructure, the 400G and 800G optics are really the dominant optics for that application,” Gartner said. Most of the AI infrastructure builds have been for training models, especially in hyperscaler environments, Gartner said. “I expect, towards the tail end of this year, we’ll start to see more enterprises deploying AI infrastructure for inference. And once they do that, because it has an Nvidia GPU attached to it, it’s going to be a 400G or 800G optic.” Core enterprise applications – such as real-time trading, high-frequency transactions, multi-cloud communications, cybersecurity analytics, network forensics, and industrial IoT – can also utilize the higher network throughput, Gartner said. 

Read More »

Supermicro bets big on 4-socket X14 servers to regain enterprise trust

In April, Dell announced its PowerEdge R470, R570, R670, and R770 servers with Intel Xeon 6 Processors with P-cores, but with single and double-socket servers. Similarly, Lenovo’s ThinkSystem V4 servers are also based on the Intel Xeon 6 processor but are limited to dual socket configurations. The launch of 4-socket servers by Supermicro reflects a growing enterprise need for localized compute that can support memory-bound AI and reduce the complexity of distributed architectures. “The modern 4-socket servers solve multiple pain points that have intensified with GenAI and memory-intensive analytics. Enterprises are increasingly challenged by latency, interconnect complexity, and power budgets in distributed environments. High-capacity, scale-up servers provide an architecture that is more aligned with low-latency, large-model processing, especially where data residency or compliance constraints limit cloud elasticity,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Launching a 4-socket Xeon 6 platform and packaging it within their modular ‘building block’ strategy shows Supermicro is focusing on staying ahead in enterprise and AI data center compute,” said Devroop Dhar, co-founder and MD at Primus Partner. A critical launch after major setbacks Experts peg this to be Supermicro’s most significant product launch since it became mired in governance and regulatory controversies. In 2024, the company lost Ernst & Young, its second auditor in two years, following allegations by Hindenburg Research involving accounting irregularities and the alleged export of sensitive chips to sanctioned entities. Compounding its troubles, Elon Musk’s AI startup xAI redirected its AI server orders to Dell, a move that reportedly cost Supermicro billions in potential revenue and damaged its standing in the hyperscaler ecosystem. Earlier this year, HPE signed a $1 billion contract to provide AI servers for X, a deal Supermicro was also bidding for. 
“The X14 launch marks a strategic reinforcement for Supermicro, showcasing its commitment

Read More »

Moving AI workloads off the cloud? A hefty data center retrofit awaits

“If you have a very specific use case, and you want to fold AI into some of your processes, and you need a GPU or two and a server to do that, then, that’s perfectly acceptable,” he says. “What we’re seeing, kind of universally, is that most of the enterprises want to migrate to these autonomous agents and agentic AI, where you do need a lot of compute capacity.” Racks of brand-new GPUs, even without new power and cooling infrastructure, can be costly, and Schneider Electric often advises cost-conscious clients to look at previous-generation GPUs to save money. GPU and other AI-related technology is advancing so rapidly, however, that it’s hard to know when to put down stakes. “We’re kind of in a situation where five years ago, we were talking about a data center lasting 30 years and going through three refreshes, maybe four,” Carlini says. “Now, because it is changing so much and requiring more and more power and cooling you can’t overbuild and then grow into it like you used to.”

Read More »

My take on the Gartner Magic Quadrant for LAN infrastructure? Highly inaccurate

Fortinet being in the leader quadrant may surprise some given they are best known as a security vendor, but the company has quietly built a broad and deep networking portfolio. I have no issue with them being considered a leader and believe for security conscious companies, Fortinet is a great option. Challenger Cisco is the only company listed as a challenger, and its movement out of the leader quadrant highlights just how inaccurate this document is. There is no vendor that sells more networking equipment in more places than Cisco, and it has led enterprise networking for decades. Several years ago, when it was a leader, I could argue the division of engineering between Meraki and Catalyst could have pushed them out, but it didn’t. So why now? At its June Cisco Live event, the company launched a salvo of innovation including AI Canvas, Cisco AI Assistant, and much more. It’s also continually improved the interoperability between Meraki and Catalyst and announced several new products. AI Canvas is a completely new take, was well received by customers at Cisco Live, and reinvents the concept of AIOps. As I stated above, because of the December cutoff time for information gathering, none of this was included, but that makes Cisco’s representation false. Also, I find this MQ very vague in its “Cautions” segment. As an example, it states: “Cisco’s product strategy isn’t well-aligned with key enterprise needs.” Some details here would be helpful. In my conversations with Cisco, which includes with Chief Product Officer and President Jeetu Patel, the company has reiterated that its strategy is to help customers be AI-ready with products that are easier to deploy and manage, more automated, and with a lower cost to run. That seems well-aligned with customer needs. If Gartner is hearing customers want networks

Read More »

Equinix, AWS embrace liquid cooling to power AI implementations

With AWS, it deployed In-Row Heat Exchangers (IRHX), a custom-built liquid cooling system designed specifically for servers using Nvidia’s Blackwell GPUs, its most powerful but also hottest-running processors, used for AI training and inference. The IRHX unit has three components: a water-distribution cabinet, an integrated pumping unit, and in-row fan-coil modules. It uses direct-to-chip liquid cooling, just like the Equinix servers: cold plates attached to the chips draw off their heat, which is carried away by the liquid. The warmed coolant then flows through the coils of heat exchangers, where high-speed fans blow across them to shed the heat, much like a car radiator. This type of cooling is nothing new; Vertiv, CoolIT, Motivair, and Delta Electronics all sell direct-to-chip liquid cooling options. But AWS separates the pumping unit from the fan-coil modules, allowing a single pumping system to support a large number of fan units. These modular fans can be added or removed as cooling requirements evolve, giving AWS the flexibility to adjust the system per row and site. This led to some concern that Amazon would disrupt the market for liquid cooling, but as a Dell’Oro Group analyst put it, Amazon develops custom technologies for itself and does not go into competition or business with other data center infrastructure companies.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
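The excerpt mentions two related ideas: using an LLM as a judge, and running three or more cheap models and comparing their answers. A minimal sketch of the multi-model voting pattern is below; the `call_model` function is a hypothetical stub standing in for real LLM API calls, and the voting logic is a simple illustration, not any vendor's actual implementation.

```python
from collections import Counter

def call_model(model_name: str, prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    # In practice this would hit a provider's API; here it returns canned answers.
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[model_name]

def consensus_answer(prompt: str, models: list[str]) -> str:
    # Query each model, then keep the most common answer (simple majority voting).
    # A judge LLM could replace this vote by scoring each candidate instead.
    answers = [call_model(m, prompt) for m in models]
    return Counter(answers).most_common(1)[0][0]

print(consensus_answer("What is the capital of France?", ["model-a", "model-b", "model-c"]))
# prints "Paris" — two of the three stub models agree
```

As model prices fall, this kind of redundancy becomes affordable: disagreement between models is a cheap signal that an answer needs human review or a stronger judge model.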

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »