Alibaba’s new open source Qwen3-235B-A22B-2507 beats Kimi-2 and offers low compute version

Chinese e-commerce giant Alibaba has made waves globally in the tech and business communities with its own family of “Qwen” generative AI large language models, beginning with the launch of the original Tongyi Qianwen LLM chatbot in April 2023 through the release of Qwen 3 in April 2025.

Well, not only are its models powerful and score high on third-party benchmark tests at completing math, science, reasoning, and writing tasks, but for the most part, they’ve been released under permissive open source licensing terms, allowing organizations and enterprises to download them, customize them, run them, and generally use them for all variety of purposes, even commercial. Think of them as an alternative to DeepSeek.

This week, Alibaba’s “Qwen Team,” as its AI division is known, released the latest updates to its Qwen family, and they’re already attracting attention once more from AI power users in the West for their top performance, in one case, edging out even the new Kimi-2 model from rival Chinese AI startup Moonshot released in mid-July 2025.


The new Qwen3-235B-A22B-Instruct-2507 model — released on the AI code-sharing community Hugging Face alongside a “floating point 8,” or FP8, version, which we’ll cover in more depth below — improves on the original Qwen 3 in reasoning tasks, factual accuracy, and multilingual understanding. It also outperforms Claude Opus 4’s “non-thinking” version.

The new Qwen3 model update also delivers better coding results, alignment with user preferences, and long-context handling, according to its creators. But that’s not all…

Read on for what else it offers enterprise users and technical decision-makers.

FP8 version lets enterprises run Qwen 3 with far less memory and far less compute

In addition to the new Qwen3-235B-A22B-2507 model, the Qwen Team released an “FP8” version, which stands for 8-bit floating point, a reduced-precision format that lets the model run with far less memory and processing power — without noticeably affecting its performance.

In practice, this means organizations can run a model with Qwen3’s capabilities on smaller, less expensive hardware or more efficiently in the cloud. The result is faster response times, lower energy costs, and the ability to scale deployments without needing massive infrastructure.

This makes the FP8 model especially attractive for production environments with tight latency or cost constraints. Teams can scale Qwen3’s capabilities down to single-node GPU instances or local development machines, avoiding the need for massive multi-GPU clusters. It also lowers the barrier to private fine-tuning and on-premises deployments, where infrastructure resources are finite and total cost of ownership matters.

Even though the Qwen team didn’t release official figures, comparisons to similar FP8 quantized deployments suggest the efficiency savings are substantial. Here’s a practical illustration:

Metric                | FP16 Version (Instruct) | FP8 Version (Instruct-FP8)
----------------------|-------------------------|---------------------------
GPU Memory Use        | ~88 GB                  | ~30 GB
Inference Speed       | ~30–40 tokens/sec       | ~60–70 tokens/sec
Power Draw            | High                    | ~30–50% lower
Number of GPUs Needed | 8× A100s or similar     | 4× A100s or fewer

Estimates based on industry norms for FP8 deployments. Actual results vary by batch size, prompt length, and inference framework (e.g., vLLM, Transformers, SGLang).
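
To make the deployment math concrete, here is a minimal sketch of loading the FP8 checkpoint with vLLM’s offline inference API. The model ID matches the Hugging Face release, but the GPU count, sampling settings, and prompt are illustrative assumptions rather than official guidance:

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint, sharded across 4 GPUs per the estimate above.
llm = LLM(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
    tensor_parallel_size=4,  # illustrative; size this to your hardware
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize the key obligations in this contract: ..."], params)
print(outputs[0].outputs[0].text)
```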

No more ‘hybrid reasoning’… instead, Qwen will release separate reasoning and instruct models!

Perhaps most interesting of all, the Qwen Team announced it will no longer pursue a “hybrid” reasoning approach, which it introduced with Qwen 3 in April and which seemed to be inspired by an approach pioneered by the sovereign AI collective Nous Research.

This allowed users to toggle on a “reasoning” mode, letting the model engage in its own self-checking and produce “chains of thought” before responding.

In a way, it was designed to mimic the reasoning capabilities of powerful proprietary models such as OpenAI’s “o” series (o1, o3, o4-mini, o4-mini-high), which also produce “chains-of-thought.”

However, unlike those rival models, which engage in such “reasoning” for every prompt, Qwen 3’s reasoning mode could be manually switched on or off: users could click a “Thinking Mode” button on the Qwen website chatbot, or type “/think” before a prompt when running the model locally or privately.

The idea was to give users control to engage the slower and more token-intensive thinking mode for more difficult prompts and tasks, and use a non-thinking mode for simpler prompts. But again, this put the onus on the user to decide. While flexible, it also introduced design complexity and inconsistent behavior in some cases.
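
For the hybrid Qwen 3 models, this switch is also exposed programmatically: the chat template accepts an enable_thinking flag. A minimal sketch using Hugging Face transformers and the hybrid Qwen3-235B-A22B checkpoint (the prompt is illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")
messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]

# Thinking on: the model emits a chain-of-thought block before its answer.
prompt_think = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)

# Thinking off: fewer tokens and faster responses, suited to simpler prompts.
prompt_fast = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
```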

As the Qwen team wrote in its announcement post on X:

“After talking with the community and thinking it through, we decided to stop using hybrid thinking mode. Instead, we’ll train Instruct and Thinking models separately so we can get the best quality possible.”

With the 2507 update — an instruct, or non-reasoning, model only for now — Alibaba is no longer straddling both approaches in a single model. Instead, separate model variants will be trained for instruction and reasoning tasks respectively.

The result is a model that adheres more closely to user instructions, generates more predictable responses, and, as benchmark data shows, improves significantly across multiple evaluation domains.

Performance benchmarks and use cases

Compared to its predecessor, the Qwen3-235B-A22B-Instruct-2507 model delivers measurable improvements:

  • MMLU-Pro scores rise from 75.2 to 83.0, a notable gain in general knowledge performance.
  • GPQA and SuperGPQA benchmarks improve by 15–20 percentage points, reflecting stronger factual accuracy.
  • Reasoning tasks such as AIME25 and ARC-AGI show more than double the previous performance.
  • Code generation improves, with LiveCodeBench scores increasing from 32.9 to 51.8.
  • Multilingual support expands, aided by improved coverage of long-tail languages and better alignment across dialects.

The model maintains a mixture-of-experts (MoE) architecture, activating 8 out of 128 experts during inference, with a total of 235 billion parameters—22 billion of which are active at any time.
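
The “A22B” suffix reflects exactly this sparsity: only the chosen experts’ parameters participate in each forward pass. As a generic illustration of top-k expert routing (not Qwen’s actual router code), the selection step looks roughly like this:

```python
import torch
import torch.nn.functional as F

def route_tokens(hidden, router_w, k=8):
    """Generic top-k MoE routing: each token activates only k experts."""
    logits = hidden @ router_w                      # (tokens, n_experts) scores
    topk_scores, topk_idx = logits.topk(k, dim=-1)  # keep the k best experts per token
    gates = F.softmax(topk_scores, dim=-1)          # mixture weights over chosen experts
    return topk_idx, gates                          # which experts fire, and how much

# Toy shapes: 4 tokens, hidden size 64, 128 experts (as in Qwen3-235B-A22B).
idx, gates = route_tokens(torch.randn(4, 64), torch.randn(64, 128))
```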

As mentioned before, the FP8 version introduces fine-grained quantization for better inference speed and reduced memory usage.

Enterprise-ready by design

Unlike many open-source LLMs, which are often released under restrictive research-only licenses or require API access for commercial use, Qwen3 is squarely aimed at enterprise deployment.

The model carries a permissive Apache 2.0 license, which means enterprises can use it freely for commercial applications. They may also:

  • Deploy models locally or through OpenAI-compatible APIs using vLLM and SGLang
  • Fine-tune models privately using LoRA or QLoRA without exposing proprietary data (see the sketch after this list)
  • Log and inspect all prompts and outputs on-premises for compliance and auditing
  • Scale from prototype to production using dense variants (from 0.6B to 32B) or MoE checkpoints
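
On the fine-tuning point, here is a minimal LoRA sketch using Hugging Face PEFT. It targets the small dense Qwen3-0.6B variant for readability, and the adapter settings and target module names are common conventions for Qwen-style attention layers, not an official recipe:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base weights stay frozen; only the small adapter matrices are trained,
# so proprietary data never has to leave your infrastructure.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
lora = LoraConfig(
    r=16,                                 # low-rank adapter dimension (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```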

Alibaba’s team also introduced Qwen-Agent, a lightweight framework that abstracts tool invocation logic for users building agentic systems.

Benchmarks like TAU-Retail and BFCL-v3 suggest the instruction model can competently execute multi-step decision tasks—typically the domain of purpose-built agents.
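
Qwen-Agent wraps this kind of tool-calling loop for you, but the underlying mechanics run over the same OpenAI-compatible function-calling schema that vLLM and SGLang expose. A hypothetical single-turn sketch (the endpoint URL and the get_order_status tool are invented for illustration):

```python
import json
from openai import OpenAI

# Point the standard OpenAI client at a locally served Qwen3 endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical enterprise tool
        "description": "Look up the fulfillment status of a retail order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[{"role": "user", "content": "Where is order 8841?"}],
    tools=tools,
)

# If the model decides a tool is needed, it returns a structured call.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```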

Community and industry reactions

The release has already been well received by AI power users.

Paul Couvert, AI educator and founder of private LLM chatbot host Blue Shell AI, posted a comparison chart on X showing Qwen3-235B-A22B-Instruct-2507 outperforming Claude Opus 4 and Kimi K2 on benchmarks like GPQA, AIME25, and Arena-Hard v2, calling it “even more powerful than Kimi K2… and even better than Claude Opus 4.”

AI influencer NIK (@ns123abc) commented on its rapid impact: “You’re laughing. Qwen-3-235B made Kimi K2 irrelevant after only one week despite being one quarter the size and you’re laughing.”

Meanwhile, Jeff Boudier, head of product at Hugging Face, highlighted the deployment benefits: “Qwen silently released a massive improvement to Qwen3… it tops best open (Kimi K2, a 4x larger model) and closed (Claude Opus 4) LLMs on benchmarks.”

He praised the availability of an FP8 checkpoint for faster inference, 1-click deployment on Azure ML, and support for local use via MLX on Mac or INT4 builds from Intel.

The overall tone from developers has been enthusiastic, as the model’s balance of performance, licensing, and deployability appeals to both hobbyists and professionals.

What’s next for the Qwen team?

Alibaba is already laying the groundwork for future updates. A separate reasoning-focused model is in the pipeline, and the Qwen roadmap points toward increasingly agentic systems capable of long-horizon task planning.

Multimodal support, seen in Qwen2.5-Omni and Qwen-VL models, is also expected to expand further.

And already, rumors and rumblings have started as Qwen team members tease yet another update to their model family, with changes on their web properties revealing URL strings for a new Qwen3-Coder-480B-A35B-Instruct model, likely a 480-billion-parameter mixture-of-experts (MoE) model with a 1-million-token context window.

What Qwen3-235B-A22B-Instruct-2507 ultimately signals is not just another leap in benchmark performance, but a maturation of open models as viable alternatives to proprietary systems.

The flexibility of deployment, strong general performance, and enterprise-friendly licensing give the model a unique edge in a crowded field.

For teams looking to integrate advanced instruction-following models into their AI stack—without the limitations of vendor lock-in or usage-based fees—Qwen3 is a serious contender.
