
Why everyone in AI is freaking out about DeepSeek




As of a few days ago, only the nerdiest of nerds (I say this as one) had ever heard of DeepSeek, a Chinese AI subsidiary of the equally evocatively named High-Flyer Capital Management, a quantitative analysis (or “quant”) firm that launched in 2015.

Yet within the last few days, it’s been arguably the most discussed company in Silicon Valley. That’s largely thanks to the release of DeepSeek R1, a new large language model that performs “reasoning” similar to OpenAI’s current best-available model o1 — taking multiple seconds or minutes to answer hard questions and solve complex problems as it reflects on its own analysis in a step-by-step, or “chain of thought” fashion.
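For readers curious what that “chain of thought” looks like in practice: R1 surfaces its intermediate reasoning before the final answer, delimited (in DeepSeek’s released weights) by `<think>` tags. Here is a minimal, illustrative sketch of separating the two, assuming that tag convention — the example completion is made up, not real model output:

```python
def split_reasoning(raw: str) -> tuple[str, str]:
    """Split an R1-style completion into (chain_of_thought, final_answer).

    Assumes the model wraps its reasoning in <think>...</think>, as
    DeepSeek R1's open weights do; other models use other markers.
    """
    open_tag, close_tag = "<think>", "</think>"
    start = raw.find(open_tag)
    end = raw.find(close_tag)
    if start == -1 or end == -1:
        # No visible reasoning trace; treat the whole output as the answer.
        return "", raw.strip()
    thought = raw[start + len(open_tag):end].strip()
    answer = raw[end + len(close_tag):].strip()
    return thought, answer

# A made-up completion for illustration:
raw = "<think>9.11 < 9.9 because 0.11 < 0.90</think>9.9 is larger."
thought, answer = split_reasoning(raw)
```

Hosted APIs often return the two parts as separate fields instead, but the underlying idea is the same: the model spends tokens reasoning before it commits to an answer.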

Not only that, but DeepSeek R1 scored as high as or higher than OpenAI’s o1 on a variety of third-party benchmarks (tests that measure AI performance at answering questions on various subject matter), and was reportedly trained at a fraction of the cost (around $5 million), with far fewer graphics processing units (GPUs), which are under a strict export embargo imposed by the U.S., OpenAI’s home turf.

But unlike o1, which is available only to paying ChatGPT subscribers of the Plus tier ($20 per month) and more expensive tiers (such as Pro at $200 per month), DeepSeek R1 was released as a fully open source model, which also explains why it has quickly rocketed up the charts of AI code sharing community Hugging Face’s most downloaded and active models.

Also, because it is fully open source, people have already fine-tuned and trained many variations of the model for different task-specific purposes, such as making it small enough to run on a mobile device or combining it with other open source models. And if you want to use it for development purposes, DeepSeek’s API costs are more than 90% cheaper than the equivalent o1 model from OpenAI.
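The “more than 90% cheaper” claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-million-token prices below are illustrative placeholders, not quoted rate cards — swap in current published pricing:

```python
# Back-of-the-envelope check of the "90%+ cheaper" API-cost claim.
# These per-million-token prices are assumed for illustration only.
o1_usd_per_million_output_tokens = 60.00   # placeholder, not a quoted rate
r1_usd_per_million_output_tokens = 2.19    # placeholder, not a quoted rate

savings = 1 - (r1_usd_per_million_output_tokens
               / o1_usd_per_million_output_tokens)
print(f"Under these assumed prices, R1 is {savings:.0%} cheaper per output token")
```

With these placeholder numbers, the savings works out to roughly 96% — comfortably past the 90% threshold, which is why the gap matters so much for anyone running models at volume.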

Most impressively of all, you don’t even need to be a software engineer to use it: DeepSeek offers a free website and mobile app, even for U.S. users, with an R1-powered chatbot interface very similar to OpenAI’s ChatGPT. And, once again, DeepSeek undercut or “mogged” OpenAI by connecting this powerful reasoning model to web search — something OpenAI hasn’t yet done (web search is only available on the less powerful GPT family of models at present).

An open and shut irony

There’s a pretty delicious, or maybe disconcerting irony to this given OpenAI’s founding goals to democratize AI to the masses. As NVIDIA Senior Research Manager Jim Fan put it on X: “We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive – truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely.”

Or as X user @SuspendedRobot put it, referencing reports that DeepSeek appears to have been trained on question-answer outputs and other data generated by ChatGPT: “OpenAI stole from the whole internet to make itself richer, DeepSeek stole from them and give it back to the masses for free I think there is a certain british folktale about this”

But Fan isn’t the only one to sit up and take note of DeepSeek’s success. The open source availability of DeepSeek R1, its high performance, and the fact that it seemingly “came out of nowhere” to challenge the former leader of generative AI have sent shockwaves throughout Silicon Valley and far beyond, based on my conversations with, and reading of, various engineers, thinkers, and leaders. Even if not “everyone” is freaking out about it, as my hyperbolic headline suggests, it’s certainly the talk of the town in tech and business circles.

A message posted to Blind, the app for sharing anonymous gossip in Silicon Valley, has been making the rounds suggesting Meta is in crisis over the success of DeepSeek because of how quickly it surpassed Meta’s own efforts to be the king of open source AI with its Llama models.

‘This changes the whole game’

X user @tphuang wrote compellingly: “DeepSeek has commoditized AI outside of very top-end. Lightbulb moment for me in 1st photo. R1 is so much cheaper than US labor cost that many jobs will get automated away over next 5 yrs,” later noting why DeepSeek’s R1 is more enticing to users than even OpenAI’s o1:

“3 huge issues w/ o1:
1) too slow
2) too expensive
3) lack of control for end user/reliance on OpenAI
R1 solves all of them. A company can buy their own Nvidia GPUs, run these models. Don’t have to worry about additional costs or slow/unresponsive OpenAI servers”

@tphuang also posed a compelling analogy as a question: “Will DeepSeek be to LLM what Android became to OS world?”

Web entrepreneur Arnaud Bertrand didn’t mince words about the startling implications of DeepSeek’s success, either, writing on X: “There’s no overstating how profoundly this changes the whole game. And not only with regards to AI, it’s also a massive indictment of the US’s misguided attempt to stop China’s technological development, without which Deepseek may not have been possible (as the saying goes, necessity is the mother of inventions).”

The censorship issue

However, others have sounded cautionary notes on DeepSeek’s rapid rise, arguing that as a startup operated out of China, it is necessarily subject to that country’s laws and content censorship requirements.

Indeed, my own usage of DeepSeek on the iOS app here in the U.S. found it would not answer questions about Tiananmen Square, the site of the 1989 pro-democracy student protests and uprising and the subsequent violent crackdown by the Chinese military, which resulted in at least 200, and possibly thousands, of deaths — earning it the name “Tiananmen Square Massacre” in Western media outlets.

Ben Hylak, a former Apple human interface designer and co-founder of AI product analytics platform Dawn, posted on X how asking about this subject caused DeepSeek R1 to enter a circuitous loop.

As a member of the press myself, I of course take freedom of speech and expression extremely seriously; it is arguably one of the most fundamental causes I champion.

Yet I would be remiss not to note that OpenAI’s models and products including ChatGPT also refuse to answer a whole range of questions about even innocuous content — especially pertaining to human sexuality and erotic/adult, NSFW subject matter.

It’s not an apples-to-apples comparison, of course. And there will be some for whom the resistance to relying on foreign technology makes them skeptical of DeepSeek’s ultimate value and utility. But there’s no denying its performance and low cost.

And at a time when 16.5% of all U.S. goods are imported from China, it’s hard for me to caution against using DeepSeek R1 on the basis of censorship concerns or security risks — especially when the model code is freely available to download, take offline, use on-device in secure environments, and fine-tune at will.

However, I definitely detect some existential anxiety about the “fall of the West” and “rise of China” motivating some of the animated discussion around DeepSeek. Others have already connected it to how U.S. users joined the app Xiaohongshu (aka “Little Red Book”) when TikTok was briefly banned in this country, only to be amazed at the quality of life in China depicted in the videos shared there. DeepSeek R1’s arrival occurs in this narrative context — one in which China appears (and by many metrics clearly is) ascendant while the U.S. appears (and by many metrics also is) in decline.

The first but hardly the last Chinese AI model to shake the world

DeepSeek R1 also won’t be the last Chinese AI model to threaten the dominance of Silicon Valley giants — even as they, like OpenAI, raise more money than ever for their ambitions to develop artificial general intelligence (AGI): programs that outperform humans at most economically valuable work.

Just yesterday, another Chinese model from TikTok parent company ByteDance — called Doubao-1.5-pro — was released with performance matching OpenAI’s non-reasoning GPT-4o model on third-party benchmarks, but again at 1/50th the cost.

Chinese models have gotten so good, so fast, that even those outside the tech industry are taking note: The Economist magazine just ran a piece on DeepSeek’s success and that of other Chinese AI efforts, and political commentator Matt Bruenig posted on X: “I have been extensively using Gemini, ChatGPT, and Claude for NLRB document summary for nearly a year. Deepseek is better than all of them at it. The chatbot version of it is free. Price to use it’s API is 99.5% below the price of OpenAI’s API. [shrug emoji]”

How does OpenAI respond?

Little wonder OpenAI co-founder and CEO Sam Altman said today that the company would bring its yet-to-be-released second reasoning model family, o3, to ChatGPT even for free users. OpenAI still appears to be carving its own path with more proprietary and advanced models — setting the industry standard.

But the question becomes: with DeepSeek, ByteDance, and other Chinese AI companies nipping at its heels, how long can OpenAI remain in the lead at making and releasing new cutting-edge AI models? And if and when it falls, how hard and how fast will its decline be?

OpenAI does have another historical precedent going for it, though. If DeepSeek and Chinese AI models do indeed become to LLMs what Google’s open source Android became to mobile operating systems — taking the lion’s share of the market for a while — consider how the Apple iPhone, with its locked-down, proprietary, all-in-house approach, managed to carve off the high end of the market and steadily expand downward from there, especially in the U.S., to the point that it now owns nearly 60% of the domestic smartphone market.

Still, for all those spending big bucks to use AI models from leading labs, DeepSeek shows the same capabilities may be available for much cheaper and with much greater control. And in an enterprise setting, that may be enough to win the ballgame.

Nvidia aims to bring AI to wireless

Key features of ARC-Compact include: Energy Efficiency: Utilizing the L4 GPU (72-watt power footprint) and an energy-efficient ARM CPU, ARC-Compact aims for a total system power comparable to custom baseband unit (BBU) solutions currently in use. 5G vRAN support: It fully supports 5G TDD, FDD, massive MIMO, and all O-RAN

Read More »

Netgear’s enterprise ambitions grow with SASE acquisition

Addressing the SME security gap The acquisition directly addresses a portfolio gap that Netgear (Nasdaq:NTGR) has identified through customer feedback.  According to Badjate, customers have been saying that they like the Netgear products, but they also really need more security capabilities. Netgear’s target market focuses on organizations with fewer than

Read More »

IBM’s cloud crisis deepens: 54 services disrupted in latest outage

Rawat said IBM’s incident response appears slow and ineffective, hinting at procedural or resource limitations. The situation also raises concerns about IBM Cloud’s adherence to zero trust principles, its automation in threat response, and the overall enforcement of security controls. “The recent IBM Cloud outages are part of a broader

Read More »

CNOOC Announces Seventh Upstream Startup in Chinese Waters This Year

CNOOC Ltd. has begun production at the Weizhou 5-3 oilfield in the South China Sea, its seventh announced startup offshore China in 2025. Weizhou 5-3 is expected to reach a peak output of about 10,000 barrels a day next year, the state-backed oil and gas explorer and producer said in an online statement Monday. The field produces medium crude. Weizhou 5-3 is in the South China Sea’s Beibu Gulf, or Gulf of Tonkin, in waters around 35 meters (114.83 feet) deep. The development includes a wellhead platform, as well as uses existing facilities. CNOOC Ltd., majority-owned by China National Offshore Oil Corp., plans to commission seven production wells and two water injection wells. CNOOC Ltd. owns 51 percent of the project. Smart Oil Investment Ltd. holds 49 percent. Previously in 2025 CNOOC Ltd. announced three startups in the Bohai Sea and three in the South China Sea. The Bohai Sea projects are the Caofeidian 6-4 oilfield adjustment, phase 2 of the Luda 5-2 North field and the Bozhong 26-6 field. The South China Sea projects are Wenchang 19-1 oilfield phase 2, the Dongfang 29-1 field and the Panyu 11-12/10-1/10-2 Oilfield Adjustment Joint Development Project. The Caofeidian 6-4 adjustment project is expected to achieve 11,000 barrels of oil equivalent a day (boed) in peak production 2026. The oil is light crude. Luda 5-2 North phase 2 could reach about 6,700 boed in peak production next year. Phase 1 went online 2022 as the first Chinese oilfield to produce from superheavy oil reservoirs through thermal recovery, according to CNOOC Ltd. It said of Luda 5-2 North phase 2, “CNOOC Limited made major technological breakthroughs in this project and significantly enhanced the development efficiency of offshore super heavy oil”. “Through optimized Jet Pump Injection-Production Technology, the project realized efficient and economic development of heavy

Read More »

SAF Firm Completes Combination; Up for Nasdaq Listing

Sustainable aviation fuel (SAF) firm XCF Global Capital, Inc. said it has completed its business combination with special purpose acquisition company Focus Impact BH3 Acquisition in line with its plan for a public listing. The combined company will operate under the name XCF Global, Inc. and its class A common stock is expected to begin trading on the Nasdaq Capital Market under the ticker symbol “SAFX” on June 9, the company said in a news release. XCF Global’s New Rise Reno facility, located in the Reno-Tahoe Industrial Complex in Storey County, Nevada, began commercial production in February of so-called “neat” SAF, which is totally free of all fossil fuels and not blended with conventional jet fuel, with a nameplate production capacity of 38 million gallons of neat SAF per year, according to the release. The first customer deliveries of neat SAF were completed in March, the company said. The company stated it is advancing a pipeline of production sites in Nevada, North Carolina, and Florida to expand SAF capacity and support long-term growth. “The completion of this transaction marks a transformational step for XCF Global and the decarbonization of the aviation industry,” XCF Global CEO Mihir Dange said. “With commercial production underway, first deliveries completed, and a proven business model in place, we are entering the public markets with momentum and a clear path to growth. XCF Global is positioned as a market leader at the intersection of aviation and decarbonization – standing at the forefront of a high-growth opportunity in synthetic aviation fuel. We offer the public capital markets access to one of the fastest-growing sectors in the global energy transition, and we are proud to be leading the shift toward a lower-carbon future for aviation”. “We are thrilled to have completed the business combination with XCF Global and

Read More »

ADNOC Expands STEM Education Program ‘to Empower UAE Students in AI’

ADNOC announced, in a release posted on its site recently, that it has expanded its Science, Technology, Engineering and Mathematics (STEM) education program “to empower UAE students in artificial intelligence (AI) and advanced technology through an initiative called ‘STEM for Life: Future of AI Schools Challenge’”. The release highlighted that the Challenge was launched in January 2025 and recently held its finals at the Abu Dhabi Energy Center. The Challenge received 14,500 applicants from 351 schools across the country, according to the release, which pointed out that 896 teachers helped students to “design, build, and pitch AI solutions that addressed one of three themes: creating real-world impact, demonstrating blue sky thinking, or winning the hearts and minds of local communities”. A total of 1,500 submissions were received, with 80 students in 27 teams selected to attend the final, the release noted. Winning teams pitched their projects to a jury which included members from the Ministry of Industry and Advanced Technology, the Ministry of Education, Abu Dhabi Early Childhood Authority, ADNOC, Khalifa University, ADNOC Technology Academy, Dubai Institute of Design and Innovation, Microsoft, and Neubio, the release stated. Following an assessment by the jury, nine teams each were awarded the gold, silver, and bronze positions respectively, the release said, adding that submissions “featured impressive AI-powered solutions”. The final was attended by Sultan Ahmed Al Jaber, Minister of Industry and Advanced Technology and ADNOC Managing Director and Group CEO, Sarah bint Yousif Al Amiri, Minister of Education, Abdulla Humaid Al Jarwan, Chairman of the Abu Dhabi Department of Energy, Hajer Ahmed Mohamed Al Thehli, Secretary-General of the Education, Human Development and Community Council, Khalaf Abdulla Rahma Al Hammadi, Director-General of the Abu Dhabi Pension Fund, and senior ADNOC executives, the release pointed out. 
The release also noted that, during the final, ADNOC

Read More »

ScottishPower Allots About $300MM for UK Power Grid Modernization

ScottishPower Energy Networks (SPEN), Iberdrola’s distribution company in the United Kingdom, will invest more than EUR 262 million ($298.8 million) in the modernization of the United Kingdom’s electricity grid. SPEN said in a media release that six partners will continue working on the maintenance and upgrade of more than 20,000 kilometers (12,400 miles) of overhead lines across the network over the next four years. SPEN partners include Scottland-based Aureos, Gaeltec, and PLPC, which will support the six license districts in central and southern Scotland (Ayrshire and Clyde South, Central and Fife, Dumfries and Galloway, Edinburgh and Borders, Glasgow and Clyde North, Lanarkshire). The company said it is also partnering with Emerald Power, IES, and Network Plus, which will support the license districts in Mid-Cheshire, Merseyside, Dee Valley and Mid Wales, Wirral and North Wales. “Ensuring we have the partners, resources, and technical skills in place to deliver on our bold and ambitious plans for our network is vital for the modern and resilient grid needed to support the doubling of demand”, Nicola Connelly, SPEN CEO, said. “These contracts not only support significant investment in our overhead line network, they allow us to build on the solid foundations created with our supply chain partners and give certainty and confidence to further invest in their skills and people.  It’s a win-win on both sides and we look forward to working together to make a long and lasting difference for all our communities – from Anstruther to Anglesey”. The contracts will support over 500 jobs – including more than 50 new linesmen roles – nationwide, with companies based in and around ScottishPower’s Scotland and Manweb license areas. “This is an extremely significant milestone for Emerald Power and provides the opportunity to further invest in our business – recruiting, training, and upskilling the resources needed to deliver

Read More »

Fennex to Deploy AI-Powered Safety System across EnQuest’s UK Operations

Fennex Ltd. has bagged a multi-year deal from EnQuest plc to deploy the flagship AI-powered Behaviour-Based Safety System (BBSS) across EnQuest’s UK operations.   EnQuest oversees a varied portfolio of offshore assets in the North Sea, which includes Thistle, Heather, Magnus, and the Kraken FPSO, along with the Sullom Voe Terminal located onshore Shetland, recognized as one of the largest oil terminals in Europe, Fennex noted in a media release. Fennex added that the BBSS is already live across all of EnQuest’s UK offshore assets and the Sullom Voe Terminal. This rapid deployment was achieved in just eight weeks. “BBSS is now deployed across all EnQuest’s UK-operated offshore assets, and for the first time at a major onshore terminal”, Adrian Brown, Managing Director at Fennex, said. “EnQuest was eager to roll out the platform quickly, and thanks to strong collaboration, we were able to go live both offshore and onshore in record time”. “We identified BBSS as an opportunity to make a step change in operational safety through making it easier and more user-friendly for personnel to participate and allowing us to make more effective use of the resulting leading data. It provides full visibility of engagement in our safety reporting and real-time data, giving us immediate insight into reported issues and the ability to act swiftly for the best outcomes in our operations”, EnQuest’s Director of HSE and Wells, Ian McKimmie, added. As the collaboration progresses, Fennex and EnQuest are working together to reveal even more value – leveraging advanced analytics, behavioral insights, and AI-driven predictive safety tools to foster a culture of proactive, intelligence-led safety, Fennex said. To contact the author, email [email protected] WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to

Read More »

Canada’s Oil Sands Emissions Intensity Falls for Sixth Year

Canada’s oil sands industry reduced its emissions per barrel for the sixth straight year in 2023, even as one growing portion of the sector moved in the opposite direction, according to new Alberta government data released Thursday. The emissions intensity of all oil sands sites fell to the equivalent of 0.399 metric tons of carbon dioxide per cubic meter of bitumen produced, down from 0.404 in 2022, the data show. The gain reflects improvements at oil sands mines, where bitumen is dug from the ground. However, in situ oil sands, which use wells similar to traditional oil producers, saw emissions per barrel rise.  Even with the efficiency improvement, total emissions rose to the equivalent 80.1 million metric tons of carbon dioxide, up from 78.8 million in 2022, the data show. That’s the highest in data back to 2011. The oil sands’ declining energy intensity — to the lowest in data stretching back to 2011 — is welcome news for an industry that has struggled with a reputation for being climate unfriendly, prompting some investors to shun it altogether. However, the rising emissions intensity at well sites presents a challenge for the sector, as the method’s lower costs make it increasingly popular among producers. While the average intensity of oil sands producers is higher than the average for the global oil industry overall, drillers ‘emissions profiles vary widely around the world, said Kevin Birn, chief analyst for Canadian oil markets for S&P Global. The oil sands “fits well within the range of carbon intensity of oil and gas we see in the world,” Birn said in an interview. All of the oil sands mines reduced their emissions intensity with Canadian Natural Resources Ltd.’s Horizon making the biggest gain for the year.  In situ production facilities, which include the more than 250,000 barrel-a-day Suncor

Read More »

LiquidStack launches cooling system for high density, high-powered data centers

The CDU is serviceable from the front of the unit, with no rear or end access required, allowing the system to be placed against the wall. The skid-mounted system can come with rail and overhead piping pre-installed or shipped as separate cabinets for on-site assembly. The single-phase system has high-efficiency dual pumps designed to protect critical components from leaks and a centralized design with separate pump and control modules reduce both the number of components and complexity. “AI will keep pushing thermal output to new extremes, and data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise,” said Joe Capes, CEO of LiquidStack in a statement. “With up to 10MW of cooling capacity at N, N+1, or N+2, the GigaModular is a platform like no other—we designed it to be the only CDU our customers will ever need. It future-proofs design selections for direct-to-chip liquid cooling without traditional limits or boundaries.”

Read More »

Enterprises face data center power design challenges

” Now, with AI, GPUs need data to do a lot of compute and send that back to another GPU. That connection needs to be close together, and that is what’s pushing the density, the chips are more powerful and so on, but the necessity of everything being close together is what’s driving this big revolution,” he said. That revolution in new architecture is new data center designs. Cordovil said that instead of putting the power shelves within the rack, system administrators are putting a sidecar next to those racks and loading the sidecar with the power system, which serves two to four racks. This allows for more compute per rack and lower latency since the data doesn’t have to travel as far. The problem is that 1 mW racks are uncharted territory and no one knows how to manage the power, which is considerable now. ”There’s no user manual that says, hey, just follow this and everything’s going to be all right. You really need to push the boundaries of understanding how to work. You need to start designing something somehow, so that is a challenge to data center designers,” he said. And this brings up another issue: many corporate data centers have power plugs that are like the ones that you have at home, more or less, so they didn’t need to have an advanced electrician certification. “We’re not playing with that power anymore. You need to be very aware of how to connect something. Some of the technicians are going to need to be certified electricians, which is a skills gap in the market that we see in most markets out there,” said Cordovil. A CompTIA A+ certification will teach you the basics of power, but not the advanced skills needed for these increasingly dense racks. Cordovil

Read More »

HPE Nonstop servers target data center, high-throughput applications

HPE has bumped up the size and speed of its fault-tolerant Nonstop Compute servers. There are two new servers – the 8TB, Intel Xeon-based Nonstop Compute NS9 X5 and Nonstop Compute NS5 X5 – aimed at enterprise customers looking to upgrade their transaction processing network infrastructure or support larger application workloads. Like other HPE Nonstop systems, the two new boxes include compute, software, storage, networking and database resources as well as full-system clustering and HPE’s specialized Nonstop operating system. The flagship NS9 X5 features support for dual-fabric HDR200 InfiniBand interconnect, which effectively doubles the interconnect bandwidth between it and other servers compared to the current NS8 X4, according to an HPE blog detailing the new servers. It supports up to 270 networking ports per NS9 X system, can be clustered with up to 16 other NS9 X5s, and can support 25 GbE network connectivity for modern data center integration and high-throughput applications, according to HPE.

Read More »

AI boom exposes infrastructure gaps: APAC’s data center demand to outstrip supply by 42%

“Investor confidence in data centres is expected to strengthen over the remainder of the decade,” the report said. “Strong demand and solid underlying fundamentals fuelled by AI and cloud services growth will provide a robust foundation for investors to build scale.” Enterprise strategies must evolve With supply constrained and prices rising, CBRE recommended that enterprises rethink data center procurement models. Waiting for optimal sites or price points is no longer viable in many markets. Instead, enterprises should pursue early partnerships with operators that have robust development pipelines and focus on securing power-ready land. Build-to-suit models are becoming more relevant, especially for larger capacity requirements. Smaller enterprise facilities — those under 5MW — may face sustainability challenges in the long term. The report suggested that these could become “less relevant” as companies increasingly turn to specialized colocation and hyperscale providers. Still, traditional workloads will continue to represent up to 50% of total demand through 2030, preserving value in existing facilities for non-AI use cases, the report added. The region’s projected 15 to 25 GW gap is more than a temporary shortage — it signals a structural shift, CBRE said. Enterprises that act early to secure infrastructure, invest in emerging markets, and align with power availability will be best positioned to meet digital transformation goals. “Those that wait may find themselves locked out of the digital infrastructure they need to compete,” the report added.

Read More »

Cisco bolsters DNS security package

The software can block domains associated with phishing, malware, botnets, and other high-risk categories such as cryptomining or new domains that haven’t been reported previously. It can also create custom block and allow lists and offers the ability to pinpoint compromised systems using real-time security activity reports, Brunetto wrote. According to Cisco, many organizations leave DNS resolution to their ISP. “But the growth of direct enterprise internet connections and remote work make DNS optimization for threat defense, privacy, compliance, and performance ever more important,” Cisco stated. “Along with core security hygiene, like a patching program, strong DNS-layer security is the leading cost-effective way to improve security posture. It blocks threats before they even reach your firewall, dramatically reducing the alert pressure your security team manages.” “Unlike other Secure Service Edge (SSE) solutions that have added basic DNS security in a ‘checkbox’ attempt to meet market demand, Cisco Secure Access – DNS Defense embeds strong security into its global network of 50+ DNS data centers,” Brunetto wrote. “Among all SSE solutions, only Cisco’s features a recursive DNS architecture that ensures low-latency, fast DNS resolution, and seamless failover.”

Read More »

HPE Aruba unveils raft of new switches for data center, campus modernization

And in large-scale enterprise environments embracing collapsed-core designs, the switch acts as a high-performance aggregation layer. It consolidates services, simplifies network architecture, and enforces security policies natively, reducing complexity and operational cost, Gray said. In addition, the switch offers the agility and security required at colocation facilities and edge sites. Its integrated Layer 4 stateful security and automation-ready platform enable rapid deployment while maintaining robust control and visibility over distributed infrastructure, Gray said. The CX 10040 significantly expands the capacity it can provide and the roles it can serve for enterprise customers, according to one industry analyst. “From the enterprise side, this expands on the feature set and capabilities of the original 10000, giving customers the ability to run additional services directly in the network,” said Alan Weckel, co-founder and analyst with The 650 Group. “It helps drive a lower TCO and provide a more secure network.”  Aimed as a VMware alternative Gray noted that HPE Aruba is combining its recently announced Morpheus VM Essentials plug-in package, which offers a hypervisor-based package aimed at hybrid cloud virtualization environments, with the CX 10040 to deliver a meaningful alternative to Broadcom’s VMware package. “If customers want to get out of the business of having to buy VM cloud or Cloud Foundation stuff and all of that, they can replace the distributed firewall, microsegmentation and lots of the capabilities found in the old VMware NSX [networking software] and the CX 10k, and Morpheus can easily replace that functionality [such as VM orchestration, automation and policy management],” Gray said. The 650 Group’s Weckel weighed in on the idea of the CX 10040 as a VMware alternative:

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
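The multi-model pattern the excerpt alludes to — query several cheaper models and let agreement decide — can be sketched as below. The model functions are stand-ins, not real API calls, and the majority-vote rule is one simple way to combine answers; a dedicated judge model scoring each response is another.

```python
from collections import Counter

# Hedged sketch of polling three models and taking the majority answer.
# Each fake_model_* stands in for a call to a different (cheap) LLM.

def fake_model_a(question: str) -> str: return "Paris"
def fake_model_b(question: str) -> str: return "Paris"
def fake_model_c(question: str) -> str: return "Lyon"

def majority_answer(question, models):
    """Ask every model, return the most common answer and its agreement rate."""
    answers = [model(question) for model in models]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

answer, agreement = majority_answer(
    "What is the capital of France?",
    [fake_model_a, fake_model_b, fake_model_c],
)
print(answer)  # Paris
```

A low agreement rate is itself a useful signal: it can trigger escalation to a stronger (more expensive) model, which is where the falling cost of inference the article mentions comes in.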

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »