Stay Ahead, Stay ONMINE

DeepSeek helps speed up threat detection while raising national security concerns

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More DeepSeek and its R1 model aren’t wasting any time rewriting the rules of cybersecurity AI in real-time, with everyone from startups to enterprise providers piloting integrations to their new model this month. R1 was developed in […]

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More


DeepSeek and its R1 model aren’t wasting any time rewriting the rules of cybersecurity AI in real-time, with everyone from startups to enterprise providers piloting integrations to their new model this month.

R1 was developed in China and is based on pure reinforcement learning (RL) without supervised fine-tuning. It is also open source, making it immediately attractive to nearly every cybersecurity startup that is all-in on open-source architecture, development and deployment.

DeepSeek’s $6.5 million investment in the model is delivering performance that matches OpenAI’s o1-1217 in reasoning benchmarks while running on lower-tier Nvidia H800 GPUs. DeepSeek’s pricing sets a new standard with significantly lower costs per million tokens compared to OpenAI’s models. The deep seek-reasoner model charges $2.19 per million output tokens, while OpenAI’s o1 model charges $60 for the same. That price difference and its open-source architecture have gotten the attention of CIOs, CISOs, cybersecurity startups and enterprise software providers alike.

(Interestingly, OpenAI claims DeepSeek used its models to train R1 and other models, going so far as to say the company exfiltrated data through multiple queries.)   

An AI breakthrough with hidden risks that will keep emerging

Central to the issue of the models’ security and trustworthiness is whether censorship and covert bias are incorporated into the model’s core, warned Chris Krebs, inaugural director of the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and, most recently, chief public policy officer at SentinelOne.

“Censorship of content critical of the Chinese Communist Party (CCP) may be ‘baked-in’ to the model, and therefore a design feature to contend with that may throw off objective results,” he said. “This ‘political lobotomization’ of Chinese AI models may support…the development and global proliferation of U.S.-based open source AI models.”

He pointed out that, as the argument goes, democratizing access to U.S. products should increase American soft power abroad and undercut the diffusion of Chinese censorship globally. “R1’s low cost and simple compute fundamentals call into question the efficacy of the U.S. strategy to deprive Chinese companies of access to cutting-edge western tech, including GPUs,” he said. “In a way, they’re really doing ‘more with less.’”

Merritt Baer, CISO at Reco and advisor to multiple security startups, told VentureBeat that, “in fact, training [DeepSeek-R1] on broader internet data controlled by internet sources in the west (or perhaps better described as lacking Chinese controls and firewalls), might be one antidote to some of the concerns. I’m less worried about the obvious stuff, like censoring any criticism of President Xi, and more concerned about the harder-to-define political and social engineering that went into the model. Even the fact that the model’s creators are part of a system of Chinese influence campaigns is a troubling factor — but not the only factor we should consider when we select a model.”

With DeepSeek training the model with Nvidia H800 GPUs that were approved for sale in China but lack the power of the more advanced H100 and A100 processors, DeepSeek is further democratizing its model to any organization that can afford the hardware to run it. Estimates and bills of materials explaining how to build a system for $6,000 capable of running R1 are proliferating across social media. 

R1 and follow-on models will be built to circumvent U.S. technology sanctions, a point Krebs sees as a direct challenge to the U.S. AI strategy. 

Enkrypt AI’s DeepSeek-R1 Red Teaming Report finds that the model is vulnerable to generating “harmful, toxic, biased, CBRN and insecure code output.” The red team continues that: “While it may be suitable for narrowly scoped applications, the model shows considerable vulnerabilities in operational and security risk areas, as detailed in our methodology. We strongly recommend implementing mitigations if this model is to be used.”  

Enkrypt AI’s red team also found that Deepseek-R1 is three times more biased than Claude 3 Opus, four times more vulnerable to generating insecure code than Open AI’s o1, and four times more toxic than GPT-4o. The red team also found that the model is eleven times more likely to create harmful output than Open AI’s o1.

Know the privacy and security risks before sharing your data

DeepSeek’s mobile apps now dominate global downloads, and the web version is seeing record traffic, with all the personal data shared on both platforms captured on servers in China. Enterprises are considering running the model on isolated servers to reduce the threat. VentureBeat has learned about pilots running on commoditized hardware across organizations in the U.S.

Any data shared on mobile and web apps is accessible by Chinese intelligence agencies.

China’s National Intelligence Law states that companies must “support, assist and cooperate” with state intelligence agencies. The practice is so pervasive and such a threat to U.S. firms and citizens that the Department of Homeland Security has published a Data Security Business Advisory. Due to these risks, the U.S. Navy issued a directive banning DeepSeek-R1 from any work-related systems, tasks or projects.

Organizations who are quick to pilot the new model are going all-in on open source and isolating test systems from their internal network and the internet. The goal is to run benchmarks for specific use cases while ensuring all data remains private. Platforms like Perplexity and Hyperbolic Labs allow enterprises to securely deploy R1 in U.S. or European data centers, keeping sensitive information out of reach of Chinese regulations. Please see an excellent summary of this aspect of the model.

Itamar Golan, CEO of startup Prompt Security and a core member of OWASP’s Top 10 for large language models (LLMs), argues that data privacy risks extend beyond just DeepSeek. “Organizations should not have their sensitive data fed into OpenAI or other U.S.-based model providers either,” he noted. “If data flow to China is a significant national security concern, the U.S. government may want to intervene through strategic initiatives such as subsidizing domestic AI providers to maintain competitive pricing and market balance.”

Recognizing R1’s security flaws, Prompt added support to inspect traffic generated by DeepSeek-R1 queries in a matter of days after the model was introduced.

During a probe of DeepSeek’s public infrastructure, cloud security provider Wiz’s research team discovered a ClickHouse database open on the internet with more than a million lines of logs with chat histories, secret keys and backend details. There was no authentication enabled on the database, allowing for quick potential privilege escalation.

Wiz’s Research’s discovery underscores the danger of rapidly adopting AI services that aren’t built on hardened security frameworks at scale. Wiz responsibly disclosed the breach, prompting DeepSeek to lock down the database immediately. DeepSeek’s initial oversight emphasizes three core lessons for any AI provider to keep in mind when introducing a new model.

First, perform red teaming and thoroughly test AI infrastructure security before ever even launching a model. Second, enforce least privileged access and adopt a zero-trust mindset, assume your infrastructure has already been breached and trust no multidomain connections across systems or cloud platforms. Third, have security teams and AI engineers collaborate and own how the models safeguard sensitive data.

DeepSeek creates a security paradox

Krebs cautioned that the model’s real danger isn’t just where it was made but how it was made. DeepSeek-R1 is the byproduct of the Chinese technology industry, where private sector and national intelligence objectives are inseparable. The concept of firewalling the model or running it locally as a safeguard is an illusion because, as Krebs explains, the bias and filtering mechanisms are already “baked-in” at a foundational level.

Cybersecurity and national security leaders agree that DeepSeek-R1 is the first of many models with exceptional performance and low cost that we’ll see from China and other nation-states that enforce control of all data collected.

Bottom line: Where open source has long been viewed as a democratizing force in software, the paradox this model creates shows how easily a nation-state can weaponize open source at will if they choose to.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

US Justice Department sues to block HPE’s $14 billion Juniper buy

“Even well-resourced networking companies in complementary networking markets are unlikely to be strong alternatives to Cisco and HPE immediately, as several face reputational headwinds and have not developed the distribution networks for rapid growth in the enterprise-grade WLAN market,” the DOJ stated. The DOJ said that if the deal were

Read More »

Cisco touts ‘Internet of Agents’ for secure AI agent collaboration

AI-native agentic applications: This layer encompasses the full spectrum of agentic applications—from business workflow automation to scientific discovery to social interaction. Think of it like a movie production, where specialized teams (writers, actors, cinematographers, editors) collaborate to create something greater than any individual could achieve. Similarly, AI agents will specialize

Read More »

CompTIA unveils AI Essentials training resource

“CompTIA AI Essentials in tailored to help learners of all backgrounds master the fundamentals of AI,” said Katie Hoenicke, senior vice president of product development at CompTIA, said in a statement. “IT professionals, workers looking to progress from digitally literate to digitally fluent, students, and others can learn how to

Read More »

Shell, Thebe Near $1B South Africa Oil Asset Sales Pact

Shell Plc and its South African partner are close to ending a valuation dispute, paving the way for the sale of the oil giant’s local downstream assets for as much as $1 billion, according to people familiar with the matter. The London-based company has a network of 600 service stations across the country, and Thebe Investment Corp. — owned by Black investors — has a 28% stake in the local retail operation, bought in 2002 for about $70 million. The dispute arose in 2022, when Thebe wanted to exit its stake and the parties couldn’t settle on the value of the holding. They are now close to agreeing on ending the impasse, said the people, who asked to remain unidentified because the information is private. Shell said it doesn’t comment on commercial matters, while Thebe was unavailable for comment. A potential deal could also fetch Thebe more than the initial $200 million at which the company valued its stake in 2022, said the people. At the time, Shell estimated it was worth less.  The agreement by both parties to sell — a process that Rothschild and Co. is running — would also give potential buyers certainty about the size of the deal as they place binding offers in the coming weeks, said the people.  The assets have attracted interest from Saudi Aramco, Abu Dhabi National Oil Co. and trading firm Trafigura, among others, Bloomberg reported in September. Shell and BP Plc jointly owned Sapref, South Africa’s largest oil refinery, and sold it for a symbolic 1 rand (five US cents) to the state-owned Central Energy Fund after the oil majors stopped processing there in 2022. A deal for Shell’s downstream assets would follow the sale of Petroliam Nasional Bhd’s 74% stake in Engen Ltd., South Africa’s biggest gas-station chain, to

Read More »

Tesla storage deployments more than double to 31.4 GWh in 2024

Tesla Megapack and Powerwall battery storage deployments jumped to 31.4 GWh last year, up from 14.7 GWh in 2023, the company said in an earnings presentation Wednesday. The company expects storage deployments will grow at least 50% this year. “We’re trying to ramp output of the stationary battery storage as quickly as possible,” Tesla CEO Elon Musk said during an analyst conference call. Gross profit at Tesla’s energy generation and storage segment increased to $2.6 billion in 2024 from $1.1 billion the year before as revenue climbed 67% to $10.1 billion from $6 billion in the same period, according to Tesla. The segment’s profit margin grew to 26.2% in 2024, up from 18.9% in 2023, driven by cost reductions, including benefits from Inflation Reduction Act tax credits, Tesla said. Revenue from the tax credits soared to $756 million last year from $115 million in 2023. Musk is bullish on energy storage. “It is something that enables far greater energy output to the grid than is currently possible,” Musk said. “This will drive the demand of stationary battery packs, and especially the grid scale ones, to insane [levels], basically as much demand as we could possibly make.” However, Tesla said its Powerwall and Megapack production is supply constrained as the company enters new markets and demand for energy storage products continues to grow. As part of its market risk disclosures, Tesla warned investors that its solar and storage business faces risks from changes in governmental rebates, tax credits and other financial incentives.  “These incentives may expire when the allocated funding is exhausted, reduced or terminated as renewable energy adoption rates increase, sometimes without warning,” Tesla said in a U.S. Securities and Exchange Commission filing on Thursday. “Likewise, in jurisdictions where net metering is currently available, our customers receive bill credits from

Read More »

HSE serves notice on Serica over Bruce gas release

Independent North Sea producer Serica Energy (AIM: SQZ) has been served with an improvement notice by the Health and Safety Executive (HSE) over the release of fuel gas at its Bruce platform. According to the HSE, Serica allowed the release of 196kg of hydrocarbons for over two and a half hours. In addition, the HSE said that Serica’s workers were put at risk due to the presence of gas. However, a spokesperson for Serica Energy said that nobody was injured due to the release. The HSE blamed the release on Serica’s maintenance arrangements for one the platform’s low-pressure booster compressors (LPBC A), which “failed such that the gas fuel supply system and joints were inadequately reinstated and tested” before the compressor was returned to service. The spokesperson added: “Safety remains Serica’s highest priority, and the company is working closely with the HSE to ensure lessons are learned from the incident and updated procedures are put in place to prevent a reoccurrence.” Serica has until 31 July to improve its processes. The Bruce platform is based 211 miles (340km) northeast of Aberdeen where it connects to Serica’s Bruce field. The platform is responsible for processing nearly 5% of the UK’s gas production and hosts a community of over 300 people, of whom 160 are offshore at any one time. The platform saw a short period of unscheduled downtime in 2024 due to a subsea intervention to ensure enhanced production reliability on the Rhum field. Serica chief executive Chris Cox has previously said that the company is looking to take advantage of the “untapped potential around the Bruce hub”. The group has committed to extend Bruce’s lifespan by an extra four years through to 2030, with Cox hinting that a drilling campaign around the field is likely in coming years. The group

Read More »

Testing UK tech overseas ‘is not good enough’, says NZTC

Aberdeen’s Net Zero Technology Centre (NZTC) said “it’s not good enough” that UK decommissioning technology is tested overseas when “we are meant to be the leaders in this space”. At an event in the Granite City’s Union Kirk, NZTC programme manager Lewis Harper argued that technology brought about by the “genius” in the domestic supply chain should have more opportunity to test in UK waters, rather than going to places like the US. Harper said technology developers often have seek opportunities to trial it: “I have to go on a flight to go 4,000 miles into a place where PPE is a shirt and Levi’s to get this technology into the well.” The not-for-profit organisation currently runs on funding from industry and the government, however, its decade of public funding is set to come to an end. In 2017, the NZTC was set up as part of the Aberdeen City Region Deal with £180 million from the UK and Scottish Governments. The organisation is readying an application for the next tranche of funding once this period ends and is set to deliver this to government in the coming weeks. This week the NZTC £500,000 in grant funding across the 11 firms that form the next cohort of its accelerator program for emerging companies in the clean energy sector. © Supplied by NSTAAn NZTC and NSTA decommissioning event in Aberdeen. “We’re not short of genius, just take a look around the room, 22 technologies on display here today and many more asking if there was space for them,” Harper added. However, he said that the “hard truth” is that “a breakthrough in the lab is not a breakthrough for our industry”. Harper argued that “proof” is what’s needed as the industry does not have a “gap of ideas”. He said: “Decades

Read More »

UK Oil Ruling Sets Up Growth Versus Climate Test for Government

A UK court ruled that two North Sea oil and gas fields must re-apply for environmental permits while allowing the developments to continue, setting up a crucial test of whether the new Labour government will prioritize economic growth or climate action.  The Court of Session in Edinburgh quashed the approvals for the Rosebank and Jackdaw projects — led by Equinor ASA and Shell Plc, respectively — which were unlawful because they hadn’t considered the climate impact of burning oil and gas pumped from the fields, according to a statement from the court on Thursday.  The fate of these projects has big implications for the UK North Sea, an aging oil and gas province where major new developments are dwindling. While the ruling was a victory for Greenpeace and Uplift, environmental groups that brought the legal action, it left open the possibility that the two fields could one day still come into production.  Shell and Equinor can continue working on the projects while the government considers their new environmental applications, although they will not be allowed to pump any oil and gas before a final decision is reached, according to the ruling.  It was unclear how long this decision could take. The UK government is still discussing how exactly to assess so-called Scope 3 emissions from burning a field’s oil and gas, a process that must be completed before the environmental impact assessments for Rosebank and Jackdaw can be reconsidered, according to the court ruling. “The government has already consulted on revised environmental guidance to take into account emissions from burning extracted oil and gas,” said a spokesperson for the Department of Energy Security and Net Zero. “We will respond to this consultation as soon as possible and developers will be able to apply for consents under this revised regime.” In

Read More »

Severe weather, accreditation reforms call for more flexible generation: panel

Dive Brief: Almost every major independent system operator in the U.S. has reformed its capacity accreditation process, or has started to in the wake of more frequent severe weather and the growing adoption of intermittent energy resources, Wood Mackenzie Senior Analyst Patrick Huang said during a Wednesday panel discussion. As a result of these reforms, most types of energy resources — from renewables and energy storage to thermal generation — have seen, or will see, their capacity accreditation downgraded, Huang said. The reforms have made resource planning more complicated, increasing demand for more flexible generation resources, according to Karl Meeusen, Wärtsilä’s director of markets, legislative and regulatory policy for North America. Dive Insight: Beyond rewarding utilities that adopt an “all-of-the-above” approach to energy resources, capacity accreditation reforms at U.S. ISOs could also spur utilities to take a closer look at more novel generation technologies previously considered too expensive, Meeusen said during Wednesday’s discussion, hosted by consulting firm Wood Mackenzie. Reciprocating internal combustion engines have not been adopted in large numbers by utilities in the past due to the relatively high upfront capital cost. But they could offer a greater return on investment within ISOs that have adopted capacity accreditation reforms, Meeusen said. The reciprocating engines can ramp up or down more quickly than gas turbines, which allows them to better complement intermittent resources like wind and solar, and their modular design comes with potential reliability benefits, he said. For example, a utility might choose to add 100 MW of capacity to its portfolio by deploying five reciprocating engines, or two peaker gas turbines. If one of the turbines goes down, the resulting outage will be significantly larger than if one of the five reciprocating engines goes offline, Meeusen explained. And that odds of all five engines going down at the same time are significantly

Read More »

Timeline of HPE’s $14 billion bid for Juniper

June 20, 2024: HPE-Juniper merger faces antitrust inquiry in UK An inquiry into HPE’s $14 billion takeover of Juniper Networks by the UK’s Competition and Markets Authority (CMA), a move that potentially could delay approval of the deal, will have little impact on data center managers, said one analyst with Info-Tech Research Group. Both companies were informed of the inquiry by the CMA, the UK’s principal antitrust regulator, on Wednesday. July 17, 2024: Juniper advances AI networking software Juniper continues to improve its AI-native networking platform while HPE’s $14 billion deal to acquire Juniper continues to advance through the requisite regulatory hurdles. The latest platform upgrades are designed to help enterprise customers better manage and support AI in their data centers. Juniper is also offering a new validated design for enterprise AI clusters and has opened a lab to certify enterprise AI data center projects. Aug. 01, 2024: EU clears HPE’s $14 billion Juniper acquisition Hewlett Packard Enterprise’s proposed acquisition of Juniper Networks took a big step forward this week as the European Commission unconditionally approved the buy. Next up: US and UK regulatory approval? Nov. 21, 2024: AI networking a focus of HPE’s Juniper deal as Justice Department concerns swirl HPE’s acquisition of Juniper has been under regulatory scrutiny ever since HPE announced the $14 billion deal in January. The proposed deal has passed muster with a number of world agencies so far, but there is reportedly some concern about it from the US Department of Justice.  Jan. 30, 2025: U.S. Justice Department sues to block HPE’s $14 billion Juniper buy After months of speculation, the U.S. Justice Department sued to block the $14 billion sale of Juniper Networks to HPE. The DOJ said reduced competition in the wireless market is the biggest problem with the proposed buy. “This proposed acquisition risks substantially lessening competition in

Read More »

Verizon brings AI suite to enterprise infrastructure customers

Verizon Business has launched AI Connect, an integrated suite of products designed to let businesses deploy generative artificial intelligence (AI) workloads at scale. Verizon is building its AI ecosystem by repurposing its existing infrastructure assets in its intelligent and programmable network, which consists of fiber, edge networking, and data center assets, along with its metro and long-haul fiber, ILEC and Fios footprint, its metro network build-out, lit and dark fiber services, and 5G network. Verizon believes that the drive toward real-time decision-making using inferencing will be what drives demand for additional computing power.  The company cites a McKinsey report, which states that 60% to 70% of AI workloads are expected to shift to real-time inference by 2030. That will create an urgent need for low-latency connectivity, compute and security at the edge beyond current demand.

Read More »

Trump’s 100% tariff threat on Taiwan chips raises cost, supply chain fears

“I don’t think we will see a near-term impact, as it takes years to build fabs, but by the end of the decade, the US share could rise by a few percentage points,” Gupta said. “It’s hard to give an exact number, but if I were to estimate, I’d say 14-15%. That isn’t a lot, but for the US to gain share, someone else must lose it, and while the US is making efforts, we see similar developments across Asia.” Yet, if Washington imposes smaller tariffs on imports from countries such as India, Japan, or Malaysia, Taiwanese chipmakers may shift production there rather than to the US, according to Stephen Ezell, vice president at the Information Technology and Innovation Foundation (ITIF). “Additionally, if the tariffs applied to Chinese chip exports were lower than for Taiwanese exports, Trump would be helping Chinese semiconductor manufacturers, whose exports to the US market would then be less expensive,” Ezell said in a recent note. “So, for this policy to have any real effect, Trump effectively must raise tariffs on all semiconductors, and that would likely lead to global tit-for-tat.” Enterprise IT faces tough choices If semiconductor tariffs drive up costs, enterprises will be forced to reassess spending priorities, potentially delaying or cutting investments in critical IT infrastructure. Rising chip prices could squeeze budgets for AI, cloud computing, and data center expansions, forcing businesses to make difficult trade-offs. “On the corporate side, hyperscalers and enterprise players need to brace for impact over the next 2-3 years if high tariffs continue along with the erosion of operating margin,” Faruqui said. “In addition, the boards and CEOs have to boldly make heavy CAPEX investment on US Soil via US and Asian partners as soon as possible to realize HVM on US soil and alleviate operating margin erosion due to

Read More »

New tweak to Linux kernel could cut data center power usage by up to 30%

When network traffic is heavy, it is most efficient, and delivers the best performance, to disable interrupts and run in polling mode. But when network traffic is light, interrupt-driven processing works best, he noted. “An implementation using only polling would waste a lot of resources/energy during times of light traffic. An implementation using only interrupts becomes inefficient during times of heavy traffic. … So the biggest energy savings arise when comparing to a high-performance always-polling implementation during times of light traffic,” Karsten said. “Our mechanism automatically detects [the amount of network traffic] and switches between polling and interrupt-driven to get the best of both worlds.” In the patch cover letter, Damato described the implementation of the new parameter in more detail, noting: “this delivery mode is efficient, because it avoids softIRQ execution interfering with application processing during busy periods. It can be used with blocking epoll_wait to conserve CPU cycles during idle periods. The effect of alternating between busy and idle periods is that performance (throughput and latency) is very close to full busy polling, while CPU utilization is lower and very close to interrupt mitigation.” Added Karsten: “At the nuts and bolts level, enabling the feature requires a small tweak to applications and the setting of a system configuration variable.” And although he can’t yet quantify the energy benefits of the technique (the 30% saving cited is best case), he said, “the biggest energy savings arise when comparing to a high-performance always-polling implementation during times of light traffic.”

Read More »

Macquarie’s Big Play in AI and HPC: $17+ Billion Invested Across Two Data Center Titans

Macquarie Asset Management (MAM) is making bold moves to position itself as a dominant force in the rapidly growing sectors of AI and high-performance computing (HPC). In a single week, MAM has made two pivotal investments in Applied Digital and Aligned Data Centers, committing over $17 billion to fuel innovation, growth, and capacity expansion in critical infrastructure markets across the Americas. Both deals highlight the immense demand for AI-ready and HPC-optimized data centers, underscoring the ongoing digitization of the global economy and the insatiable need for computing power to drive artificial intelligence (AI), machine learning (ML), and other resource-intensive workloads. Applied Digital Partners with Macquarie Asset Management for $5 Billion HPC Investment On January 14, Applied Digital Corporation announced what it billed as a transformative partnership with Macquarie to drive growth in HPC infrastructure. This agreement positions Applied Digital as a leading designer, builder, and operator of advanced data centers in the United States, catering to the growing demands of AI and HPC workloads. To account for the $5 billion commitment, funds managed by MAM will invest up to $900 million in Applied Digital’s Ellendale HPC Campus in North Dakota, with an additional $4.1 billion available for future HPC projects. This could support over 2 gigawatts (GW) of HPC data center development. MAM is a global asset manager overseeing approximately $633.7 billion in assets. Part of Australia-based Macquarie Group, it specializes in diverse investment solutions across real assets, real estate, credit, and equities. With its new landmark agreement with Macquarie, Applied Digital feels it is poised to redefine the HPC data center landscape, ensuring its place as a leader in the AI and HPC revolution. In terms of ownership structure, MAM’s investment here includes perpetual preferred equity and a 15% common equity interest in Applied Digital’s HPC business segment, allowing

Read More »

Data Center Frontier Announces Editorial Advisory Board for 2025 DCF Trends Summit

Nashua, NH – Data Center Frontier is excited to announce its Editorial Advisory Board for the second annual Data Center Frontier Trends Summit (DCF Trends Summit), taking place August 26-28, 2025, at the Hyatt Regency Reston in Reston, Virginia.  The 2025 DCF Trends Summit Editorial Advisory Board includes distinguished leaders from hyperscale and colocation operators, power and cooling solutions companies, IT and interconnection providers, and design/build/construction specialists. This year’s board has grown to include 15 esteemed executives, reflecting DCF’s commitment to providing comprehensive and diverse insights for the data center sector.  This visionary group of leaders, representing the critical facets of the data center ecosystem, will guide the event’s content and programming to address the most pressing trends impacting the industry. The group’s unparalleled expertise ensures the Summit will deliver essential insights to help data center stakeholders make informed decisions in the industry’s rapidly evolving landscape.  The Editorial Advisory Board for the 2025 DCF Trends Summit includes:  Scott Bergs, CEO, Dark Fiber & Infrastructure (DF&I) Steven Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric Dan Crosby, CEO, Legend Energy Advisors Rob Coyle, Director of Technical Programs, Open Compute Project (OCP) Foundation Chris Downie, CEO, Flexential Sean Farney, VP of Data Centers, Jones Lang LaSalle (JLL) Mark Freeman, VP of Marketing, Vantage Data Centers Steven Lim, SVP of Marketing & GTM Strategy, NTT Global Data Centers David McCall, VP of Innovation, QTS Data Centers Nancy Novak, Chief Innovation Officer, Compass Datacenters Karen Petersburg, VP of Construction & Development, PowerHouse Data Centers Tara Risser, Chief Business Officer, Cologix Stefan Raab, Sr. Director, Business Development – AMER, Equinix Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers Brenda Van der Steen, VP of Global Growth Marketing, Digital Realty “The Editorial Advisory Board for the second annual Data Center Frontier Trends Summit is

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »