
Roadmap to Becoming a Data Scientist, Part 4: Advanced Machine Learning


Introduction

Data science is undoubtedly one of the most fascinating fields today. Following significant breakthroughs in machine learning about a decade ago, data science has surged in popularity within the tech community. Each year, we witness increasingly powerful tools that once seemed unimaginable. Innovations such as the Transformer architecture, ChatGPT, the Retrieval-Augmented Generation (RAG) framework, and state-of-the-art Computer Vision models — including GANs — have had a profound impact on our world.

However, with the abundance of tools and the ongoing hype surrounding AI, it can be overwhelming — especially for beginners — to determine which skills to prioritize when aiming for a career in data science. Moreover, this field is highly demanding, requiring substantial dedication and perseverance.

The first three parts of this series outlined the necessary skills to become a data scientist in three key areas: math, software engineering, and machine learning. While knowledge of classical Machine Learning and neural network algorithms is an excellent starting point for aspiring data specialists, there are still many important topics in machine learning that must be mastered to work on more advanced projects.

This article will focus on the advanced machine learning skills needed to progress further in Data Science. Whether pursuing this path is a worthwhile choice based on your background and other factors will be discussed in a separate article.

The importance of learning the evolution of methods in machine learning

The section below provides information about the evolution of methods in natural language processing (NLP).

In contrast to previous articles in this series, I have decided to change the format in which I present the necessary skills for aspiring data scientists. Instead of directly listing specific competencies to develop and the motivation behind mastering them, I will briefly outline the most important approaches, presenting them in chronological order as they have been developed and used over the past decades in machine learning.

The reason is that I believe it is crucial to study these algorithms from the very beginning. In machine learning, many new methods are built upon older approaches, which is especially true for NLP and computer vision.

For example, jumping directly into the implementation details of modern large language models (LLMs) without any preliminary knowledge may make it very difficult for beginners to grasp the motivation and underlying ideas of specific mechanisms.

Given this, in the next two sections, I will highlight in bold the key concepts that should be studied.

# 04. NLP

Natural language processing (NLP) is a broad field that focuses on processing textual information. Machine learning algorithms cannot work directly with raw text, which is why text is usually preprocessed and converted into numerical vectors that are then fed into neural networks.

Before being converted into vectors, words undergo preprocessing, which includes simple techniques such as parsing, stemming, lemmatization, normalization, or removing stop words. After preprocessing, the resulting text is encoded into tokens. Tokens represent the smallest textual elements in a collection of documents. Generally, a token can be a part of a word, a sequence of symbols, or an individual symbol. Ultimately, tokens are converted into numerical vectors.

NLP roadmap

The bag of words method is the most basic way to encode tokens, focusing on counting the frequency of tokens in each document. However, in practice, this is usually not sufficient, as it is also necessary to account for token importance — a concept introduced in the TF-IDF and BM25 methods. Although TF-IDF improves upon the naive counting approach of bag of words, researchers eventually developed a completely new approach called embeddings.
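To make the difference concrete, here is a minimal sketch of both encodings using scikit-learn (assumed installed); the three-document corpus is purely illustrative.

```python
# Bag of words vs. TF-IDF on a toy corpus (scikit-learn assumed installed).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "data science is fun",
    "machine learning is part of data science",
    "deep learning extends machine learning",
]

# Bag of words: each document becomes a vector of raw token counts.
bow = CountVectorizer()
print(bow.fit_transform(corpus).toarray())
print(bow.get_feature_names_out())

# TF-IDF: counts are reweighted so that tokens frequent in one document
# but rare across the collection receive higher weights.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray().round(2))
```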

Embeddings are numerical vectors whose components preserve the semantic meanings of words. Because of this, embeddings play a crucial role in NLP, enabling models to be trained on text or used for inference. Additionally, embeddings can be used to compare text similarity, allowing for the retrieval of the most relevant documents from a collection.

Embeddings can also be used to encode other unstructured data, including images, audio, and videos.
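For intuition, the sketch below compares embeddings with cosine similarity using NumPy; the 4-dimensional vectors are made up for illustration, whereas real embeddings come from a trained model and typically have hundreds of dimensions.

```python
# Cosine similarity between toy "embeddings" (values are illustrative).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

king = np.array([0.80, 0.65, 0.10, 0.05])
queen = np.array([0.75, 0.70, 0.12, 0.04])
apple = np.array([0.05, 0.10, 0.90, 0.70])

print(cosine_similarity(king, queen))  # high: semantically close words
print(cosine_similarity(king, apple))  # low: unrelated concepts
```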

As a field, NLP has been evolving rapidly over the last 10–20 years to efficiently solve various text-related problems. Complex tasks like text translation and text generation were initially addressed using recurrent neural networks (RNNs), which introduced the concept of memory, allowing neural networks to capture and retain key contextual information in long documents.

Although RNN performance gradually improved, it remained suboptimal for certain tasks. Moreover, RNNs are relatively slow, and their sequential prediction process does not allow for parallelization during training and inference, making them less efficient.

These shortcomings were later addressed by the Transformer architecture, whose attention mechanism removes recurrence and processes all tokens in parallel. The original Transformer can be decomposed into two separate modules: an encoder and a decoder, which form the foundations of BERT and GPT, respectively. Both of these underpin most state-of-the-art models used today to solve various NLP problems. Understanding their principles is valuable knowledge that will help learners advance further when studying or working with other large language models (LLMs).

Transformer architecture
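For a first intuition of how Transformers process all tokens at once, here is a minimal NumPy sketch of scaled dot-product attention (a single head, no masking, toy dimensions); it is a simplification for illustration, not a faithful implementation of the full architecture.

```python
# Scaled dot-product attention: every output position is a weighted mix of
# all value vectors, with weights given by query-key similarity.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    return softmax(scores) @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 token embeddings, 8 dimensions each
print(attention(x, x, x).shape)      # self-attention output: (4, 8)
```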

When it comes to LLMs, I strongly recommend studying the evolution of at least the first three GPT models, as they have had a significant impact on the AI world we know today. In particular, I would like to highlight the concepts of zero-shot learning, demonstrated in GPT-2, and few-shot learning, introduced in GPT-3, which enable LLMs to solve text generation tasks without explicitly receiving any training examples for them.

Another important technique developed in recent years is retrieval-augmented generation (RAG). The main limitation of LLMs is that they are only aware of the context used during their training. As a result, they lack knowledge of any information beyond their training data.

A good example of this limitation is the first version of the ChatGPT model, which was trained on data up to the year 2022 and had no knowledge of events that occurred from 2023 onward.

To address this limitation, OpenAI researchers developed a RAG pipeline, which includes a constantly updated database containing new information from external sources. When ChatGPT is given a task that requires external knowledge, it queries the database to retrieve the most relevant context and integrates it into the final prompt sent to the machine learning model.

Example of a RAG pipeline

The retriever converts the input prompt into an embedding, which is then used to query a vector database. The database returns the most relevant context based on the similarity to the embedding. This retrieved context is then combined with the original prompt and passed to a generative model. The model processes both the initial prompt and the additional context to generate a more informed and contextually accurate response.
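The sketch below mirrors this pipeline in schematic form; `embed` is a hypothetical stand-in for a real embedding model, and the vector database is just an in-memory list.

```python
# Schematic RAG retrieval step (NumPy only; `embed` is a hypothetical
# placeholder for a real embedding model).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

documents = [
    "Event A happened in 2023 ...",
    "Event B happened in 2024 ...",
]
index = [(doc, embed(doc)) for doc in documents]   # offline indexing step

def retrieve(prompt: str, k: int = 1) -> list[str]:
    q = embed(prompt)
    ranked = sorted(index, key=lambda pair: -float(q @ pair[1]))
    return [doc for doc, _ in ranked[:k]]

prompt = "What happened in 2024?"
context = retrieve(prompt)
augmented_prompt = f"Context: {context}\n\nQuestion: {prompt}"
# `augmented_prompt` would now be sent to the generative model.
```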

In the modern era, LLM development has led to models with millions or even billions of parameters. As a consequence, the overall size of these models may exceed the hardware limitations of standard computers or small portable devices, which come with many constraints.

This is where optimization techniques become particularly useful, allowing LLMs to be compressed without significantly compromising their performance. The most commonly used techniques today include distillation, quantization, and pruning.

The goal of distillation is to create a smaller model that can imitate a larger one. In practice, this means that if a large model makes a prediction, the smaller model is expected to produce a similar result.

Quantization is the process of reducing the memory required to store numerical values representing a model’s weights.

Pruning refers to discarding the least important weights of a model.
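As a rough illustration of quantization, the NumPy sketch below stores weights as 8-bit integers plus a single float scale, cutting memory roughly fourfold at the cost of a small rounding error; real quantization schemes are more sophisticated (e.g., per-channel scales).

```python
# Symmetric int8 weight quantization: float32 -> int8 plus one scale factor.
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights).max() / 127.0               # one scale per tensor
q_weights = np.round(weights / scale).astype(np.int8)
deq_weights = q_weights.astype(np.float32) * scale  # reconstructed at inference

print("max abs error:", np.abs(weights - deq_weights).max())
print("bytes before:", weights.nbytes, "after:", q_weights.nbytes)
```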

Fine-tuning

Regardless of the area in which you wish to specialize, knowledge of fine-tuning is a must-have skill! Fine-tuning is a powerful concept that allows you to efficiently adapt a pre-trained model to a new task.

Fine-tuning is especially useful when working with very large models. For example, imagine you want to use BERT to perform semantic analysis on a specific dataset. While BERT is trained on general data, it might not fully understand the context of your dataset. At the same time, training BERT from scratch for your specific task would require a massive amount of resources.

Here is where fine-tuning comes in: it involves taking a pre-trained BERT (or another model) and freezing some of its layers (usually those at the beginning). As a result, BERT is retrained, but this time only on the new dataset provided. Since BERT updates only a subset of its weights and the new dataset is likely much smaller than the original one BERT was trained on, fine-tuning becomes a very efficient technique for adapting BERT’s rich knowledge to a specific domain.
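A minimal sketch of this layer-freezing idea with the Hugging Face Transformers library (assumed installed along with PyTorch) is shown below; freezing the first 8 of BERT's 12 encoder layers is an illustrative choice, not a fixed rule.

```python
# Freeze the lower layers of BERT so that only the top layers and the
# classification head are updated during fine-tuning.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:      # first 8 of 12 encoder layers
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")   # far fewer than the full model
```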

Fine-tuning is widely used not only in NLP but also across many other domains.

# 05. Computer vision

As the name suggests, computer vision (CV) involves analyzing images and videos using machine learning. The most common tasks include image classification, object detection, image segmentation, and generation.

Most CV algorithms are based on neural networks, so it is essential to understand how they work in detail. In particular, CV uses a special type of network called convolutional neural networks (CNNs). These are similar to fully connected networks, except that they typically begin with a set of specialized mathematical operations called convolutions.

Computer vision roadmap

In simple terms, convolutions act as filters, enabling the model to extract the most important features from an image, which are then passed to fully connected layers for further analysis.
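To make this concrete, here is a minimal PyTorch sketch of such a network (toy dimensions, assuming 28x28 grayscale inputs); it shows the typical convolution, pooling, and fully connected stages rather than any particular published architecture.

```python
# A tiny CNN: convolution extracts local features, pooling shrinks the
# feature maps, and a linear layer produces class scores.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

logits = TinyCNN()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```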

The next step is to study the most popular CNN architectures for classification tasks, such as AlexNet, VGG, Inception, and ResNet, many of which rose to prominence through the ImageNet competition.

Speaking of the object detection task, the YOLO algorithm is a clear winner. It is not necessary to study all of the dozens of versions of YOLO. In reality, going through the original paper of the first YOLO should be sufficient to understand how a relatively difficult problem like object detection is elegantly transformed into both classification and regression problems. This approach in YOLO also provides a nice intuition on how more complex CV tasks can be reformulated in simpler terms.

While there are many architectures for performing image segmentation, I would strongly recommend learning about UNet, which introduces an encoder-decoder architecture.

Finally, image generation is probably one of the most challenging tasks in CV. Personally, I consider it an optional topic for learners, as it involves many advanced concepts. Nevertheless, gaining a high-level intuition of how generative adversarial networks (GANs) generate images is a good way to broaden one’s horizons.

In some problems, the training data might not be enough to build a performant model. In such cases, the data augmentation technique is commonly used. It involves the artificial generation of training data from already existing data (images). By feeding the model more diverse data, it becomes capable of learning and recognizing more patterns.
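A short sketch of typical image augmentations with torchvision (assumed installed): each epoch the model sees randomly perturbed variants of the same images, which effectively diversifies the training data.

```python
# A typical augmentation pipeline for training images.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),            # mirror half the images
    transforms.RandomRotation(degrees=15),             # small random rotations
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Usually passed as `transform=augment` when constructing the training dataset.
```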

# 06. Other areas

It would be very hard to present in detail the Roadmaps for all existing machine learning domains in a single article. That is why, in this section, I would like to briefly list and explain some of the other most popular areas in data science worth exploring.

First of all, recommender systems (RecSys) have gained a lot of popularity in recent years. They are increasingly implemented in online shops, social networks, and streaming services. The key idea of most algorithms is to take a large initial matrix of all users and items and decompose it into a product of several matrices in a way that associates every user and every item with a high-dimensional embedding. This approach is very flexible, as it then allows different types of comparison operations on embeddings to find the most relevant items for a given user. Moreover, it is much faster to perform analysis on the small decomposed matrices than on the original matrix, which usually has huge dimensions.

Matrix decomposition in recommender systems is one of the most commonly used methods
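As a rough sketch of this idea, the toy example below factorizes a small user-item ratings matrix with truncated SVD (NumPy only); the zeros stand in for unobserved ratings, which production systems handle more carefully (e.g., with alternating least squares).

```python
# Matrix factorization for recommendations via truncated SVD.
import numpy as np

ratings = np.array([        # rows = users, columns = items
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2                               # dimensionality of the embeddings
user_emb = U[:, :k] * s[:k]         # one k-dimensional embedding per user
item_emb = Vt[:k, :].T              # one k-dimensional embedding per item

# Predicted affinity of user 0 for every item: embedding dot products.
print(user_emb[0] @ item_emb.T)
```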

Ranking often goes hand in hand with RecSys. When a RecSys has identified a set of the most relevant items for the user, ranking algorithms are used to sort them to determine the order in which they will be shown or proposed to the user. A good example of their usage is search engines, which filter query results from top to bottom on a web page.

Closely related to ranking, there is also a matching problem that aims to optimally map objects from two sets, A and B, in a way that, on average, every object pair (a, b) is mapped “well” according to a matching criterion. A use case example might include distributing a group of students to different university disciplines, where the number of spots in each class is limited.

Clustering is an unsupervised machine learning task whose objective is to split a dataset into several regions (clusters), with each dataset object belonging to one of these clusters. The splitting criteria can vary depending on the task. Clustering is useful because it allows for grouping similar objects together. Moreover, further analysis can be applied to treat objects in each cluster separately.

The goal of clustering is to group dataset objects (on the left) into several categories (on the right) based on their similarity.
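A minimal sketch with scikit-learn's k-means (assumed installed) on two synthetic groups of points:

```python
# k-means groups points into a chosen number of clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),  # first blob
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),  # second blob
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster id of each object
print(kmeans.cluster_centers_)                  # one center per cluster
```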

Dimensionality reduction is another unsupervised problem, where the goal is to compress an input dataset. When the dimensionality of the dataset is large, it takes more time and resources for machine learning algorithms to analyze it. By identifying and removing noisy dataset features or those that do not provide much valuable information, the data analysis process becomes considerably easier.
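For illustration, the sketch below uses scikit-learn's PCA, one of the standard dimensionality reduction methods, to project 10-dimensional data onto its 2 most informative directions:

```python
# PCA keeps the directions of highest variance and drops the rest.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(200, 10)            # 200 objects with 10 features

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (200, 2)
print(pca.explained_variance_ratio_)     # variance kept by each component
```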

Similarity search is an area that focuses on designing algorithms and data structures (indexes) to optimize searches in a large database of embeddings (vector database). More precisely, given an input embedding and a vector database, the goal is to approximately find the most similar embedding in the database relative to the input embedding.

The goal of similarity search is to approximately find the most similar embedding in a vector database relative to a query embedding.

The word “approximately” means that the search is not guaranteed to be 100% precise. Nevertheless, this is the main idea behind similarity search algorithms — sacrificing a bit of accuracy in exchange for significant gains in prediction speed or data compression.
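For intuition, the brute-force baseline below computes exact cosine similarities with NumPy; dedicated approximate indexes exist precisely to avoid scanning the whole database like this on every query.

```python
# Exact top-k similarity search by brute force over normalized embeddings.
import numpy as np

rng = np.random.default_rng(42)
database = rng.normal(size=(10_000, 64))             # stored embeddings
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = rng.normal(size=64)
query /= np.linalg.norm(query)

scores = database @ query                            # cosine similarities
top_k = np.argsort(-scores)[:5]                      # best 5 matches
print(top_k, scores[top_k])
```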

Time series analysis involves studying the behavior of a target variable over time. This problem can be solved using classical tabular algorithms. However, the presence of time introduces new factors that cannot be captured by standard algorithms. For instance:

  • the target variable can have an overall trend, where in the long term its values increase or decrease (e.g., the average yearly temperature rising due to global warming).
  • the target variable can have seasonality, which makes its values change depending on the period of the year (e.g., temperature is lower in winter and higher in summer).

Most time series models take both of these factors into account. They are widely used in financial, stock market, and demographic analysis.

Time series data is often decomposed into several components, including trend and seasonality.
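A short sketch of such a decomposition using statsmodels and pandas (assumed installed), applied to a synthetic monthly series built from a known trend and a yearly seasonal pattern:

```python
# Additive decomposition of a synthetic monthly series into trend,
# seasonality, and residual components.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

months = pd.date_range("2015-01-01", periods=96, freq="MS")
t = np.arange(96)
values = (
    0.05 * t                                         # upward trend
    + 2.0 * np.sin(2 * np.pi * t / 12)               # yearly seasonality
    + np.random.default_rng(0).normal(0.0, 0.3, 96)  # noise
)
series = pd.Series(values, index=months)

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())    # recovered long-term trend
print(result.seasonal.head(12))        # recovered yearly pattern
```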

Another advanced area I would recommend exploring is reinforcement learning, which fundamentally changes the algorithm design compared to classical machine learning. In simple terms, its goal is to train an agent in an environment to make optimal decisions based on a reward system (also known as the “trial and error approach”). By taking an action, the agent receives a reward, which helps it understand whether the chosen action had a positive or negative effect. After that, the agent slightly adjusts its strategy, and the entire cycle repeats.

Reinforcement learning framework. Image adapted by the author. Source: Reinforcement Learning: An Introduction (Second Edition) by Richard S. Sutton and Andrew G. Barto.
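To make the loop concrete, here is a tabular Q-learning sketch on a toy one-dimensional corridor (pure NumPy); it is a minimal illustration of the trial-and-error cycle described above, not a production reinforcement learning setup.

```python
# Tabular Q-learning: the agent starts in cell 0 and earns a reward of 1
# for reaching cell 4; actions are 0 = move left, 1 = move right.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))        # value estimate per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:           # episode ends at the goal cell
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Move the estimate toward the reward plus discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.round(2))   # after training, "right" has the higher value in every state
```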

Reinforcement learning is particularly popular in complex environments where classical algorithms are not capable of solving a problem. Given the complexity of reinforcement learning algorithms and the computational resources they require, this area is not yet fully mature, but it has high potential to gain even more popularity in the future.

Main applications of reinforcement learning

Currently the most popular applications are:

  • Games. Existing approaches can design optimal game strategies and outperform humans. The most well-known examples are chess and Go.
  • Robotics. Advanced algorithms can be incorporated into robots to help them move, carry objects or complete routine tasks at home.
  • Autopilot. Reinforcement learning methods can be developed to automatically drive cars, control helicopters or drones.

Conclusion

This article was a logical continuation of the previous part and expanded the skill set needed to become a data scientist. While most of the mentioned topics require time to master, they can add significant value to your portfolio. This is especially true for the NLP and CV domains, which are in high demand today.

After reaching a high level of expertise in data science, it is still crucial to stay motivated and consistently push yourself to learn new topics and explore emerging algorithms.

Data science is a constantly evolving field, and in the coming years, we might witness the development of new state-of-the-art approaches that we could not have imagined in the past.

Resources

All images are by the author unless noted otherwise.
