
Roadmap to Becoming a Data Scientist, Part 4: Advanced Machine Learning


Introduction

Data science is undoubtedly one of the most fascinating fields today. Following significant breakthroughs in machine learning about a decade ago, data science has surged in popularity within the tech community. Each year, we witness increasingly powerful tools that once seemed unimaginable. Innovations such as the Transformer architecture, ChatGPT, the Retrieval-Augmented Generation (RAG) framework, and state-of-the-art Computer Vision models — including GANs — have had a profound impact on our world.

However, with the abundance of tools and the ongoing hype surrounding AI, it can be overwhelming — especially for beginners — to determine which skills to prioritize when aiming for a career in data science. Moreover, this field is highly demanding, requiring substantial dedication and perseverance.

The first three parts of this series outlined the necessary skills to become a data scientist in three key areas: math, software engineering, and machine learning. While knowledge of classical Machine Learning and neural network algorithms is an excellent starting point for aspiring data specialists, there are still many important topics in machine learning that must be mastered to work on more advanced projects.

This article will focus on the advanced machine learning skills needed to move beyond the fundamentals covered earlier in this series. Whether pursuing this path is a worthwhile choice based on your background and other factors will be discussed in a separate article.

The importance of learning the evolution of methods in machine learning

The section below provides information about the evolution of methods in natural language processing (NLP).

In contrast to previous articles in this series, I have decided to change the format in which I present the necessary skills for aspiring data scientists. Instead of directly listing specific competencies to develop and the motivation behind mastering them, I will briefly outline the most important approaches, presenting them in chronological order as they have been developed and used over the past decades in machine learning.

The reason is that I believe it is crucial to study these algorithms from the very beginning. In machine learning, many new methods are built upon older approaches, which is especially true for NLP and computer vision.

For example, jumping directly into the implementation details of modern large language models (LLMs) without any preliminary knowledge may make it very difficult for beginners to grasp the motivation and underlying ideas of specific mechanisms.

Given this, in the next two sections, I will highlight in bold the key concepts that should be studied.

# 04. NLP

Natural language processing (NLP) is a broad field that focuses on processing textual information. Machine learning algorithms cannot work directly with raw text, which is why text is usually preprocessed and converted into numerical vectors that are then fed into neural networks.

Before being converted into vectors, words undergo preprocessing, which includes simple techniques such as parsing, stemming, lemmatization, normalization, or removing stop words. After preprocessing, the resulting text is encoded into tokens. Tokens represent the smallest textual elements in a collection of documents. Generally, a token can be a part of a word, a sequence of symbols, or an individual symbol. Ultimately, tokens are converted into numerical vectors.
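To make these steps concrete, here is a minimal, illustrative sketch in pure Python. The stop-word list and the suffix-stripping "stemmer" are toy assumptions, not a production pipeline:

```python
import re

# Toy stop-word list and suffix rules -- illustrative assumptions only.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to"}
SUFFIXES = ("ing", "ed", "s")

def preprocess(text: str) -> list[str]:
    """Normalize, tokenize, remove stop words, and apply naive stemming."""
    text = text.lower()                          # normalization
    tokens = re.findall(r"[a-z]+", text)         # simple word-level tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]
    stemmed = []
    for t in tokens:
        for suf in SUFFIXES:                     # naive suffix stripping ("stemming")
            if t.endswith(suf) and len(t) > len(suf) + 2:
                t = t[: -len(suf)]
                break
        stemmed.append(t)
    return stemmed

print(preprocess("The cats are chasing the mice and jumping over the fences"))
# ['cat', 'chas', 'mice', 'jump', 'over', 'fence']
```

Real pipelines rely on proper stemmers or lemmatizers and subword tokenizers, but the sequence of steps is the same.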

NLP roadmap

The bag of words method is the most basic way to encode tokens, focusing on counting the frequency of tokens in each document. However, in practice, this is usually not sufficient, as it is also necessary to account for token importance — a concept introduced in the TF-IDF and BM25 methods. While TF-IDF improves upon the naive counting approach of bag of words, researchers eventually went a step further and developed a completely new approach called embeddings.
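Before moving on to embeddings, here is a minimal sketch of the counting-based encodings just described, assuming scikit-learn is installed:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats are pets",
]

# Bag of words: raw token counts per document.
bow = CountVectorizer()
counts = bow.fit_transform(corpus)
print(bow.get_feature_names_out())
print(counts.toarray())

# TF-IDF: counts reweighted by how informative each token is across the corpus.
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(corpus)
print(weights.toarray().round(2))
```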

Embeddings are numerical vectors whose components preserve the semantic meanings of words. Because of this, embeddings play a crucial role in NLP, enabling models to be trained on input data or used for inference. Additionally, embeddings can be used to compare text similarity, allowing for the retrieval of the most relevant documents from a collection.

Embeddings can also be used to encode other unstructured data, including images, audio, and videos.
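As an illustration of how embeddings enable similarity comparison, here is a minimal NumPy sketch; the vectors below are made up for illustration (real embedding models produce hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 means similar direction (similar meaning)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings.
king  = np.array([0.90, 0.10, 0.80, 0.30])
queen = np.array([0.85, 0.15, 0.75, 0.40])
apple = np.array([0.10, 0.90, 0.20, 0.70])

print(cosine_similarity(king, queen))  # high  -> semantically close
print(cosine_similarity(king, apple))  # lower -> semantically distant
```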

As a field, NLP has been evolving rapidly over the last 10–20 years to efficiently solve various text-related problems. Complex tasks like text translation and text generation were initially addressed using recurrent neural networks (RNNs), which introduced the concept of memory, allowing neural networks to capture and retain key contextual information in long documents.

Although RNN performance gradually improved, it remained suboptimal for certain tasks. Moreover, RNNs are relatively slow, and their sequential prediction process does not allow for parallelization during training and inference, making them less efficient.
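To make the notion of "memory" concrete, here is a minimal PyTorch sketch (assuming PyTorch is installed); the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# One batch of 2 sequences, each with 5 time steps of 8-dimensional token vectors.
x = torch.randn(2, 5, 8)

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# The hidden state is updated step by step: each output depends on all previous
# steps, which is exactly what prevents parallelization across the time dimension.
outputs, last_hidden = rnn(x)
print(outputs.shape)      # torch.Size([2, 5, 16]) -- one hidden vector per time step
print(last_hidden.shape)  # torch.Size([1, 2, 16]) -- final "memory" of each sequence
```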

These limitations eventually led to the Transformer architecture, which replaces recurrence with attention mechanisms and processes tokens in parallel. The original Transformer can be decomposed into two separate modules, an encoder and a decoder, which gave rise to BERT and GPT respectively. Both of these form the foundation of the most state-of-the-art models used today to solve various NLP problems. Understanding their principles is valuable knowledge that will help learners advance further when studying or working with other large language models (LLMs).

Transformer architecture

When it comes to LLMs, I strongly recommend studying the evolution of at least the first three GPT models, as they have had a significant impact on the AI world we know today. In particular, I would like to highlight the concepts of zero-shot and few-shot learning, explored in the GPT-2 and GPT-3 papers, which enable LLMs to solve text generation tasks without explicitly receiving any (or more than a few) training examples for them.

Another important technique developed in recent years is retrieval-augmented generation (RAG). The main limitation of LLMs is that they are only aware of the context used during their training. As a result, they lack knowledge of any information beyond their training data.

Example of a RAG pipeline

The retriever converts the input prompt into an embedding, which is then used to query a vector database. The database returns the most relevant context based on the similarity to the embedding. This retrieved context is then combined with the original prompt and passed to a generative model. The model processes both the initial prompt and the additional context to generate a more informed and contextually accurate response.

A good example of this limitation is the first version of the ChatGPT model, which was trained on data up to the year 2022 and had no knowledge of events that occurred from 2023 onward.

To address this limitation, a RAG pipeline can be built around the model: a constantly updated database stores new information from external sources. When ChatGPT is given a task that requires external knowledge, it queries this database to retrieve the most relevant context and integrates it into the final prompt sent to the machine learning model.
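A schematic sketch of this pipeline is shown below; `embed`, `vector_db`, and `llm` are hypothetical placeholders rather than a specific library's API:

```python
# A schematic RAG loop. `embed`, `vector_db`, and `llm` stand in for a real
# embedding model, vector database, and generative model respectively.

def answer_with_rag(prompt: str, vector_db, llm, embed, top_k: int = 3) -> str:
    # 1. Retriever: turn the prompt into an embedding.
    query_embedding = embed(prompt)

    # 2. Query the vector database for the most similar stored documents.
    relevant_chunks = vector_db.search(query_embedding, top_k=top_k)

    # 3. Combine the retrieved context with the original prompt.
    context = "\n".join(relevant_chunks)
    augmented_prompt = (
        "Answer the question using the context below.\n"
        f"Context:\n{context}\n\nQuestion: {prompt}"
    )

    # 4. The generative model produces a context-aware response.
    return llm.generate(augmented_prompt)
```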

In the modern era, LLM development has led to models with millions or even billions of parameters. As a consequence, the overall size of these models may exceed the hardware limitations of standard computers or small portable devices, which come with many constraints.

This is where optimization techniques become particularly useful, allowing LLMs to be compressed without significantly compromising their performance. The most commonly used techniques today include distillation, quantization, and pruning.

The goal of distillation is to create a smaller model that can imitate a larger one. In practice, this means that if a large model makes a prediction, the smaller model is expected to produce a similar result.

Quantization is the process of reducing the memory required to store numerical values representing a model’s weights.

Pruning refers to discarding the least important weights of a model.
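As a concrete illustration of the first of these techniques, here is a minimal PyTorch sketch of a standard distillation loss; the temperature and weighting values are illustrative assumptions, and quantization or pruning are usually applied with dedicated library tooling rather than hand-written code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend ordinary cross-entropy with a term pulling the student's softened
    predictions toward the teacher's (the classic distillation recipe)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between softened distributions, scaled by T^2 as is customary.
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy example: a batch of 4 samples with 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```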

Fine-tuning

Regardless of the area in which you wish to specialize, knowledge of fine-tuning is a must-have skill! Fine-tuning is a powerful concept that allows you to efficiently adapt a pre-trained model to a new task.

Fine-tuning is especially useful when working with very large models. For example, imagine you want to use BERT to perform semantic analysis on a specific dataset. While BERT is trained on general data, it might not fully understand the context of your dataset. At the same time, training BERT from scratch for your specific task would require a massive amount of resources.

Here is where fine-tuning comes in: it involves taking a pre-trained BERT (or another model) and freezing some of its layers (usually those at the beginning). As a result, BERT is retrained, but this time only on the new dataset provided. Since BERT updates only a subset of its weights and the new dataset is likely much smaller than the original one BERT was trained on, fine-tuning becomes a very efficient technique for adapting BERT’s rich knowledge to a specific domain.
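A minimal sketch of this idea, assuming the Hugging Face transformers library is installed; the checkpoint name and the choice of freezing the first 8 encoder layers are illustrative assumptions:

```python
from transformers import AutoModelForSequenceClassification

# Load a pre-trained BERT with a fresh classification head (2 labels as an example).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the embeddings and the first 8 of the 12 encoder layers, so only the
# top layers and the classifier are updated on the new dataset.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
# Training then proceeds as usual (e.g. with the Trainer API or a custom loop).
```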

Fine-tuning is widely used not only in NLP but also across many other domains.

# 05. Computer vision

As the name suggests, computer vision (CV) involves analyzing images and videos using machine learning. The most common tasks include image classification, object detection, image segmentation, and generation.

Most CV algorithms are based on neural networks, so it is essential to understand how they work in detail. In particular, CV uses a special type of network called convolutional neural networks (CNNs). These are similar to fully connected networks, except that they typically begin with a set of specialized mathematical operations called convolutions.

Computer vision roadmap

In simple terms, convolutions act as filters, enabling the model to extract the most important features from an image, which are then passed to fully connected layers for further analysis.
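A minimal PyTorch sketch of this convolution-then-classify structure; the input size (32x32 RGB images) and the number of classes are illustrative:

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Convolutions extract local features; fully connected layers classify them."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 input channels (RGB)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCNN()
images = torch.randn(4, 3, 32, 32)   # a toy batch of four 32x32 RGB images
print(model(images).shape)           # torch.Size([4, 10])
```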

The next step is to study the most popular CNN architectures for classification tasks, such as AlexNet, VGG, Inception, and ResNet, most of which rose to prominence through the ImageNet competition.

Speaking of the object detection task, the YOLO algorithm is a clear winner. It is not necessary to study all of the dozens of versions of YOLO. In reality, going through the original paper of the first YOLO should be sufficient to understand how a relatively difficult problem like object detection is elegantly transformed into both classification and regression problems. This approach in YOLO also provides a nice intuition on how more complex CV tasks can be reformulated in simpler terms.

While there are many architectures for performing image segmentation, I would strongly recommend learning about UNet, which introduces an encoder-decoder architecture.

Finally, image generation is probably one of the most challenging tasks in CV. Personally, I consider it an optional topic for learners, as it involves many advanced concepts. Nevertheless, gaining a high-level intuition of how generative adversarial networks (GANs) function to generate images is a good way to broaden one’s horizons.

In some problems, the training data might not be enough to build a performant model. In such cases, the data augmentation technique is commonly used. It involves the artificial generation of training data from already existing data (images). By feeding the model more diverse data, it becomes capable of learning and recognizing more patterns.
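A typical augmentation pipeline might look like the following sketch, assuming torchvision and Pillow are installed; the transform parameters and the image path are illustrative:

```python
from torchvision import transforms
from PIL import Image

# An illustrative augmentation pipeline: each training image is randomly flipped,
# rotated, color-jittered, and cropped, producing new variants of existing data.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

image = Image.open("example.jpg")   # hypothetical path to an RGB training image
augmented_tensor = augment(image)   # a different random variant on every call
print(augmented_tensor.shape)       # torch.Size([3, 224, 224])
```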

# 06. Other areas

It would be very hard to present in detail the Roadmaps for all existing machine learning domains in a single article. That is why, in this section, I would like to briefly list and explain some of the other most popular areas in data science worth exploring.

First of all, recommender systems (RecSys) have gained a lot of popularity in recent years. They are increasingly implemented in online shops, social networks, and streaming services. The key idea of most algorithms is to take a large initial matrix of all users and items and decompose it into a product of several matrices in a way that associates every user and every item with a high-dimensional embedding. This approach is very flexible, as it then allows different types of comparison operations on embeddings to find the most relevant items for a given user. Moreover, it is much faster to perform analysis on the smaller matrices than on the original one, which usually has huge dimensions.
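Here is a minimal NumPy sketch of the factorization idea on a made-up rating matrix, using truncated SVD for simplicity; production systems typically rely on more elaborate methods (e.g. alternating least squares and implicit feedback):

```python
import numpy as np

# Toy user-item rating matrix (rows = users, columns = items, 0 = not rated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Truncated SVD: approximate R with rank-2 user and item embeddings.
U, S, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
user_emb = U[:, :k] * S[:k]   # each row: a user embedding
item_emb = Vt[:k, :].T        # each row: an item embedding

# Predicted scores come from dot products between user and item embeddings.
predictions = user_emb @ item_emb.T
print(predictions.round(2))

# Recommend the highest-scoring unrated item for user 0.
unrated = np.where(R[0] == 0)[0]
print("recommended item for user 0:", unrated[np.argmax(predictions[0, unrated])])
```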

Matrix decomposition in recommender systems is one of the most commonly used methods

Ranking often goes hand in hand with RecSys. When a RecSys has identified a set of the most relevant items for the user, ranking algorithms are used to sort them to determine the order in which they will be shown or proposed to the user. A good example of their usage is search engines, which order query results from most to least relevant on a page.

Closely related to ranking, there is also a matching problem that aims to optimally map objects from two sets, A and B, in a way that, on average, every object pair (a, b) is mapped “well” according to a matching criterion. A use case example might include distributing a group of students to different university disciplines, where the number of spots in each class is limited.

Clustering is an unsupervised machine learning task whose objective is to split a dataset into several regions (clusters), with each dataset object belonging to one of these clusters. The splitting criteria can vary depending on the task. Clustering is useful because it allows for grouping similar objects together. Moreover, further analysis can be applied to treat objects in each cluster separately.
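A minimal sketch with scikit-learn's KMeans on synthetic data generated around three centers:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2D data drawn around three centers.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(50, 2)),
])

# K-means groups the points into 3 clusters based on distance to cluster centers.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_.round(1))   # one center per cluster
print(kmeans.labels_[:10])                # cluster assignments of the first points
```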

The goal of clustering is to group dataset objects (on the left) into several categories (on the right) based on their similarity.

Dimensionality reduction is another unsupervised problem, where the goal is to compress an input dataset. When the dimensionality of the dataset is large, it takes more time and resources for machine learning algorithms to analyze it. By identifying and removing noisy dataset features or those that do not provide much valuable information, the data analysis process becomes considerably easier.
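A minimal sketch using PCA, one of the most common dimensionality reduction methods, with scikit-learn; the synthetic dataset is constructed so that only a few features carry most of the signal:

```python
import numpy as np
from sklearn.decomposition import PCA

# 200 samples with 50 features, where only the first three carry most of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)) * 0.1
X[:, :3] = rng.normal(size=(200, 3)) * 5

# Keep enough principal components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)        # e.g. (200, 50) -> (200, 3)
print(pca.explained_variance_ratio_.round(3))
```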

Similarity search is an area that focuses on designing algorithms and data structures (indexes) to optimize searches in a large database of embeddings (vector database). More precisely, given an input embedding and a vector database, the goal is to approximately find the most similar embedding in the database relative to the input embedding.

The goal of similarity search is to approximately find the most similar embedding in a vector database relative to a query embedding.

The word “approximately” means that the search is not guaranteed to be 100% precise. Nevertheless, this is the main idea behind similarity search algorithms — sacrificing a bit of accuracy in exchange for significant gains in prediction speed or data compression.
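The sketch below shows the exact (brute-force) baseline in NumPy; approximate indexes, such as those in dedicated vector search libraries, speed this up at the cost of some accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 128))   # 10k stored embeddings of dimension 128
query = rng.normal(size=128)                # the input embedding

# Exact search: cosine similarity against every stored vector, then take the top-k.
db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = db_norm @ q_norm

k = 5
top_k = np.argsort(-scores)[:k]
print("most similar indices:", top_k)
print("their similarities:", scores[top_k].round(3))
```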

Time series analysis involves studying the behavior of a target variable over time. This problem can be solved using classical tabular algorithms. However, the presence of time introduces new factors that cannot be captured by standard algorithms. For instance:

  • the target variable can have an overall trend, where in the long term its values increase or decrease (e.g., the average yearly temperature rising due to global warming).
  • the target variable can have a seasonal component, which makes its values change depending on the period of the year (e.g., temperature is lower in winter and higher in summer).

Most time series models take both of these factors into account. In general, they are widely used in financial, stock, and demographic analysis.
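A minimal sketch that builds a synthetic monthly series with both components and decomposes it, assuming pandas and statsmodels are installed:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series: upward trend + yearly seasonality + noise.
rng = np.random.default_rng(0)
months = pd.date_range("2015-01-01", periods=96, freq="MS")
trend = np.linspace(10, 20, 96)                            # long-term increase
seasonality = 5 * np.sin(2 * np.pi * np.arange(96) / 12)   # 12-month cycle
noise = rng.normal(scale=0.5, size=96)
series = pd.Series(trend + seasonality + noise, index=months)

# Decompose into trend, seasonal, and residual components.
result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())
print(result.seasonal.head(12).round(2))
```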

Time series data is often decomposed into several components, including trend and seasonality.

Another advanced area I would recommend exploring is reinforcement learning, which fundamentally changes the algorithm design compared to classical machine learning. In simple terms, its goal is to train an agent in an environment to make optimal decisions based on a reward system (also known as the “trial and error approach”). By taking an action, the agent receives a reward, which helps it understand whether the chosen action had a positive or negative effect. After that, the agent slightly adjusts its strategy, and the entire cycle repeats.
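A minimal tabular Q-learning sketch on a toy, made-up corridor environment illustrates this trial-and-error loop:

```python
import numpy as np

# A 5-state corridor: the agent starts at state 0 and gets a reward of +1
# only when it reaches state 4. Actions: 0 = move left, 1 = move right.
N_STATES, N_ACTIONS = 5, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: explore with probability epsilon, otherwise exploit.
        action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Adjust the strategy slightly based on the received reward.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(Q.round(2))            # the learned action values
print(np.argmax(Q, axis=1))  # best action per state: moving right toward the goal
```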

Reinforcement learning framework. Image adapted by the author. Source: Reinforcement Learning: An Introduction, Second Edition | Richard S. Sutton and Andrew G. Barto

Reinforcement learning is particularly popular in complex environments where classical algorithms are not capable of solving a problem. Given the complexity of reinforcement learning algorithms and the computational resources they require, this area is not yet fully mature, but it has high potential to gain even more popularity in the future.

Main applications of reinforcement learning

Currently, the most popular applications are:

  • Games. Existing approaches can design optimal game strategies and outperform humans. The most well-known examples are chess and Go.
  • Robotics. Advanced algorithms can be incorporated into robots to help them move, carry objects or complete routine tasks at home.
  • Autopilot. Reinforcement learning methods can be developed to automatically drive cars, control helicopters or drones.

Conclusion

This article was a logical continuation of the previous part and expanded the skill set needed to become a data scientist. While most of the mentioned topics require time to master, they can add significant value to your portfolio. This is especially true for the NLP and CV domains, which are in high demand today.

After reaching a high level of expertise in data science, it is still crucial to stay motivated and consistently push yourself to learn new topics and explore emerging algorithms.

Data science is a constantly evolving field, and in the coming years, we might witness the development of new state-of-the-art approaches that we could not have imagined in the past.

Resources

All images are by the author unless noted otherwise.
