R.E.D.: Scaling Text Classification with Expert Delegation

With the new age of problem-solving augmented by Large Language Models (LLMs), only a handful of problems remain that have subpar solutions. Most classification problems (at a PoC level) can be solved by leveraging LLMs at 70–90% Precision/F1 with just good prompt engineering techniques, as well as adaptive in-context-learning (ICL) examples.

What happens when you want to consistently achieve performance higher than that — when prompt engineering no longer suffices?

The classification conundrum

Text classification is one of the oldest and most well-understood examples of supervised learning. Given this premise, it should really not be hard to build robust, well-performing classifiers that handle a large number of input classes, right…?

Welp. It is.

It actually has a lot more to do with the ‘constraints’ the algorithm is generally expected to work under:

  • low amount of training data per class
  • high classification accuracy (that plummets as you add more classes)
  • possible addition of new classes to an existing subset of classes
  • quick training/inference
  • cost-effectiveness
  • (potentially) really large number of training classes
  • (potentially) endless required retraining of some classes due to data drift, etc.

Ever tried building a classifier beyond a few dozen classes under these conditions? (I mean, even GPT could probably do a great job up to ~30 text classes with just a few samples…)

If you take the GPT route and have more than a couple dozen classes or a sizeable amount of data to classify, you are going to have to reach deep into your pockets for the system prompt, user prompt, and few-shot example tokens needed to classify every single sample. That is after making peace with the throughput of the API, even if you are running async queries.

In applied ML, problems like these are generally tricky to solve since they don’t fully satisfy the requirements of supervised learning or aren’t cheap/fast enough to be run via an LLM. This particular pain point is what the R.E.D. algorithm addresses: semi-supervised learning, when the training data per class is not enough to build (quasi-)traditional classifiers.

The R.E.D. algorithm

R.E.D.: Recursive Expert Delegation is a novel framework that changes how we approach text classification. This is an applied ML paradigm — i.e., there is no fundamentally new architecture here, but rather a highlight reel of ideas that work best together to build something practical and scalable.

In this post, we will be working through a specific example where we have a large number of text classes (100–1000), each class has only a few samples (30–100), and there are a non-trivial number of samples to classify (10,000–100,000). We approach this as a semi-supervised learning problem via R.E.D.

Let’s dive in.

How it works

A simple representation of what R.E.D. does

Instead of having a single classifier classify between a large number of classes, R.E.D. intelligently:

  1. Divides and conquers — Breaks the label space (the large number of input labels) into multiple subsets of labels, via a greedy label subset formation approach.
  2. Learns efficiently — Trains specialized classifiers for each subset. This step focuses on building a classifier that oversamples on noise, where noise is intelligently modeled as data from other subsets.
  3. Delegates to an expert — Employs LLMs as expert oracles for specific label validation and correction only, similar to having a team of domain experts. Using an LLM as a proxy, it empirically ‘mimics’ how a human expert validates an output.
  4. Retrains recursively — Continuously retrains with fresh samples added back by the expert, until there are no more samples to add or saturation in information gain is achieved.

The intuition behind it is not very hard to grasp: Active Learning employs humans as domain experts to consistently ‘correct’ or ‘validate’ the outputs from an ML model, with continuous training. This stops when the model achieves acceptable performance. We build on the same intuition, with a few clever innovations that will be detailed in a research pre-print later.

Let’s take a deeper look…

Greedy subset selection with least similar elements

When the number of input labels (classes) is high, the complexity of learning a linear decision boundary between classes increases. As such, the quality of the classifier deteriorates as the number of classes increases. This is especially true when the classifier does not have enough samples to learn from — i.e., each of the training classes has only a few samples.

This is very reflective of a real-world scenario, and the primary motivation behind the creation of R.E.D.

Some ways of improving a classifier’s performance under these constraints:

  • Restrict the number of classes a classifier needs to classify between
  • Make the decision boundary between classes clearer, i.e., train the classifier on highly dissimilar classes

Greedy Subset Selection does exactly this — since the scope of the problem is Text Classification, we form embeddings of the training labels, reduce their dimensionality via UMAP, then form S subsets from them. Each of the S subsets has n training labels as elements. We pick training labels greedily, ensuring that every label we pick for a subset is the most dissimilar label w.r.t. the other labels that already exist in the subset:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


def avg_embedding(candidate_embeddings):
    return np.mean(candidate_embeddings, axis=0)


def get_least_similar_embedding(target_embedding, candidate_embeddings):
    # cosine_similarity expects 2-D inputs, so reshape the 1-D target embedding
    similarities = cosine_similarity(target_embedding.reshape(1, -1), candidate_embeddings)
    least_similar_index = int(np.argmin(similarities))  # index of the minimum similarity
    return candidate_embeddings[least_similar_index]


def get_embedding_class(embedding, embedding_map):
    # numpy arrays are not hashable, so match by value instead of building a reverse dict
    for cls, emb in embedding_map.items():
        if np.array_equal(emb, embedding):
            return cls
    return None  # missing embeddings are handled gracefully by the caller


def select_subsets(embeddings, n):
    # embeddings: dict mapping each training label to its (UMAP-reduced) embedding
    # n: maximum number of labels per subset
    visited = {cls: False for cls in embeddings}
    subsets = []

    while not all(visited.values()):
        # Seed a new subset with the first unvisited label
        seed_cls = next(cls for cls, done in visited.items() if not done)
        current_subset = [embeddings[seed_cls]]
        visited[seed_cls] = True

        # Grow the subset with the label least similar to the subset's average embedding
        while len(current_subset) < n:
            remaining_embeddings = [emb for cls_, emb in embeddings.items() if not visited[cls_]]
            if not remaining_embeddings:
                break  # no unvisited labels left

            subset_average = avg_embedding(current_subset)
            least_similar = get_least_similar_embedding(
                target_embedding=subset_average,
                candidate_embeddings=np.vstack(remaining_embeddings),
            )

            visited_class = get_embedding_class(least_similar, embeddings)
            if visited_class is not None:
                visited[visited_class] = True
            current_subset.append(least_similar)

        subsets.append(current_subset)

    return subsets

The result of this greedy subset sampling is all the training labels clearly boxed into subsets, where each subset has at most n classes. This inherently makes the job of a classifier easier, compared to the original S classes it would otherwise have to classify between!
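
As an aside, here is a minimal sketch of how the embeddings dict consumed by select_subsets could be prepared. The encoder choice, UMAP settings, and helper names are illustrative assumptions — the post only specifies that label embeddings are formed and reduced via UMAP:

import numpy as np
import umap
from sentence_transformers import SentenceTransformer


def build_label_embeddings(labelled_samples, n_components=16):
    # labelled_samples: dict mapping each class label to a list of its training texts
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder; any sentence embedder works

    # Embed every training sample, remembering which class each row belongs to
    classes, texts = [], []
    for cls, samples in labelled_samples.items():
        classes.extend([cls] * len(samples))
        texts.extend(samples)
    sample_embeddings = encoder.encode(texts)

    # Reduce dimensionality via UMAP, as described above
    reduced = umap.UMAP(n_components=n_components).fit_transform(sample_embeddings)

    # Average per class to get one representative embedding per training label
    embeddings = {}
    for cls in labelled_samples:
        mask = np.array([c == cls for c in classes])
        embeddings[cls] = reduced[mask].mean(axis=0)
    return embeddings


# subsets = select_subsets(build_label_embeddings(labelled_samples), n=10)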

Semi-supervised classification with noise oversampling

Cascade this after the initial label subset formation — i.e., this classifier is only classifying between a given subset of classes.

Picture this: when you have low amounts of training data, you absolutely cannot create a hold-out set that is meaningful for evaluation. Should you do it at all? How do you know if your classifier is working well?

We approached this problem slightly differently — we defined the fundamental job of a semi-supervised classifier to be pre-emptive classification of a sample. This means that regardless of what a sample gets classified as, it will be ‘verified’ and ‘corrected’ at a later stage: this classifier only needs to identify what needs to be verified.

As such, we created a design for how it would treat its data:

  • n+1 classes, where the last class is noise
  • noise: data from classes that are NOT in the current classifier’s purview. The noise class is oversampled to be 2x the average size of the data for the classifier’s labels.

Oversampling on noise is a faux-safety measure, to ensure that adjacent data that belongs to another class is most likely predicted as noise instead of slipping through for verification.
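
To make the design concrete, here is a minimal sketch of how the training data for one subset's classifier could be assembled under this scheme — the 'noise' label name, the dict format, and the sampling strategy are assumptions for illustration, not the exact implementation:

import random


def build_subset_training_data(subset_labels, all_data, noise_factor=2):
    # all_data: dict mapping every class label to its list of training samples
    # Returns (samples, targets) for an (n + 1)-class classifier, where the extra
    # class is 'noise', oversampled to noise_factor x the average in-subset class size
    samples, targets = [], []
    for cls in subset_labels:
        samples.extend(all_data[cls])
        targets.extend([cls] * len(all_data[cls]))

    avg_class_size = len(samples) // len(subset_labels)

    # Noise = data from every class outside this subset's purview
    noise_pool = [s for cls, rows in all_data.items() if cls not in subset_labels for s in rows]
    noise_size = min(len(noise_pool), noise_factor * avg_class_size)
    noise_samples = random.sample(noise_pool, noise_size)

    samples.extend(noise_samples)
    targets.extend(["noise"] * len(noise_samples))
    return samples, targets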

How do you check if this classifier is working well? In our experiments, we define this via the number of ‘uncertain’ samples in a classifier’s predictions. Using uncertainty sampling and information gain principles, we were effectively able to gauge whether a classifier is ‘learning’ or not, which acts as a pointer towards classification performance. This classifier is consistently retrained until there is an inflection point in the number of uncertain samples predicted, or there is only a marginal delta of information being added by new samples.
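
As a rough illustration, the stopping check could look something like the sketch below — the patience and threshold values are assumptions for illustration, not tuned numbers from our experiments:

def should_stop_retraining(uncertain_counts, patience=2, min_delta=0.05):
    # uncertain_counts: number of 'uncertain' predictions recorded after each retraining round
    # Stop when the relative reduction in uncertain samples stays below min_delta
    # for `patience` consecutive rounds, i.e., only a marginal delta of information is being added
    if len(uncertain_counts) <= patience:
        return False
    recent = uncertain_counts[-(patience + 1):]
    improvements = [
        (prev - curr) / prev if prev else 0.0
        for prev, curr in zip(recent, recent[1:])
    ]
    return all(imp < min_delta for imp in improvements)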

Proxy active learning via an LLM agent

This is the heart of the approach — using an LLM as a proxy for a human validator. The human validator approach we are talking about is Active Labelling.

Let’s get an intuitive understanding of Active Labelling:

  • Use an ML model to learn from a sample input dataset, then predict on a large set of datapoints
  • For the predictions given on the datapoints, a subject-matter expert (SME) evaluates ‘validity’ of predictions
  • Recursively, new ‘corrected’ samples are added as training data to the ML model
  • The ML model consistently learns/retrains, and makes predictions until the SME is satisfied by the quality of predictions

For Active Labelling to work, there are expectations involved for an SME:

  • when we expect a human expert to ‘validate’ an output sample, the expert understands what the task is
  • a human expert will use judgement to evaluate ‘what else’ definitely belongs to a label L when deciding if a new sample should belong to L

Given these expectations and intuitions, we can ‘mimic’ these using an LLM:

  • give the LLM an ‘understanding’ of what each label means. This can be done by using a larger model to critically evaluate the relationship between {label: data mapped to label} for all labels. In our experiments, this was done using a 32B variant of DeepSeek that was self-hosted.
Giving an LLM the capability to understand ‘why, what, and how’
  • Instead of predicting the correct label, leverage the LLM only to identify whether a prediction is ‘valid’ or ‘invalid’ (i.e., the LLM only has to answer a binary query).
  • Reinforce the idea of what other valid samples for the label look like, i.e., for every pre-emptively predicted label for a sample, dynamically source the c closest samples from its training (guaranteed valid) set when prompting for validation — see the sketch below.
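
Here is a minimal sketch of what such a binary LLM judge could look like. The prompt wording, the llm_client callable, label_meanings, and embed_fn are assumptions for illustration; it also assumes the raw text of a sample is available alongside its feature vector:

from sklearn.metrics.pairwise import cosine_similarity


def make_llm_judge(llm_client, label_meanings, train_texts, train_labels, train_embeddings, embed_fn, c=3):
    # llm_client: callable that takes a prompt string and returns the model's text reply (assumed)
    # label_meanings: {label: description generated by a larger model} (assumed)
    # embed_fn: embeds a single text into the same space as train_embeddings (assumed)
    def llm_judge(sample_text, predicted_class):
        # Dynamically source the c closest guaranteed-valid training samples for this label
        mask = [lbl == predicted_class for lbl in train_labels]
        label_texts = [t for t, m in zip(train_texts, mask) if m]
        label_embs = train_embeddings[mask]
        sims = cosine_similarity(embed_fn(sample_text).reshape(1, -1), label_embs)[0]
        closest = [label_texts[i] for i in sims.argsort()[::-1][:c]]

        prompt = (
            f"Label: {predicted_class}\n"
            f"Meaning of the label: {label_meanings[predicted_class]}\n"
            "Known valid examples:\n- " + "\n- ".join(closest) + "\n\n"
            f"Sample: {sample_text}\n"
            "Does this sample belong to the label? Answer True or False only."
        )
        return llm_client(prompt).strip().lower().startswith("true")

    return llm_judge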

The result? A cost-effective framework that relies on a fast, cheap classifier to make pre-emptive classifications, and an LLM that verifies these using the meaning of the label plus dynamically sourced training samples that are similar to the current classification:

import math

def calculate_uncertainty(clf, sample):
    predicted_probabilities = clf.predict_proba(sample.reshape(1, -1))[0]  # Reshape sample for predict_proba
    uncertainty = -sum(p * math.log(p, 2) for p in predicted_probabilities if p > 0)  # Shannon entropy; skip zero probabilities to avoid log(0)
    return uncertainty


def select_informative_samples(clf, data, k):
    informative_samples = []
    uncertainties = [calculate_uncertainty(clf, sample) for sample in data]

    # Sort data by descending order of uncertainty
    sorted_data = sorted(zip(data, uncertainties), key=lambda x: x[1], reverse=True)

    # Get top k samples with highest uncertainty
    for sample, uncertainty in sorted_data[:k]:
        informative_samples.append(sample)

    return informative_samples


def proxy_label(clf, llm_judge, k, testing_data):
    # llm_judge - any LLM with a system prompt tuned for verifying whether a sample belongs to a class.
    # Expected output is a bool: True verifies the original classification, False refutes it.
    predicted_classes = clf.predict(testing_data)

    # Select k most informative samples using uncertainty sampling
    informative_samples = select_informative_samples(clf, testing_data, k)

    # List to store correct samples
    voted_data = []

    # Evaluate informative samples with the LLM judge
    for sample in informative_samples:
        sample_index = testing_data.tolist().index(sample.tolist())  # match by value; testing_data is a numpy array, so .index() is used on its list form
        predicted_class = predicted_classes[sample_index]

        # Check if LLM judge agrees with the prediction
        if llm_judge(sample, predicted_class):
            # If correct, add the sample to voted data
            voted_data.append(sample)

    # Return the list of correct samples with proxy labels
    return voted_data

By feeding the valid samples (voted_data) to our classifier under controlled parameters, we achieve the ‘recursive’ part of our algorithm:

Recursive Expert Delegation: R.E.D.
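
To make the recursion concrete, here is a minimal sketch of how the pieces above could be wired together. train_subset_classifier and retrain_with are assumed helpers (training an (n + 1)-class classifier with noise oversampling, and retraining it on the expert-validated samples), not functions from this post:

def red_pipeline(embeddings, labelled_data, unlabelled_data, llm_judge, n, k, max_rounds=10):
    # 1. Divide: greedy label subset formation (each subset holds at most n dissimilar labels)
    label_subsets = select_subsets(embeddings, n)

    # 2. Learn: one specialized (n + 1)-class classifier per subset, with oversampled noise
    classifiers = [train_subset_classifier(subset, labelled_data) for subset in label_subsets]

    # 3 & 4. Delegate and recurse: validate uncertain predictions with the LLM, then retrain
    for _ in range(max_rounds):
        added = 0
        for i, clf in enumerate(classifiers):
            voted_data = proxy_label(clf, llm_judge, k, unlabelled_data)
            if voted_data:
                classifiers[i] = retrain_with(clf, voted_data)  # assumed helper
                added += len(voted_data)
        if added == 0:
            break  # saturation: the expert is no longer validating new samples back in
    return classifiers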

By doing this, we were able to achieve close-to-human-expert validation numbers on controlled multi-class datasets. Experimentally, R.E.D. scales up to 1,000 classes while maintaining a competent degree of accuracy almost on par with human experts (90%+ agreement).

I believe this is a significant achievement in applied ML, and it has real-world uses for production-grade expectations of cost, speed, scale, and adaptability. The technical report, to be published later this year, highlights relevant code samples as well as the experimental setups used to achieve these results.

All images, unless otherwise noted, are by the author

Interested in more details? Reach out to me over Medium or email for a chat!
