R.E.D.: Scaling Text Classification with Expert Delegation

With the new age of problem-solving augmented by Large Language Models (LLMs), only a handful of problems remain that have subpar solutions. Most classification problems (at a PoC level) can be solved by leveraging LLMs at 70–90% Precision/F1 with just good prompt engineering techniques, as well as adaptive in-context-learning (ICL) examples.

What happens when you want to consistently achieve performance higher than that — when prompt engineering no longer suffices?

The classification conundrum

Text classification is one of the oldest and most well-understood examples of supervised learning. Given this premise, it should really not be hard to build robust, well-performing classifiers that handle a large number of input classes, right…?

Welp. It is.

It actually has a lot more to do with the ‘constraints’ that the algorithm is generally expected to work under:

  • low amount of training data per class
  • high classification accuracy (that plummets as you add more classes)
  • possible addition of new classes to an existing subset of classes
  • quick training/inference
  • cost-effectiveness
  • (potentially) really large number of training classes
  • (potentially) endless required retraining of some classes due to data drift, etc.

Ever tried building a classifier beyond a few dozen classes under these conditions? (I mean, even GPT could probably do a great job up to ~30 text classes with just a few samples…)

Say you take the GPT route: if you have more than a couple dozen classes or a sizeable amount of data to classify, you are going to have to reach deep into your pockets to pay for the system prompt, user prompt, and few-shot example tokens needed to classify a single sample. And that is after making peace with the throughput of the API, even if you are running async queries.
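
To make that concrete with a purely illustrative back-of-the-envelope estimate (the actual numbers depend entirely on your prompt, model, and pricing): with ~100 classes, a system prompt describing the label space plus a handful of few-shot examples can easily push a single classification request to 2,000–3,000 input tokens. Over 50,000 samples, that is on the order of 100–150 million input tokens for one pass over the data, before any retries or re-classification.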

In applied ML, problems like these are generally tricky to solve since they don’t fully satisfy the requirements of supervised learning or aren’t cheap/fast enough to be run via an LLM. This particular pain point is what the R.E.D. algorithm addresses: semi-supervised learning, when the training data per class is not enough to build (quasi)traditional classifiers.

The R.E.D. algorithm

R.E.D.: Recursive Expert Delegation is a novel framework that changes how we approach text classification. It is an applied ML paradigm — i.e., there is no fundamentally new architecture here, but rather a highlight reel of ideas that work best together to build something practical and scalable.

In this post, we will work through a specific example where we have a large number of text classes (100–1000), each class has only a few samples (30–100), and there is a non-trivial number of samples to classify (10,000–100,000). We approach this as a semi-supervised learning problem via R.E.D.

Let’s dive in.

How it works

A simple representation of what R.E.D. does

Instead of having a single classifier classify between a large number of classes, R.E.D. intelligently:

  1. Divides and conquers — Break the label space (large number of input labels) into multiple subsets of labels. This is a greedy label subset formation approach.
  2. Learns efficiently — Trains specialized classifiers for each subset. This step focuses on building a classifier that oversamples on noise, where noise is intelligently modeled as data from other subsets.
  3. Delegates to an expert — Employs LLMs as expert oracles for specific label validation and correction only, similar to having a team of domain experts. Using an LLM as a proxy, it empirically ‘mimics’ how a human expert validates an output.
  4. Recursive retraining — Continuously retrains with fresh samples added back from the expert, until there are no more samples to add or saturation in information gain is reached.

The intuition behind it is not very hard to grasp: Active Learning employs humans as domain experts to consistently ‘correct’ or ‘validate’ the outputs from an ML model, with continuous training. This stops when the model achieves acceptable performance. We borrow and rebrand the same idea, with a few clever innovations that will be detailed in a research pre-print later.

Let’s take a deeper look…

Greedy subset selection with least similar elements

When the number of input labels (classes) is high, the complexity of learning a linear decision boundary between classes increases. As such, the quality of the classifier deteriorates as the number of classes increases. This is especially true when the classifier does not have enough samples to learn from — i.e. each of the training classes has only a few samples.

This is very reflective of a real-world scenario, and the primary motivation behind the creation of R.E.D.

Some ways of improving a classifier’s performance under these constraints:

  • Restrict the number of classes a classifier needs to classify between
  • Make the decision boundary between classes clearer, i.e., train the classifier on highly dissimilar classes

Greedy Subset Selection does exactly this — since the scope of the problem is Text Classification, we form embeddings of the training labels, reduce their dimensionality via UMAP, then form S subsets from them. Each of the S subsets has n training labels as elements. We pick training labels greedily, ensuring that every label we add to a subset is the most dissimilar label w.r.t. the labels already in that subset:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


def avg_embedding(candidate_embeddings):
    return np.mean(candidate_embeddings, axis=0)

def get_least_similar_embedding(target_embedding, candidate_embeddings):
    # cosine_similarity expects 2D inputs, so reshape the single target embedding
    similarities = cosine_similarity(np.asarray(target_embedding).reshape(1, -1), candidate_embeddings)
    least_similar_index = np.argmin(similarities)  # index of the minimum similarity
    least_similar_element = candidate_embeddings[least_similar_index]
    return least_similar_element


def get_embedding_class(embedding, embedding_map):
    # numpy arrays are unhashable, so match by value instead of building a reverse dict
    for cls, emb in embedding_map.items():
        if np.array_equal(emb, embedding):
            return cls
    return None  # gracefully handle an embedding that is not in the map


def select_subsets(embeddings, n):
    visited = {cls: False for cls in embeddings.keys()}
    subsets = []
    current_subset = []

    while any(not visited[cls] for cls in visited):
        for cls, average_embedding in embeddings.items():
            if visited[cls]:
                continue  # skip classes already assigned to a subset
            if not current_subset:
                # seed a new subset with the first unvisited class
                current_subset.append(average_embedding)
                visited[cls] = True
            elif len(current_subset) >= n:
                subsets.append(current_subset.copy())
                current_subset = []
            else:
                subset_average = avg_embedding(current_subset)
                remaining_embeddings = [emb for cls_, emb in embeddings.items() if not visited[cls_]]
                if not remaining_embeddings:
                    break  # no unvisited classes left to draw from

                # greedily add the label least similar to the current subset's centroid
                least_similar = get_least_similar_embedding(target_embedding=subset_average, candidate_embeddings=remaining_embeddings)

                visited_class = get_embedding_class(least_similar, embeddings)
                if visited_class is not None:
                    visited[visited_class] = True

                current_subset.append(least_similar)
    
    if current_subset:  # Add any remaining elements in current_subset
        subsets.append(current_subset)
        

    return subsets
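
For context, here is one way the embeddings dict consumed by select_subsets above could be built — a sketch that assumes each label’s embedding is the mean of its (few) sample embeddings, and that uses sentence-transformers and umap-learn as stand-in tooling rather than the exact setup from our experiments:

from sentence_transformers import SentenceTransformer
import umap
import numpy as np

def build_label_embeddings(samples_by_label, n_components=16):
    # samples_by_label: {label: [text samples]} -- assumed input format
    model = SentenceTransformer("all-MiniLM-L6-v2")
    labels = list(samples_by_label.keys())
    # Represent each label as the mean embedding of its training samples
    label_vectors = np.array([np.mean(model.encode(samples), axis=0) for samples in samples_by_label.values()])
    # Reduce dimensionality before greedy subset formation
    reducer = umap.UMAP(n_components=n_components, random_state=42)
    reduced = reducer.fit_transform(label_vectors)
    return {label: vec for label, vec in zip(labels, reduced)}

# subsets = select_subsets(build_label_embeddings(samples_by_label), n=10)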

The result of this greedy subset sampling is all the training labels clearly boxed into subsets, where each subset has at most n classes. This inherently makes the job of a classifier easier, compared to the original S classes it would have to classify between otherwise!

Semi-supervised classification with noise oversampling

Cascade this after the initial label subset formation — i.e., this classifier is only classifying between a given subset of classes.

Picture this: when you have low amounts of training data, you absolutely cannot create a hold-out set that is meaningful for evaluation. Should you do it at all? How do you know if your classifier is working well?

We approached this problem slightly differently — we defined the fundamental job of a semi-supervised classifier to be pre-emptive classification of a sample. This means that regardless of what a sample gets classified as, it will be ‘verified’ and ‘corrected’ at a later stage: this classifier only needs to identify what needs to be verified.

As such, we created a design for how it would treat its data:

  • n+1 classes, where the last class is noise
  • noise: data from classes that are NOT in the current classifier’s purview. The noise class is oversampled to be 2x the average size of the data for the classifier’s labels

Oversampling on noise is a faux-safety measure, to ensure that adjacent data that belongs to another class is most likely predicted as noise instead of slipping through for verification.
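
As a minimal sketch of this data design — assuming the training data arrives as a {label: [text samples]} dict keyed by label name, with the helper name and the 2x noise_factor default purely illustrative:

import numpy as np

def build_subset_training_data(subset_labels, all_data, noise_factor=2, seed=42):
    # all_data: {label: [text samples]} across *all* labels -- assumed input format
    X, y = [], []
    for label in subset_labels:
        X.extend(all_data[label])
        y.extend([label] * len(all_data[label]))

    # Average class size within this subset
    avg_size = int(np.mean([len(all_data[label]) for label in subset_labels]))

    # 'Noise' pool: samples from every label outside this classifier's purview
    noise_pool = [s for label, samples in all_data.items() if label not in subset_labels for s in samples]

    # Oversample noise to noise_factor x the average in-subset class size
    rng = np.random.default_rng(seed)
    n_noise = min(noise_factor * avg_size, len(noise_pool))
    noise_samples = rng.choice(noise_pool, size=n_noise, replace=False).tolist()

    X.extend(noise_samples)
    y.extend(["noise"] * n_noise)
    return X, y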

How do you check if this classifier is working well? In our experiments, we define this via the number of ‘uncertain’ samples in a classifier’s predictions. Using uncertainty sampling and information gain principles, we were able to effectively gauge whether a classifier is ‘learning’ or not, which acts as a pointer towards classification performance. The classifier is retrained continuously until there is an inflection point in the number of uncertain samples it predicts, or until new samples add only a marginal delta of information.
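
A minimal sketch of that stopping check, assuming we simply track the count of ‘uncertain’ predictions after each retraining round (the helper name and threshold are illustrative, not the values used in our experiments):

def should_stop_retraining(uncertain_counts, min_relative_delta=0.05):
    # uncertain_counts: number of 'uncertain' samples predicted at each retraining round
    if len(uncertain_counts) < 2:
        return False  # need at least two rounds to compare
    prev, curr = uncertain_counts[-2], uncertain_counts[-1]
    if prev == 0:
        return True
    # Stop once the relative change in uncertain samples flattens out
    return abs(prev - curr) / prev < min_relative_delta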

Proxy active learning via an LLM agent

This is the heart of the approach — using an LLM as a proxy for a human validator. The human validator approach we are talking about is Active Labelling.

Let’s get an intuitive understanding of Active Labelling:

  • Train an ML model on a small sample of the input dataset, and use it to predict on a large set of datapoints
  • For the predictions given on the datapoints, a subject-matter expert (SME) evaluates ‘validity’ of predictions
  • Recursively, new ‘corrected’ samples are added as training data to the ML model
  • The ML model consistently learns/retrains, and makes predictions until the SME is satisfied by the quality of predictions

For Active Labelling to work, there are expectations involved for an SME:

  • when we expect a human expert to ‘validate’ an output sample, the expert understands what the task is
  • a human expert will use judgement to evaluate ‘what else’ definitely belongs to a label L when deciding if a new sample should belong to L

Given these expectations and intuitions, we can ‘mimic’ these using an LLM:

  • give the LLM an ‘understanding’ of what each label means. This can be done by using a larger model to critically evaluate the relationship between {label: data mapped to label} for all labels. In our experiments, this was done using a 32B variant of DeepSeek that was self-hosted.
Giving an LLM the capability to understand ‘why, what, and how’
  • Instead of predicting what is the correct label, leverage the LLM to identify if a prediction is ‘valid’ or ‘invalid’ only (i.e., LLM only has to answer a binary query).
  • Reinforce the idea of what other valid samples for the label look like, i.e., for every pre-emptively predicted label for a sample, dynamically source the c closest samples from its training (guaranteed valid) set when prompting for validation. A minimal sketch of such a judge follows below.
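
Here is a minimal sketch of what such a judge could look like. The label_meanings dict (pre-generated by the larger model described above), the nearest_valid_samples helper, and the call_llm client are hypothetical placeholders for illustration, not the exact prompt or setup used in our experiments:

def make_llm_judge(label_meanings, train_samples_by_label, nearest_valid_samples, call_llm, c=3):
    # Returns a callable matching the llm_judge(sample, predicted_class) -> bool
    # signature used in the next code block.
    def llm_judge(sample, predicted_label):
        # Dynamically source the c most similar *verified* training samples for this label
        exemplars = nearest_valid_samples(sample, train_samples_by_label[predicted_label], c)
        prompt = (
            f"Label: {predicted_label}\n"
            f"What this label means: {label_meanings[predicted_label]}\n"
            "Known valid examples:\n- " + "\n- ".join(exemplars) + "\n\n"
            f"Candidate sample:\n{sample}\n\n"
            "Does the candidate sample belong to this label? Answer True or False."
        )
        # Binary query only: the LLM validates or refutes, it never re-labels
        return call_llm(prompt).strip().lower().startswith("true")
    return llm_judge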

The result? A cost-effective framework that relies on a fast, cheap classifier to make pre-emptive classifications, and an LLM that verifies these using the meaning of the label plus dynamically sourced training samples that are similar to the current classification:

import math

def calculate_uncertainty(clf, sample):
    predicted_probabilities = clf.predict_proba(sample.reshape(1, -1))[0]  # reshape the single sample to 2D for predict_proba
    # Shannon entropy of the predicted distribution; skip zero probabilities to avoid log(0)
    uncertainty = -sum(p * math.log(p, 2) for p in predicted_probabilities if p > 0)
    return uncertainty


def select_informative_samples(clf, data, k):
    informative_samples = []
    uncertainties = [calculate_uncertainty(clf, sample) for sample in data]

    # Sort data by descending order of uncertainty
    sorted_data = sorted(zip(data, uncertainties), key=lambda x: x[1], reverse=True)

    # Get top k samples with highest uncertainty
    for sample, uncertainty in sorted_data[:k]:
        informative_samples.append(sample)

    return informative_samples


def proxy_label(clf, llm_judge, k, testing_data):
    # llm_judge: any LLM with a system prompt tuned for verifying whether a sample
    # belongs to a class. Expected output is a bool: True verifies the original
    # classification, False refutes it.
    predicted_classes = clf.predict(testing_data)

    # Select k most informative samples using uncertainty sampling
    informative_samples = select_informative_samples(clf, testing_data, k)

    # List to store correct samples
    voted_data = []

    # Evaluate informative samples with the LLM judge
    for sample in informative_samples:
        sample_index = testing_data.tolist().index(sample.tolist())  # .tolist() avoids numpy array equality ambiguity when indexing
        predicted_class = predicted_classes[sample_index]

        # Check if LLM judge agrees with the prediction
        if llm_judge(sample, predicted_class):
            # If correct, add the sample to voted data
            voted_data.append(sample)

    # Return the list of correct samples with proxy labels
    return voted_data

By feeding the valid samples (voted_data) to our classifier under controlled parameters, we achieve the ‘recursive’ part of our algorithm:

Recursive Expert Delegation: R.E.D.
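
Putting the pieces together for a single subset, the outer loop might look roughly like the sketch below. It assumes samples are already embedded as numeric vectors (so they can be fed straight to the classifier and to proxy_label), and train_classifier is a hypothetical helper that fits any probabilistic n+1-class classifier; the actual controlled parameters will be detailed in the technical report:

import numpy as np

def red_loop(train_X, train_y, unlabeled_X, llm_judge, k=100, uncertainty_threshold=1.0, max_rounds=10):
    # train_X/train_y: embedded samples and labels for this subset (incl. the 'noise' class)
    # unlabeled_X: embedded samples awaiting classification
    uncertain_counts = []
    for _ in range(max_rounds):
        clf = train_classifier(train_X, train_y)  # hypothetical: fit any probabilistic classifier

        # Pre-emptive classification + LLM validation of the most informative samples
        voted_samples = proxy_label(clf, llm_judge, k, unlabeled_X)

        # Feed verified samples back into the training pool under their predicted labels
        for sample in voted_samples:
            predicted = clf.predict(sample.reshape(1, -1))[0]
            train_X = np.vstack([train_X, sample.reshape(1, -1)])
            train_y = np.append(train_y, predicted)

        # Track how many samples the classifier is still uncertain about
        n_uncertain = sum(1 for s in unlabeled_X if calculate_uncertainty(clf, s) > uncertainty_threshold)
        uncertain_counts.append(n_uncertain)
        if should_stop_retraining(uncertain_counts):
            break  # saturation: little new information is being added by fresh samples

    return clf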

By doing this, we were able to achieve close-to-human-expert validation numbers on controlled multi-class datasets. Experimentally, R.E.D. scales up to 1,000 classes while maintaining a competent degree of accuracy almost on par with human experts (90%+ agreement).

I believe this is a significant achievement in applied ML, and has real-world uses for production-grade expectations of cost, speed, scale, and adaptability. The technical report, to be published later this year, highlights relevant code samples as well as the experimental setups used to achieve these results.

All images, unless otherwise noted, are by the author

Interested in more details? Reach out to me over Medium or email for a chat!
