R.E.D.: Scaling Text Classification with Expert Delegation

With the new age of problem-solving augmented by Large Language Models (LLMs), only a handful of problems remain that have subpar solutions. Most classification problems (at a PoC level) can be solved by leveraging LLMs at 70–90% Precision/F1 with just good prompt engineering techniques, as well as adaptive in-context learning (ICL) examples.

What happens when you want to consistently achieve performance higher than that — when prompt engineering no longer suffices?

The classification conundrum

Text classification is one of the oldest and most well-understood examples of supervised learning. Given this premise, it should really not be hard to build robust, well-performing classifiers that handle a large number of input classes, right…?

Welp. It is.

It actually has a lot more to do with the ‘constraints’ that the algorithm is generally expected to work under:

  • low amount of training data per class
  • high classification accuracy (which plummets as you add more classes)
  • possible addition of new classes to an existing subset of classes
  • quick training/inference
  • cost-effectiveness
  • (potentially) really large number of training classes
  • (potentially) endless retraining required for some classes due to data drift, etc.

Ever tried building a classifier beyond a few dozen classes under these conditions? (I mean, even GPT could probably do a great job up to ~30 text classes with just a few samples…)

Say you take the GPT route: if you have more than a couple dozen classes or a sizeable amount of data to classify, you are going to have to reach deep into your pockets to pay for the system prompt, user prompt, and few-shot example tokens needed to classify every single sample. That is after making peace with the throughput of the API, even if you are running async queries.
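
As a rough, back-of-the-envelope illustration (every number below is a placeholder assumption, not a measurement), the per-sample token bill adds up quickly:

# Hypothetical cost estimate for LLM-based classification; substitute your
# own prompt sizes and API pricing
system_prompt_tokens = 500        # task description + label definitions
few_shot_tokens = 1500            # a handful of ICL examples
sample_tokens = 200               # the text being classified
price_per_1k_input_tokens = 0.01  # placeholder USD rate

tokens_per_call = system_prompt_tokens + few_shot_tokens + sample_tokens
cost_per_sample = tokens_per_call / 1000 * price_per_1k_input_tokens
print(f"{cost_per_sample:.4f} USD per sample")                   # 0.0220
print(f"{cost_per_sample * 100_000:,.0f} USD for 100k samples")  # 2,200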

In applied ML, problems like these are generally tricky to solve since they don’t fully satisfy the requirements of supervised learning, and aren’t cheap or fast enough to be run via an LLM. This particular pain point is what the R.E.D. algorithm addresses: semi-supervised learning, for when the training data per class is not enough to build (quasi)traditional classifiers.

The R.E.D. algorithm

R.E.D.: Recursive Expert Delegation is a novel framework that changes how we approach text classification. This is an applied ML paradigm — i.e., there is no fundamentally new architecture here, but a highlight reel of ideas that work best to build something practical and scalable.

In this post, we will work through a specific example where we have a large number of text classes (100–1000), each class has only a few samples (30–100), and there is a non-trivial number of samples to classify (10,000–100,000). We approach this as a semi-supervised learning problem via R.E.D.

Let’s dive in.

How it works

(Figure: a simple representation of what R.E.D. does)

Instead of having a single classifier classify between a large number of classes, R.E.D. intelligently:

  1. Divides and conquers — Break the label space (large number of input labels) into multiple subsets of labels. This is a greedy label subset formation approach.
  2. Learns efficiently — Trains specialized classifiers for each subset. This step focuses on building a classifier that oversamples on noise, where noise is intelligently modeled as data from other subsets.
  3. Delegates to an expert — Employs LLMs as expert oracles for specific label validation and correction only, similar to having a team of domain experts. Using an LLM as a proxy, it empirically ‘mimics’ how a human expert validates an output.
  4. Recursive retraining — Continuously retrains with fresh samples added back from the expert, until there are no more samples to add or saturation in information gain is achieved

The intuition behind it is not very hard to grasp: Active Learning employs humans as domain experts to consistently ‘correct’ or ‘validate’ the outputs from an ML model, with continuous training. This stops when the model achieves acceptable performance. We borrow the same intuition and rebrand it, with a few clever innovations that will be detailed in a research pre-print later.

Let’s take a deeper look…

Greedy subset selection with least similar elements

When the number of input labels (classes) is high, the complexity of learning a linear decision boundary between classes increases. As such, the quality of the classifier deteriorates as the number of classes increases. This is especially true when the classifier does not have enough samples to learn from — i.e. each of the training classes has only a few samples.

This is very reflective of a real-world scenario, and the primary motivation behind the creation of R.E.D.

Some ways of improving a classifier’s performance under these constraints:

  • Restrict the number of classes a classifier needs to classify between
  • Make the decision boundary between classes clearer, i.e., train the classifier on highly dissimilar classes

Greedy Subset Selection does exactly this — since the scope of the problem is Text Classification, we form embeddings of the training labels, reduce their dimensionality via UMAP, then form S subsets from them. Each of the S subsets has n training labels as its elements. We pick training labels greedily, ensuring that every label we pick for a subset is the most dissimilar label w.r.t. the other labels that already exist in the subset:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


def avg_embedding(candidate_embeddings):
    # Centroid of the embeddings currently in the subset
    return np.mean(candidate_embeddings, axis=0)


def get_least_similar_index(target_embedding, candidate_embeddings):
    # Cosine similarity of the subset centroid against every remaining
    # candidate; argmin picks the most dissimilar one
    similarities = cosine_similarity(
        np.asarray(target_embedding).reshape(1, -1),
        np.vstack(candidate_embeddings),
    )
    return int(np.argmin(similarities))


def select_subsets(embeddings, n):
    # embeddings: {class_label: average embedding of that class}
    # n: maximum number of classes per subset
    unassigned = dict(embeddings)  # classes not yet placed in a subset
    subsets = []

    while unassigned:
        # Seed a new subset with an arbitrary remaining class
        seed_cls, seed_emb = unassigned.popitem()
        current_classes = [seed_cls]
        current_embeddings = [seed_emb]

        # Greedily add the class least similar to the subset's centroid
        while len(current_classes) < n and unassigned:
            centroid = avg_embedding(current_embeddings)
            candidates = list(unassigned.items())
            idx = get_least_similar_index(
                centroid, [emb for _, emb in candidates]
            )
            cls, emb = candidates[idx]
            del unassigned[cls]
            current_classes.append(cls)
            current_embeddings.append(emb)

        subsets.append(current_classes)

    return subsets

The result of this greedy subset sampling is that all the training labels are cleanly boxed into subsets, where each subset has at most n classes. This inherently makes the job of a classifier easier, compared to the original S classes it would have to classify between otherwise!
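
For concreteness, a hypothetical end-to-end usage of select_subsets might look like the sketch below. The embedding model, labels, and UMAP settings are illustrative assumptions, and the sketch needs the sentence-transformers and umap-learn packages:

from sentence_transformers import SentenceTransformer
import umap

labels = ["billing dispute", "password reset", "shipping delay",
          "refund request", "account lockout", "damaged item"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
raw_embeddings = encoder.encode(labels)  # shape: (6, 384)

# Reduce dimensionality as described above; tiny settings for a tiny example
reducer = umap.UMAP(n_components=2, n_neighbors=2,
                    init="random", random_state=42)
reduced = reducer.fit_transform(raw_embeddings)

embeddings = {label: emb for label, emb in zip(labels, reduced)}
print(select_subsets(embeddings, n=3))
# e.g. [['damaged item', 'password reset', 'shipping delay'], [...]]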

Semi-supervised classification with noise oversampling

Cascade this after the initial label subset formation — i.e., this classifier is only classifying between a given subset of classes.

Picture this: when you have low amounts of training data, you absolutely cannot create a hold-out set that is meaningful for evaluation. Should you do it at all? How do you know if your classifier is working well?

We approached this problem slightly differently — we defined the fundamental job of a semi-supervised classifier to be pre-emptive classification of a sample. This means that regardless of what a sample gets classified as, it will be ‘verified’ and ‘corrected’ at a later stage: this classifier only needs to identify what needs to be verified.

As such, we created a design for how it would treat its data:

  • n+1 classes, where the last class is noise
  • noise: data from classes that are NOT in the current classifier’s purview. The noise class is oversampled to be 2x the average size of the data for the classifier’s labels

Oversampling on noise is a faux-safety measure, to ensure that adjacent data that belongs to another class is most likely predicted as noise instead of slipping through for verification.
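
Here is a minimal sketch of this data design, assuming each class’s samples are already embedded as rows of a NumPy array (build_subset_training_data and its arguments are illustrative names, not from our codebase):

import numpy as np

def build_subset_training_data(subset_labels, all_data, oversample_factor=2):
    # all_data: {label: np.ndarray of shape (num_samples, dim)} across ALL subsets
    X_parts, y = [], []
    class_sizes = []
    for label in subset_labels:
        samples = all_data[label]
        X_parts.append(samples)
        y.extend([label] * len(samples))
        class_sizes.append(len(samples))

    # Noise = data from classes OUTSIDE this subset, oversampled to 2x the
    # average per-class size so adjacent out-of-subset data lands in noise
    noise_pool = np.vstack(
        [all_data[l] for l in all_data if l not in subset_labels]
    )
    n_noise = int(oversample_factor * np.mean(class_sizes))
    idx = np.random.choice(len(noise_pool), size=n_noise,
                           replace=len(noise_pool) < n_noise)
    X_parts.append(noise_pool[idx])
    y.extend(["noise"] * n_noise)  # the (n+1)-th class

    return np.vstack(X_parts), np.array(y)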

How do you check whether this classifier is working well? In our experiments, we define this in terms of the number of ‘uncertain’ samples in a classifier’s predictions. Using uncertainty sampling and information gain principles, we were effectively able to gauge whether a classifier is ‘learning’ or not, which acts as a pointer towards classification performance. This classifier is consistently retrained until there is an inflection point in the number of uncertain samples predicted, or there is only a marginal delta of information being added by new samples.
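
One way to operationalize that stopping rule is sketched below (the entropy threshold and the 2% improvement cutoff are illustrative assumptions; the entropy computation mirrors the calculate_uncertainty helper shown in the next section):

import math

import numpy as np

def prediction_entropy(clf, sample):
    # Shannon entropy of the predicted class distribution for one sample
    probs = clf.predict_proba(np.asarray(sample).reshape(1, -1))[0]
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

def should_stop_retraining(clf, pool, history, entropy_threshold=1.0,
                           min_improvement=0.02):
    # Stop once the count of 'uncertain' samples plateaus, i.e. the latest
    # retraining round reduced it by less than min_improvement
    uncertain_now = sum(prediction_entropy(clf, s) > entropy_threshold
                        for s in pool)
    history.append(uncertain_now)
    if len(history) < 2 or history[-2] == 0:
        return False
    improvement = (history[-2] - history[-1]) / history[-2]
    return improvement < min_improvement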

Proxy active learning via an LLM agent

This is the heart of the approach — using an LLM as a proxy for a human validator. The human-validator approach we are talking about is Active Labelling.

Let’s get an intuitive understanding of Active Labelling:

  • Use an ML model to learn from a small input dataset, then predict on a large set of datapoints
  • For the predictions given on the datapoints, a subject-matter expert (SME) evaluates ‘validity’ of predictions
  • Recursively, new ‘corrected’ samples are added as training data to the ML model
  • The ML model consistently learns/retrains, and makes predictions until the SME is satisfied by the quality of predictions

For Active Labelling to work, there are expectations involved for an SME:

  • when we expect a human expert to ‘validate’ an output sample, the expert understands what the task is
  • a human expert will use judgement to evaluate ‘what else’ definitely belongs to a label L when deciding if a new sample should belong to L

Given these expectations and intuitions, we can ‘mimic’ these using an LLM:

  • give the LLM an ‘understanding’ of what each label means. This can be done by using a larger model to critically evaluate the relationship between {label: data mapped to label} for all labels. In our experiments, this was done using a 32B variant of DeepSeek that was self-hosted.
(Figure: giving an LLM the capability to understand ‘why, what, and how’)
  • Instead of predicting what is the correct label, leverage the LLM to identify if a prediction is ‘valid’ or ‘invalid’ only (i.e., LLM only has to answer a binary query).
  • Reinforce the idea of what other valid samples for the label look like, i.e., for every pre-emptively predicted label for a sample, dynamically source the c closest samples from its training (guaranteed valid) set when prompting for validation (a sketch of such a judge follows below).
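
To make the llm_judge used below concrete, here is a hedged sketch of how such a binary validator could be assembled. The prompt wording, the llm_call wrapper, and the helper names are illustrative assumptions rather than our exact experimental setup:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def make_llm_judge(llm_call, label_meanings, train_embeddings, train_labels,
                   train_texts, embed, c=3):
    # llm_call(prompt) -> str: any chat-completion wrapper (assumed)
    # label_meanings: {label: description distilled by a larger model}
    # train_embeddings (np.ndarray) / train_labels / train_texts: the
    # guaranteed-valid training set
    # embed(text) -> 1-D np.ndarray: the same encoder used for training data
    def judge(sample_text, predicted_label):
        # Dynamically source the c closest valid examples of the predicted label
        mask = np.asarray(train_labels) == predicted_label
        label_embs = train_embeddings[mask]
        label_texts = [t for t, m in zip(train_texts, mask) if m]
        sims = cosine_similarity(embed(sample_text).reshape(1, -1), label_embs)[0]
        nearest = [label_texts[i] for i in np.argsort(sims)[::-1][:c]]

        prompt = (
            f"Label: {predicted_label}\n"
            f"Meaning: {label_meanings[predicted_label]}\n"
            "Known valid examples:\n- " + "\n- ".join(nearest) + "\n\n"
            f"Sample: {sample_text}\n"
            "Does this sample belong to the label? Answer True or False."
        )
        return llm_call(prompt).strip().lower().startswith("true")

    return judge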

The result? A cost-effective framework that relies on a fast, cheap classifier to make pre-emptive classifications, and an LLM that verifies these using (meaning of the label + dynamically sourced training samples that are similar to the current classification):

import math

import numpy as np


def calculate_uncertainty(clf, sample):
    # Shannon entropy of the predicted class distribution: higher = less sure
    predicted_probabilities = clf.predict_proba(sample.reshape(1, -1))[0]  # Reshape sample for predict_proba
    return -sum(p * math.log(p, 2) for p in predicted_probabilities if p > 0)


def select_informative_samples(clf, data, k):
    # Uncertainty sampling: indices of the k samples the classifier is
    # least sure about, in descending order of uncertainty
    uncertainties = [calculate_uncertainty(clf, sample) for sample in data]
    return list(np.argsort(uncertainties)[::-1][:k])


def proxy_label(clf, llm_judge, k, testing_data):
    # llm_judge - any LLM with a system prompt tuned for verifying whether a
    # sample belongs to a class. Expected output is a bool: True verifies the
    # original classification, False refutes it
    predicted_classes = clf.predict(testing_data)

    # Select the k most informative samples using uncertainty sampling;
    # working with indices avoids a costly lookup back into testing_data
    informative_indices = select_informative_samples(clf, testing_data, k)

    # Samples the judge confirmed, paired with their proxy labels
    voted_data = []

    for idx in informative_indices:
        sample = testing_data[idx]
        predicted_class = predicted_classes[idx]

        # Keep the sample only if the LLM judge agrees with the prediction
        if llm_judge(sample, predicted_class):
            voted_data.append((sample, predicted_class))

    # Return the verified samples together with their proxy labels
    return voted_data

By feeding the valid samples (voted_data) to our classifier under controlled parameters, we achieve the ‘recursive’ part of our algorithm:

Recursive Expert Delegation: R.E.D.
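
Tying the pieces together, the recursion for a single subset’s classifier could look roughly like the sketch below, where retrain_classifier is a hypothetical helper standing in for the controlled-parameter retraining, and should_stop_retraining is the saturation check sketched earlier:

import numpy as np

def red_iterate(clf, llm_judge, unlabeled_pool, k, max_rounds=10):
    # One subset's expert-delegation loop: classify, verify with the LLM,
    # fold verified samples back into training, and stop at saturation
    uncertain_history = []
    for _ in range(max_rounds):
        voted_data = proxy_label(clf, llm_judge, k, unlabeled_pool)
        if not voted_data:
            break  # nothing left for the expert to confirm
        X_new = np.vstack([sample for sample, _ in voted_data])
        y_new = np.array([label for _, label in voted_data])
        clf = retrain_classifier(clf, X_new, y_new)  # hypothetical helper
        if should_stop_retraining(clf, unlabeled_pool, uncertain_history):
            break  # information gain has saturated
    return clf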

By doing this, we were able to achieve close-to-human-expert validation numbers on controlled multi-class datasets. Experimentally, R.E.D. scales up to 1,000 classes while maintaining a competent degree of accuracy almost on par with human experts (90%+ agreement).

I believe this is a significant achievement in applied ML, and has real-world uses for production-grade expectations of cost, speed, scale, and adaptability. The technical report, to be published later this year, highlights relevant code samples as well as the experimental setups used to achieve these results.

All images, unless otherwise noted, are by the author

Interested in more details? Reach out to me over Medium or email for a chat!
