
R.E.D.: Scaling Text Classification with Expert Delegation


With the new age of problem-solving augmented by Large Language Models (LLMs), only a handful of problems remain that have subpar solutions. Most classification problems (at a PoC level) can be solved by leveraging LLMs at 70–90% Precision/F1 with just good prompt engineering techniques, as well as adaptive in-context learning (ICL) examples.

What happens when you want to consistently achieve performance higher than that — when prompt engineering no longer suffices?

The classification conundrum

Text classification is one of the oldest and most well-understood examples of supervised learning. Given this premise, it should really not be hard to build robust, well-performing classifiers that handle a large number of input classes, right…?

Welp. It is.

It actually has a lot more to do with the ‘constraints’ that the algorithm is generally expected to work under:

  • low amount of training data per class
  • high classification accuracy (that plummets as you add more classes)
  • possible addition of new classes to an existing subset of classes
  • quick training/inference
  • cost-effectiveness
  • (potentially) really large number of training classes
  • (potentially) endless required retraining of some classes due to data drift, etc.

Ever tried building a classifier beyond a few dozen classes under these conditions? (I mean, even GPT could probably do a great job up to ~30 text classes with just a few samples…)

Say you take the GPT route: if you have more than a couple dozen classes, or a sizeable amount of data to classify, you will have to reach deep into your pockets for the system prompt, user prompt, and few-shot example tokens needed to classify each sample. That is after making peace with the throughput of the API, even if you are running async queries.
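For a rough sense of scale, here is a back-of-envelope sketch; the token counts below are illustrative assumptions, not measurements:

prompt_tokens_per_sample = 1_500   # assumed: system prompt + user prompt + few-shot examples
samples_to_classify = 100_000      # upper end of the range discussed in this post
total_input_tokens = prompt_tokens_per_sample * samples_to_classify
print(f"{total_input_tokens:,} input tokens for a single classification pass")  # 150,000,000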

In applied ML, problems like these are generally tricky to solve since they don’t fully satisfy the requirements of supervised learning or aren’t cheap/fast enough to be run via an LLM. This particular pain point is what the R.E.D. algorithm addresses: semi-supervised learning for when the training data per class is not enough to build (quasi-)traditional classifiers.

The R.E.D. algorithm

R.E.D.: Recursive Expert Delegation is a novel framework that changes how we approach text classification. This is an applied ML paradigm — i.e., there is no fundamentally different architecture from what already exists, but it’s a highlight reel of ideas that work best to build something that is practical and scalable.

In this post, we will be working through a specific example where we have a large number of text classes (100–1000), each class has only a few samples (30–100), and there is a non-trivial number of samples to classify (10,000–100,000). We approach this as a semi-supervised learning problem via R.E.D.

Let’s dive in.

How it works

simple representation of what R.E.D. does

Instead of having a single classifier classify between a large number of classes, R.E.D. intelligently:

  1. Divides and conquers — Break the label space (large number of input labels) into multiple subsets of labels. This is a greedy label subset formation approach.
  2. Learns efficiently — Trains specialized classifiers for each subset. This step focuses on building a classifier that oversamples on noise, where noise is intelligently modeled as data from other subsets.
  3. Delegates to an expert — Employs LLMs as expert oracles for specific label validation and correction only, similar to having a team of domain experts. Using an LLM as a proxy, it empirically ‘mimics’ how a human expert validates an output.
  4. Recursive retraining — Continuously retrains with fresh samples added back by the expert, until there are no more samples to add or saturation in information gain is achieved.

The intuition behind it is not very hard to grasp: Active Learning employs humans as domain experts to consistently ‘correct’ or ‘validate’ the outputs from an ML model, with continuous training. This stops when the model achieves acceptable performance. We adopt the same intuition and rebrand it, with a few clever innovations that will be detailed in a research pre-print later.
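A minimal sketch of that loop is below; every callable is passed in as a parameter, and the names are placeholders rather than an API defined by R.E.D.:

def red_loop(classifiers, unlabeled_pools, validate_with_llm, retrain, max_rounds=10):
    """Hedged sketch: classify pre-emptively, delegate validation to the LLM expert, retrain."""
    for _ in range(max_rounds):
        newly_added = 0
        for subset_id, clf in classifiers.items():
            predictions = clf.predict(unlabeled_pools[subset_id])     # pre-emptive classification
            validated = validate_with_llm(clf, unlabeled_pools[subset_id], predictions)  # expert check
            classifiers[subset_id] = retrain(clf, validated)          # feed verified samples back
            newly_added += len(validated)
        if newly_added == 0:                                          # saturation: stop retraining
            break
    return classifiers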

Let’s take a deeper look…

Greedy subset selection with least similar elements

When the number of input labels (classes) is high, the complexity of learning a linear decision boundary between classes increases. As such, the quality of the classifier deteriorates as the number of classes increases. This is especially true when the classifier does not have enough samples to learn from — i.e. each of the training classes has only a few samples.

This is very reflective of a real-world scenario, and the primary motivation behind the creation of R.E.D.

Some ways of improving a classifier’s performance under these constraints:

  • Restrict the number of classes a classifier needs to classify between
  • Make the decision boundary between classes clearer, i.e., train the classifier on highly dissimilar classes

Greedy Subset Selection does exactly this — since the scope of the problem is Text Classification, we form embeddings of the training labels, reduce their dimensionality via UMAP, then form S subsets from them. Each of the S subsets has n training labels as its elements. We pick training labels greedily, ensuring that every label we pick for the subset is the most dissimilar label w.r.t. the other labels that exist in the subset:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity


def avg_embedding(candidate_embeddings):
    return np.mean(candidate_embeddings, axis=0)

def get_least_similar_embedding(target_embedding, candidate_embeddings):
    # cosine_similarity expects 2D arrays, so make both inputs 2D before comparing
    similarities = cosine_similarity(np.asarray(target_embedding).reshape(1, -1), np.asarray(candidate_embeddings))
    least_similar_index = np.argmin(similarities)  # Use argmin to find the index of the minimum
    least_similar_element = candidate_embeddings[least_similar_index]
    return least_similar_element


def get_embedding_class(embedding, embedding_map):
    # numpy arrays are not hashable, so compare element-wise instead of building a reverse map
    for cls, candidate in embedding_map.items():
        if np.array_equal(candidate, embedding):
            return cls
    return None  # gracefully handle an embedding that is not in the map


def select_subsets(embeddings, n):
    visited = {cls: False for cls in embeddings.keys()}
    subsets = []
    current_subset = []

    while any(not visited[cls] for cls in visited):
        if not current_subset:
            # Seed a new subset with the first class that has not been assigned yet
            seed_cls = next(cls for cls, done in visited.items() if not done)
            current_subset.append(embeddings[seed_cls])
            visited[seed_cls] = True
        elif len(current_subset) >= n:
            # Subset is full: store it and start a fresh one
            subsets.append(current_subset.copy())
            current_subset = []
        else:
            subset_average = avg_embedding(current_subset)
            remaining_embeddings = [emb for cls_, emb in embeddings.items() if not visited[cls_]]
            if not remaining_embeddings:
                break  # nothing left to assign

            # Greedily add the remaining label least similar to the current subset's centroid
            least_similar = get_least_similar_embedding(target_embedding=subset_average, candidate_embeddings=remaining_embeddings)

            visited_class = get_embedding_class(least_similar, embeddings)
            if visited_class is not None:
                visited[visited_class] = True

            current_subset.append(least_similar)

    if current_subset:  # Add any remaining elements in current_subset
        subsets.append(current_subset)

    return subsets

The result of this greedy subset sampling is all the training labels clearly boxed into subsets, where each subset has at most n classes. This inherently makes the job of a classifier easier, compared to the original S classes it would have to classify between otherwise!
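For context, here is a minimal usage sketch of select_subsets. It assumes sentence-transformers for embedding each class’s handful of training samples and umap-learn for the dimensionality reduction; the library choices, model name, and toy data are mine, not prescribed by the post:

import numpy as np
import umap
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

# label -> a handful of training samples per class (toy data, just to show the shapes)
training_data = {
    "billing": ["refund not processed", "charged twice this month"],
    "shipping": ["package arrived late", "tracking number not working"],
    "account": ["cannot reset my password", "update my email address"],
}

raw = {cls: encoder.encode(texts) for cls, texts in training_data.items()}
reducer = umap.UMAP(n_neighbors=2, n_components=2, random_state=42)
reduced = reducer.fit_transform(np.vstack(list(raw.values())))

# Rebuild a {class: average reduced embedding} map, which is what select_subsets() expects
embeddings, offset = {}, 0
for cls, texts in training_data.items():
    embeddings[cls] = reduced[offset:offset + len(texts)].mean(axis=0)
    offset += len(texts)

subsets = select_subsets(embeddings, n=2)   # each subset holds at most n classes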

Semi-supervised classification with noise oversampling

Cascade this after the initial label subset formation — i.e., this classifier is only classifying between a given subset of classes.

Picture this: when you have low amounts of training data, you absolutely cannot create a hold-out set that is meaningful for evaluation. Should you do it at all? How do you know if your classifier is working well?

We approached this problem slightly differently — we defined the fundamental job of a semi-supervised classifier to be pre-emptive classification of a sample. This means that regardless of what a sample gets classified as, it will be ‘verified’ and ‘corrected’ at a later stage: this classifier only needs to identify what needs to be verified.

As such, we created a design for how it would treat its data:

  • n+1 classes, where the last class is noise
  • noise: data from classes that are NOT in the current classifier’s purview. The noise class is oversampled to be 2x the average size of the data for the classifier’s labels

Oversampling on noise is a faux-safety measure, to ensure that adjacent data that belongs to another class is most likely predicted as noise instead of slipping through for verification.
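A hedged sketch of that data design is below; the function and variable names are mine, while the noise class and the 2x oversampling factor come from the description above:

import random

def build_subset_training_set(subset_data, out_of_subset_data, noise_label="noise", factor=2):
    """Assemble (text, label) pairs for one subset's classifier, with an oversampled noise class.

    subset_data: {label: [samples]} for labels inside this subset
    out_of_subset_data: [samples] drawn from labels that belong to other subsets
    """
    examples = [(text, label) for label, samples in subset_data.items() for text in samples]

    # Noise class size = factor x the average per-label sample count in this subset
    avg_class_size = sum(len(samples) for samples in subset_data.values()) / len(subset_data)
    noise_size = min(int(factor * avg_class_size), len(out_of_subset_data))
    noise_samples = random.sample(out_of_subset_data, noise_size)

    examples += [(text, noise_label) for text in noise_samples]
    random.shuffle(examples)
    return examples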

How do you check if this classifier is working well? In our experiments, we define this as the number of ‘uncertain’ samples in a classifier’s predictions. Using uncertainty sampling and information gain principles, we were effectively able to gauge whether a classifier is ‘learning’ or not, which acts as a pointer towards classification performance. This classifier is consistently retrained unless there is an inflection point in the number of uncertain samples predicted, or only a marginal delta of information is being added by new samples.
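As one possible concretisation of that stopping rule (the 5% threshold is an assumption, not a value from the experiments):

def should_keep_training(uncertain_counts_per_round, min_relative_change=0.05):
    """Hedged sketch: keep retraining only while the count of 'uncertain' samples still moves."""
    if len(uncertain_counts_per_round) < 2:
        return True
    previous, current = uncertain_counts_per_round[-2], uncertain_counts_per_round[-1]
    if previous == 0:
        return False
    return abs(previous - current) / previous >= min_relative_change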

Proxy active learning via an LLM agent

This is the heart of the approach — using an LLM as a proxy for a human validator. The human-validator approach we are referring to is Active Labelling.

Let’s get an intuitive understanding of Active Labelling:

  • Use an ML model to learn on a sample input dataset, predict on a large set of datapoints
  • For the predictions given on the datapoints, a subject-matter expert (SME) evaluates ‘validity’ of predictions
  • Recursively, new ‘corrected’ samples are added as training data to the ML model
  • The ML model consistently learns/retrains, and makes predictions until the SME is satisfied by the quality of predictions

For Active Labelling to work, there are expectations involved for an SME:

  • when we expect a human expert to ‘validate’ an output sample, the expert understands what the task is
  • a human expert will use judgement to evaluate ‘what else’ definitely belongs to a label L when deciding if a new sample should belong to L

Given these expectations and intuitions, we can ‘mimic’ these using an LLM:

  • give the LLM an ‘understanding’ of what each label means. This can be done by using a larger model to critically evaluate the relationship between {label: data mapped to label} for all labels. In our experiments, this was done using a 32B variant of DeepSeek that was self-hosted.
Giving an LLM the capability to understand ‘why, what, and how’
  • Instead of predicting what is the correct label, leverage the LLM to identify if a prediction is ‘valid’ or ‘invalid’ only (i.e., LLM only has to answer a binary query).
  • Reinforce the idea of what other valid samples for the label look like, i.e., for every pre-emptively predicted label for a sample, dynamically source the c closest samples from its training (guaranteed valid) set when prompting for validation (a sketch of such a judge follows this list).
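Here is a hedged sketch of what such a judge could look like; the prompt wording and the helper callables (chat_completion, get_closest_valid_examples, label_meanings) are assumptions, not the exact setup used in the experiments:

def make_llm_judge(chat_completion, label_meanings, get_closest_valid_examples, c=3):
    """Return a binary judge(sample, predicted_class) -> bool, built from caller-supplied pieces."""
    def llm_judge(sample, predicted_class):
        examples = get_closest_valid_examples(sample, predicted_class, c)   # c nearest valid samples
        prompt = (
            f"Label: {predicted_class}\n"
            f"What the label means: {label_meanings[predicted_class]}\n"
            f"Known valid examples of this label: {examples}\n"
            f"Candidate sample: {sample}\n"
            "Reply with exactly True or False: does the candidate belong to this label?"
        )
        return chat_completion(prompt).strip().lower().startswith("true")
    return llm_judge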

The result? A cost-effective framework that relies on a fast, cheap classifier to make pre-emptive classifications, and an LLM that verifies these using (meaning of the label + dynamically sourced training samples that are similar to the current classification):

import math

def calculate_uncertainty(clf, sample):
    predicted_probabilities = clf.predict_proba(sample.reshape(1, -1))[0]  # Reshape sample for predict_proba
    # Shannon entropy of the predicted distribution; skip zero probabilities to avoid log(0)
    uncertainty = -sum(p * math.log(p, 2) for p in predicted_probabilities if p > 0)
    return uncertainty


def select_informative_samples(clf, data, k):
    informative_samples = []
    uncertainties = [calculate_uncertainty(clf, sample) for sample in data]

    # Sort data by descending order of uncertainty
    sorted_data = sorted(zip(data, uncertainties), key=lambda x: x[1], reverse=True)

    # Get top k samples with highest uncertainty
    for sample, uncertainty in sorted_data[:k]:
        informative_samples.append(sample)

    return informative_samples


def proxy_label(clf, llm_judge, k, testing_data):
    #llm_judge - any LLM with a system prompt tuned for verifying if a sample belongs to a class. Expected output is a bool : True or False. True verifies the original classification, False refutes it
    predicted_classes = clf.predict(testing_data)

    # Select k most informative samples using uncertainty sampling
    informative_samples = select_informative_samples(clf, testing_data, k)

    # List to store correct samples
    voted_data = []

    # Evaluate informative samples with the LLM judge
    for sample in informative_samples:
        sample_index = testing_data.tolist().index(sample.tolist()) # changed from testing_data.index(sample) because of numpy array type issue
        predicted_class = predicted_classes[sample_index]

        # Check if LLM judge agrees with the prediction
        if llm_judge(sample, predicted_class):
            # If correct, add the sample to voted data
            voted_data.append(sample)

    # Return the list of correct samples with proxy labels
    return voted_data

By feeding the valid samples (voted_data) to our classifier under controlled parameters, we achieve the ‘recursive’ part of our algorithm:

Recursive Expert Delegation: R.E.D.
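As a hedged sketch of a single recursion step (train_classifier is a placeholder for however the subset classifier is re-fit; everything else reuses the functions above):

import numpy as np

def recursion_step(clf, llm_judge, train_X, train_y, unlabeled_X, k, train_classifier):
    voted = proxy_label(clf, llm_judge, k, unlabeled_X)
    if not voted:
        return clf, train_X, train_y, False              # saturation: nothing validated this round
    voted = np.array(voted)
    voted_labels = clf.predict(voted)                    # keep the now LLM-verified predicted labels
    new_X = np.vstack([train_X, voted])
    new_y = np.concatenate([train_y, voted_labels])
    return train_classifier(new_X, new_y), new_X, new_y, True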

By doing this, we were able to achieve close-to-human-expert validation numbers on controlled multi-class datasets. Experimentally, R.E.D. scales up to 1,000 classes while maintaining a competent degree of accuracy almost on par with human experts (90%+ agreement).

I believe this is a significant achievement in applied ML, and has real-world uses for production-grade expectations of cost, speed, scale, and adaptability. The technical report, to be published later this year, will highlight relevant code samples as well as the experimental setups used to achieve these results.

All images, unless otherwise noted, are by the author

Interested in more details? Reach out to me over Medium or email for a chat!
