
Multilayer Perceptron Explained with a Real-Life Example and Python Code: Sentiment Analysis

References

[1] LeCun, Y., Bengio, Y. & Hinton, G. Deep Learning. Nature 521, 436–444 (2015)
[2] Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning. The MIT Press (2016)
[3] McCulloch, W.S. & Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943)
[4] Rosenblatt, F. The Perceptron, a Perceiving and Recognizing Automaton (Project Para). Cornell Aeronautical Laboratory 85, 460–461 (1957)
[5] Minsky, M.L. & Papert, S.A. Perceptrons. Cambridge, MA: MIT Press (1969)
[6] James, G., Witten, D., Hastie, T. & Tibshirani, R. An Introduction to Statistical Learning: with Applications in R. New York: Springer (2013)
[7] Rumelhart, D., Hinton, G. & Williams, R. Learning Representations by Back-propagating Errors. Nature 323 (6088), 533–536 (1986)

This is the first article in a series dedicated to Deep Learning, a group of Machine Learning methods with roots dating back to the 1940s. Deep Learning has gained attention in recent decades for its groundbreaking applications in areas like image classification, speech recognition, and machine translation.

Stay tuned if you’d like to see different Deep Learning algorithms explained with real-life examples and some Python code.


This series of articles focuses on Deep Learning algorithms, which have been getting a lot of attention in recent years as their applications take center stage in our day-to-day lives: self-driving cars, voice assistants, face recognition, and the ability to transcribe speech into text.

These applications are just the tip of the iceberg. A long path of research and incremental applications has been paved since the early 1940s. The improvements and widespread applications we’re seeing today are the culmination of hardware and data availability finally catching up with the computational demands of these complex methods.

In traditional Machine Learning, anyone building a model either has to be an expert in the problem area they are working on or team up with one. Without this expert knowledge, designing and engineering features becomes an increasingly difficult challenge[1]. The quality of a Machine Learning model depends on the quality of the dataset, but also on how well the features encode the patterns in the data.

Deep Learning algorithms use Artificial Neural Networks as their main structure. What sets them apart from other algorithms is that they don’t require expert input during the feature design and engineering phase. Neural Networks can learn the characteristics of the data.

Deep Learning algorithms take in the dataset and learn its patterns: they learn how to represent the data with features they extract on their own. Then they combine different representations of the dataset, each one identifying a specific pattern or characteristic, into a more abstract, high-level representation of the dataset[1]. This hands-off approach, without much human intervention in feature design and extraction, allows algorithms to adapt much faster to the data at hand[2].

Neural Networks are inspired by, but not necessarily an exact model of, the structure of the brain. There’s a lot we still don’t know about the brain and how it works, but it has been serving as inspiration in many scientific areas due to its ability to develop intelligence. And although there are neural networks that were created with the sole purpose of understanding how brains work, Deep Learning as we know it today is not intended to replicate how the brain works. Instead, Deep Learning focuses on enabling systems that learn multiple levels of pattern composition[1].

And, as with any scientific progress, Deep Learning didn’t start off with the complex structures and widespread applications you see in recent literature.

It all started with a basic structure, one that resembles the brain’s neuron.

In the early 1940s, Warren McCulloch, a neurophysiologist, teamed up with logician Walter Pitts to create a model of how brains work. It was a simple linear model that produced a positive or negative output, given a set of inputs and weights.

McCulloch and Pitts neuron model. (Image by author)

This model of computation was intentionally called a neuron, because it tried to mimic how the core building block of the brain worked. Just like brain neurons receive electrical signals, McCulloch and Pitts’ neuron received inputs and, if these signals were strong enough, passed them on to other neurons.

A neuron and its different components. (Image Credits)

The first application of the neuron replicated a logic gate, where you have one or two binary inputs, and a boolean function that only gets activated given the right inputs and weights.
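As a minimal sketch (not the original 1943 formalism), a McCulloch-Pitts-style neuron with hand-set weights and threshold can reproduce a logic gate, here an AND gate:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: fires (returns 1) only when the
    weighted sum of its binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights of 1 and a threshold of 2, the neuron behaves like an AND gate:
# it only fires when both inputs are active.
def and_gate(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)
```

Lowering the threshold to 1 turns the same neuron into an OR gate, which is exactly the point of the model: the weights and threshold, not learning, determine the behavior.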

However, this model had a problem: it couldn’t learn like the brain. The only way to get the desired output was if the weights, which work as catalysts in the model, were set beforehand.

The nervous system is a net of neurons, each having a soma and an axon […] At any instant a neuron has some threshold, which excitation must exceed to initiate an impulse[3].

It was only a decade later that Frank Rosenblatt extended this model, and created an algorithm that could learn the weights in order to generate an output.

Building on McCulloch and Pitts’ neuron, Rosenblatt developed the Perceptron.

Although today the Perceptron is widely recognized as an algorithm, it was initially intended as an image recognition machine. It gets its name from performing the human-like function of perception, seeing and recognizing images.

In particular, interest has been centered on the idea of a machine which would be capable of conceptualizing inputs impinging directly from the physical environment of light, sound, temperature, etc. — the “phenomenal world” with which we are all familiar — rather than requiring the intervention of a human agent to digest and code the necessary information.[4]

Rosenblatt’s perceptron machine relied on a basic unit of computation, the neuron. Just like in previous models, each neuron has a cell that receives a series of pairs of inputs and weights.

The major difference in Rosenblatt’s model is that inputs are combined in a weighted sum and, if the weighted sum exceeds a predefined threshold, the neuron fires and produces an output.

Perceptrons neuron model (left) and threshold logic (right). (Image by author)

The threshold T represents the activation function. If the weighted sum of the inputs is greater than zero, the neuron outputs the value 1; otherwise, the output value is zero.
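In code, this threshold logic might look like the following sketch, with a bias term standing in for the threshold:

```python
def perceptron_output(inputs, weights, bias):
    """Threshold logic: output 1 if the weighted sum (plus bias) is
    greater than zero, otherwise output 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if weighted_sum > 0 else 0
```

For example, with weights of 0.5 and a bias of -0.7, the neuron fires only when both inputs are 1, because only then does the weighted sum exceed zero.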

With this discrete output, controlled by the activation function, the perceptron can be used as a binary classification model, defining a linear decision boundary. It finds the separating hyperplane that minimizes the distance between misclassified points and the decision boundary[6].

Perceptron’s loss function. (Image by author)

To minimize this distance, Perceptron uses Stochastic Gradient Descent as the optimization function.

If the data is linearly separable, it is guaranteed that Stochastic Gradient Descent will converge in a finite number of steps.

The last piece that Perceptron needs is the activation function, the function that determines if the neuron will fire or not.

Initial Perceptron models used the sigmoid function, and just by looking at its shape, it makes a lot of sense!

The sigmoid function maps any real input to a value between 0 and 1, and it encodes a non-linear function.

The neuron can receive negative numbers as input, and it will still be able to produce an output between 0 and 1.

Sigmoid function (Image by author).

But if you look at Deep Learning papers and algorithms from the last decade, you’ll see that most of them use the Rectified Linear Unit (ReLU) as the neuron’s activation function.

ReLU function. (Image by author)

The reason ReLU became more widely adopted is that it allows better optimization with Stochastic Gradient Descent, more efficient computation, and is scale-invariant, meaning its characteristics are not affected by the scale of the input.
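Both activation functions are one-liners; as a quick sketch, here they are side by side, with a comment noting ReLU's scale property (relu(2x) = 2·relu(x)), which the sigmoid does not have:

```python
import math

def sigmoid(x):
    # Squashes any real input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged; clips negatives to 0.
    # Scale-invariant in the sense that relu(c * x) == c * relu(x) for c > 0.
    return max(0.0, x)
```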

Putting it all together

The neuron receives inputs and picks an initial set of weights at random. These are combined in a weighted sum, and then ReLU, the activation function, determines the value of the output.

Perceptrons neuron model (left) and activation function (right). (Image by Author)

But you might be wondering: doesn’t the Perceptron actually learn the weights?

It does! The Perceptron uses Stochastic Gradient Descent to find, or you might say learn, the set of weights that minimizes the distance between the misclassified points and the decision boundary. Once Stochastic Gradient Descent converges, the dataset is separated into two regions by a linear hyperplane.
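To make the learning step concrete, here is a minimal sketch of the classic perceptron update rule (a special case of stochastic gradient descent on the perceptron loss), trained on a tiny linearly separable toy set of my own invention:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: for each misclassified point,
    nudge the weights and bias toward classifying it correctly."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are 0 or 1
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred               # 0 when correct, +/-1 when wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable toy problem: points with x0 + x1 > 1 are positive.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
y = [0, 0, 0, 1, 1]
w, b = train_perceptron(X, y)
```

On linearly separable data like this, the updates stop once every point sits on the correct side of the learned boundary, which is exactly the convergence guarantee mentioned above.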

Although it was said the Perceptron could represent any circuit and logic, the biggest criticism was that it couldn’t represent the XOR gate (exclusive OR), where the gate only returns 1 if the inputs are different.

This was proved by Minsky and Papert almost a decade later, in 1969[5], and it highlights the fact that the Perceptron, with only one neuron, can’t be applied to non-linear data.

The Multilayer Perceptron was developed to tackle this limitation. It is a neural network where the mapping between inputs and output is non-linear.

A Multilayer Perceptron has input and output layers, and one or more hidden layers with many neurons stacked together. And while in the Perceptron the neuron must have an activation function that imposes a threshold, like ReLU or sigmoid, neurons in a Multilayer Perceptron can use any arbitrary activation function.
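To see why the hidden layer matters, here is one classic hand-wired sketch (weights chosen by hand, not learned) of a two-neuron hidden layer that computes XOR, the very function a single-neuron Perceptron cannot represent:

```python
def relu(x):
    return max(0.0, x)

def xor_mlp(x1, x2):
    """A hand-wired two-neuron hidden layer is enough to compute XOR."""
    h1 = relu(x1 + x2)        # hidden neuron 1: counts active inputs
    h2 = relu(x1 + x2 - 1)    # hidden neuron 2: fires only when both are active
    return h1 - 2 * h2        # output: 1 only when exactly one input is active
```

The hidden neurons each carve out a linear region, and the output layer combines them into a non-linear decision, which is the essence of the Multilayer Perceptron.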

Multilayer Perceptron. (Image by author)

Multilayer Perceptron falls under the category of feedforward algorithms, because inputs are combined with the initial weights in a weighted sum and subjected to the activation function, just like in the Perceptron. But the difference is that each linear combination is propagated to the next layer.

Each layer feeds the next one with the result of its computation, its internal representation of the data. This goes all the way through the hidden layers to the output layer.

But there is more to it.

If the algorithm only computed the weighted sums in each neuron, propagated results to the output layer, and stopped there, it wouldn’t be able to learn the weights that minimize the cost function. If the algorithm only computed one iteration, there would be no actual learning.

This is where Backpropagation[7] comes into play.

Backpropagation is the learning mechanism that allows the Multilayer Perceptron to iteratively adjust the weights in the network, with the goal of minimizing the cost function.

There is one hard requirement for backpropagation to work properly: the function that combines inputs and weights in a neuron (for instance, the weighted sum) and the activation function (for instance, ReLU) must be differentiable. These functions must have a bounded derivative, because Gradient Descent is typically the optimization function used in the Multilayer Perceptron.
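As a sketch, the derivatives that backpropagation relies on are cheap to compute. One caveat worth noting: strictly speaking, ReLU is not differentiable at zero, so in practice a subgradient (0 or 1) is used there:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # The sigmoid's derivative has the convenient closed form s(x) * (1 - s(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # ReLU is differentiable everywhere except at 0; frameworks
    # conventionally pick 0 (or 1) as the subgradient there.
    return 1.0 if x > 0 else 0.0
```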

In each iteration, after the weighted sums are forwarded through all layers, the gradient of the Mean Squared Error is computed across all input and output pairs. Then this gradient is propagated back, layer by layer, from the output layer to the first hidden layer, and each layer’s weights are updated in the direction that reduces the error. That’s how the error signal travels back to the starting point of the neural network!

One iteration of Gradient Descent. (Image by author)

This process keeps going until the gradient for each input-output pair has converged, meaning the newly computed gradient hasn’t changed by more than a specified convergence threshold compared to the previous iteration.
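As an illustration of this loop, here is a minimal sketch of gradient descent on the Mean Squared Error of a one-parameter model y = w·x; the stopping rule used here (gradient close to zero) is a common variant of the convergence check described above:

```python
def mse_gradient(w, pairs):
    """Gradient of the Mean Squared Error of y_hat = w * x with
    respect to w, averaged over all input-output pairs."""
    n = len(pairs)
    return sum(2 * (w * x - y) * x for x, y in pairs) / n

def gradient_descent(pairs, w=0.0, lr=0.1, tolerance=1e-6):
    # Repeat the update w <- w - lr * gradient until the gradient is
    # smaller than the convergence threshold.
    while True:
        grad = mse_gradient(w, pairs)
        if abs(grad) < tolerance:
            return w
        w -= lr * grad

data = [(1, 2), (2, 4), (3, 6)]   # generated by y = 2x, so w should approach 2
```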

Let’s see this with a real-world example.

Your parents have a cozy bed and breakfast in the countryside with the traditional guestbook in the lobby. Every guest is welcome to write a note before they leave and, so far, very few leave without writing a short note or inspirational quote. Some even leave drawings of Molly, the family dog.

Summer season is getting to a close, which means cleaning time, before work starts picking up again for the holidays. In the old storage room, you’ve stumbled upon a box full of guestbooks your parents kept over the years. Your first instinct? Let’s read everything!

After reading a few pages, you just had a much better idea. Why not try to understand if guests left a positive or negative message?

You’re a Data Scientist, so this is the perfect task for a binary classifier.

So you picked a handful of guestbooks at random to use as a training set, transcribed all the messages, gave each one a classification of positive or negative sentiment, and then asked your cousins to classify them as well.

In Natural Language Processing tasks, some of the text can be ambiguous, so usually you have a corpus of text where the labels were agreed upon by 3 experts, to avoid ties.

Sample of guest messages. (Image by author)

With the final labels assigned to the entire corpus, you decided to fit the data to a Perceptron, the simplest neural network of all.

But before building the model itself, you needed to turn that free text into a format the Machine Learning model could work with.

In this case, you represented the text from the guestbooks as vectors using Term Frequency — Inverse Document Frequency (TF-IDF). This method encodes any kind of text as a statistic of how frequent each word, or term, is in each message, discounted by how common it is across the entire corpus.

In Python, you used the TfidfVectorizer class from scikit-learn, removing English stop words and even applying L1 normalization.

TfidfVectorizer(stop_words='english', lowercase=True, norm='l1')

On to binary classification with Perceptron!

To accomplish this, you used Perceptron completely out-of-the-box, with all the default parameters.

Python source code to run Perceptron on a corpus. (Image by author)
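The original code was shown as an image, so here is a hedged reconstruction of what a pipeline matching that description might look like; the function and variable names are my own, not the author's:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Perceptron

def build_perceptron(train_texts, train_labels, test_texts, test_labels):
    # Vectorize the corpus with TF-IDF, as described above.
    vectorizer = TfidfVectorizer(stop_words='english', lowercase=True, norm='l1')
    train_features = vectorizer.fit_transform(train_texts)
    test_features = vectorizer.transform(test_texts)

    # Fit an out-of-the-box Perceptron with all default parameters.
    model = Perceptron()
    model.fit(train_features, train_labels)

    # score() reports the mean accuracy on the held-out sentences.
    return model.score(test_features, test_labels)
```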

After vectorizing the corpus, fitting the model, and testing it on sentences the model had never seen before, you realize that the mean accuracy of this model is 67%.

Mean accuracy of the Perceptron model. (Image by author)

That’s not bad for a simple neural network like Perceptron!

On average, the Perceptron will misclassify roughly 1 in every 3 messages your parents’ guests wrote. This makes you wonder whether the data is simply not linearly separable, and whether you could achieve a better result with a slightly more complex neural network.

Using scikit-learn’s Multilayer Perceptron, you decided to keep it simple and tweak just a few parameters:

  • Activation function: ReLU, specified with the parameter activation='relu'
  • Optimization function: Stochastic Gradient Descent, specified with the parameter solver='sgd'
  • Learning rate: Inverse Scaling, specified with the parameter learning_rate='invscaling'
  • Number of iterations: 20, specified with the parameter max_iter=20
Python source code to run MultiLayer Perceptron on a corpus. (Image by author)
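Again, the original code was an image; a sketch of a helper matching the description might look like the following. Note that num_neurons is a parameter of this hypothetical helper, not of scikit-learn itself: MLPClassifier takes the layer sizes through hidden_layer_sizes (its actual default is a single hidden layer of 100 neurons), so the three-layer structure is set explicitly here:

```python
from sklearn.neural_network import MLPClassifier

def build_mlp_classifier(train_features, train_targets,
                         test_features, test_targets, num_neurons=2):
    # Three hidden layers with num_neurons each, mirroring the setup
    # described in the text.
    model = MLPClassifier(
        hidden_layer_sizes=(num_neurons,) * 3,
        activation='relu',
        solver='sgd',
        learning_rate='invscaling',
        max_iter=20,
        verbose=True,   # print the loss at each iteration
    )
    model.fit(train_features, train_targets)
    return model.score(test_features, test_targets)
```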

In this setup, the Multilayer Perceptron has three hidden layers, and you want to see how the number of neurons in each layer impacts performance, so you start off with 2 neurons per hidden layer, setting the parameter num_neurons=2.

Finally, to see the value of the loss function at each iteration, you also added the parameter verbose=True.

Mean accuracy of the Multilayer Perceptron model with 3 hidden layers, each with 2 nodes. (Image by author)

In this case, the Multilayer Perceptron, with 3 hidden layers of 2 nodes each, performs much worse than the simple Perceptron.

It converges relatively fast, in 24 iterations, but its mean accuracy is poor.

While the Perceptron misclassified on average 1 in every 3 sentences, this Multilayer Perceptron does roughly the opposite: on average, it predicts the correct label for only 1 in every 3 sentences.

What about if you added more capacity to the neural network? What happens when each hidden layer has more neurons to learn the patterns of the dataset?

Using the same method, you can simply change the num_neurons parameter and set it, for instance, to 5.

buildMLPerceptron(train_features, test_features, train_targets, test_targets, num_neurons=5)

Adding more neurons to the hidden layers definitely improved model accuracy!

Mean accuracy of the Multilayer Perceptron model with 3 hidden layers, each with 5 nodes. (Image by author)

You kept the same neural network structure, 3 hidden layers, but with the increased capacity of 5 neurons per layer, the model got better at capturing the patterns in the data. It converged much faster, and its mean accuracy doubled!

In the end, for this specific case and dataset, the Multilayer Perceptron performs only as well as a simple Perceptron. But it was definitely a great exercise to see how changing the number of neurons in each hidden layer impacts model performance.

It’s not a perfect model, there’s possibly some room for improvement, but the next time a guest leaves a message that your parents are not sure if it’s positive or negative, you can use Perceptron to get a second opinion.

The first Deep Learning algorithm was very simple compared to the current state of the art. The Perceptron is a neural network with only one neuron, and it can only learn linear relationships between the input and output data.
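That single neuron's learning rule can be sketched in a few lines of plain Python. This is an illustration of the classic Rosenblatt update, not scikit-learn's implementation, and the AND-gate data is a hypothetical example of a linearly separable problem.

```python
def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """samples: list of feature lists; labels: 1 or 0. Returns learned weights and bias."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Fire (output 1) if the weighted sum crosses the threshold
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - prediction       # 0 if correct, otherwise +1 or -1
            # Nudge the weights toward the correct answer
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND gate: linearly separable, so the rule converges to a separating line
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
```

Because the AND gate is linearly separable, a few epochs are enough for the weights to stop changing; on non-separable data (like XOR), this rule never settles, which is exactly the limitation the Multilayer Perceptron removes.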

With the Multilayer Perceptron, however, horizons are expanded: this neural network can have many layers of neurons and is ready to learn more complex patterns.

Hope you've enjoyed learning about these algorithms!

Stay tuned for the next articles in this series, where we continue to explore Deep Learning algorithms.

  1. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015)
  2. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. The MIT Press.
  3. McCulloch, W.S., Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943)
  4. Frank Rosenblatt. The Perceptron, a Perceiving and Recognizing Automaton Project Para. Cornell Aeronautical Laboratory 85, 460–461 (1957)
  5. Minsky M. L. and Papert S. A. 1969. Perceptrons. Cambridge, MA: MIT Press.
  6. Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An Introduction to Statistical Learning: with Applications in R. New York: Springer.
  7. D. Rumelhart, G. Hinton, and R. Williams. Learning Representations by Back-propagating Errors. Nature 323 (6088): 533–536 (1986).