
How AI can help supercharge creativity

Sometimes Lizzie Wilson shows up to a rave with her AI sidekick.

One weeknight this past February, Wilson plugged her laptop into a projector that threw her screen onto the wall of a low-ceilinged loft space in East London. A small crowd shuffled in the glow of dim pink lights. Wilson sat down and started programming.

Techno clicks and whirs thumped from the venue’s speakers. The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen—tweaking sounds, looping beats, pulling a face when she messed up.

Wilson is a live coder. Instead of using purpose-built software like most electronic music producers, live coders create music by writing the code to generate it on the fly. It’s an improvised performance art known as algorave.

“It’s kind of boring when you go to watch a show and someone’s just sitting there on their laptop,” she says. “You can enjoy the music, but there’s a performative aspect that’s missing. With live coding, everyone can see what it is that I’m typing. And when I’ve had my laptop crash, people really like that. They start cheering.”

Taking risks is part of the vibe. And so Wilson likes to dial up her performances one more notch by riffing off what she calls a live-coding agent, a generative AI model that comes up with its own beats and loops to add to the mix. Often the model suggests sound combinations that Wilson hadn’t thought of. “You get these elements of surprise,” she says. “You just have to go for it.”

ADELA FESTIVAL

Wilson, a researcher at the Creative Computing Institute at the University of the Arts London, is just one of many working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves.
She and her colleagues built the live-coding agent to explore how artificial intelligence can be used to support human artistic endeavors—in Wilson’s case, musical improvisation.

It’s a vision that goes beyond the promise of existing generative tools put out by companies like OpenAI and Google DeepMind. Those can automate a striking range of creative tasks and offer near-instant gratification—but at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.

And so they are looking for ways to inject human creativity back into the process. The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.

Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers.

Explosion of creativity

There’s no one way to be creative, but we all do it. We make everything from memes to masterpieces, infant doodles to industrial designs. There’s a mistaken belief, typically among adults, that creativity is something you grow out of. But being creative—whether cooking, singing in the shower, or putting together super-weird TikToks—is still something that most of us do just for the fun of it. It doesn’t have to be high art or a world-changing idea (and yet it can be). Creativity is basic human behavior; it should be celebrated and encouraged.

When generative text-to-image models like Midjourney, OpenAI’s DALL-E, and the popular open-source Stable Diffusion arrived, they sparked an explosion of what looked a lot like creativity. Millions of people were now able to create remarkable images of pretty much anything, in any style, with the click of a button. Text-to-video models came next.
Now startups like Udio are developing similar tools for music. Never before have the fruits of creation been within reach of so many.

But for a number of researchers and artists, the hype around these tools has warped the idea of what creativity really is. “If I ask the AI to create something for me, that’s not me being creative,” says Jeba Rezwana, who works on co-creativity at Towson University in Maryland. “It’s a one-shot interaction: You click on it and it generates something and that’s it. You cannot say ‘I like this part, but maybe change something here.’ You cannot have a back-and-forth dialogue.”

Rezwana is referring to the way most generative models are set up. You can give the tools feedback and ask them to have another go. But each new result is generated from scratch, which can make it hard to nail exactly what you want. As the filmmaker Walter Woodman put it last year after his art collective Shy Kids made a short film with OpenAI’s text-to-video model for the first time: “Sora is a slot machine as to what you get back.”

What’s more, the latest versions of some of these generative tools do not even use your submitted prompt as is to produce an image or video (at least not on their default settings). Before a prompt is sent to the model, the software edits it—often by adding dozens of hidden words—to make it more likely that the generated image will appear polished.

“Extra things get added to juice the output,” says Mike Cook, a computational creativity researcher at King’s College London. “Try asking Midjourney to give you a bad drawing of something—it can’t do it.” These tools do not give you what you want; they give you what their designers think you want.

COURTESY OF MIKE COOK

All of which is fine if you just need a quick image and don’t care too much about the details, says Nick Bryan-Kinns, also at the Creative Computing Institute: “Maybe you want to make a Christmas card for your family or a flyer for your community cake sale.
These tools are great for that.”

In short, existing generative models have made it easy to create, but they have not made it easy to be creative. And there’s a big difference between the two. For Cook, relying on such tools could in fact harm people’s creative development in the long run. “Although many of these creative AI systems are promoted as making creativity more accessible,” he wrote in a paper published last year, they might instead have “adverse effects on their users in terms of restricting their ability to innovate, ideate, and create.” Given how much generative models have been championed for putting creative abilities at everyone’s fingertips, the suggestion that they might in fact do the opposite is damning.

In the game Disc Room, players navigate a room of moving buzz saws.
DEVOLVER DIGITAL

Cook used AI to design a new level for the game. The result was a room where none of the discs actually moved.
COURTESY OF MIKE COOK

He’s far from the only researcher worrying about the cognitive impact of these technologies. In February a team at Microsoft Research Cambridge published a report concluding that generative AI tools “can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” The researchers found that with the use of generative tools, people’s effort “shifts from task execution to task stewardship.”

Cook is concerned that generative tools don’t let you fail—a crucial part of learning new skills. We have a habit of saying that artists are gifted, says Cook. But the truth is that artists work at their art, developing skills over months and years.

“If you actually talk to artists, they say, ‘Well, I got good by doing it over and over and over,’” he says. “But failure sucks. And we’re always looking at ways to get around that.”

Generative models let us skip the frustration of doing a bad job.
“Unfortunately, we’re removing the one thing that you have to do to develop creative skills for yourself, which is fail,” says Cook. “But absolutely nobody wants to hear that.”

Surprise me

And yet it’s not all bad news. Artists and researchers are buzzing at the ways generative tools could empower creators, pointing them in surprising new directions and steering them away from dead ends. Cook thinks the real promise of AI will be to help us get better at what we want to do rather than doing it for us. For that, he says, we’ll need to create new tools, different from the ones we have now. “Using Midjourney does not do anything for me—it doesn’t change anything about me,” he says. “And I think that’s a wasted opportunity.”

Ask a range of researchers studying creativity to name a key part of the creative process and many will say: reflection. It’s hard to define exactly, but reflection is a particular type of focused, deliberate thinking. It’s what happens when a new idea hits you. Or when an assumption you had turns out to be wrong and you need to rethink your approach. It’s the opposite of a one-shot interaction.

Looking for ways that AI might support or encourage reflection—asking it to throw new ideas into the mix or challenge ideas you already hold—is a common thread across co-creativity research. If generative tools like DALL-E make creation frictionless, the aim here is to add friction back in. “How can we make art without friction?” asks Elisa Giaccardi, who studies design at the Polytechnic University of Milan in Italy. “How can we engage in a truly creative process without material that pushes back?”

Take Wilson’s live-coding agent. She claims that it pushes her musical improvisation in directions she might not have taken by herself. Trained on public code shared by the wider live-coding community, the model suggests snippets of code that are closer to other people’s styles than her own. This makes it more likely to produce something unexpected.
“Not because you couldn’t produce it yourself,” she says. “But the way the human brain works, you tend to fall back on repeated ideas.”

Last year, Wilson took part in a study run by Bryan-Kinns and his colleagues in which they surveyed six experienced musicians as they used a variety of generative models to help them compose a piece of music. The researchers wanted to get a sense of what kinds of interactions with the technology were useful and which were not.

The participants all said they liked it when the models made surprising suggestions, even when those were the result of glitches or mistakes. Sometimes the results were simply better. Sometimes the process felt fresh and exciting. But a few people struggled with giving up control. It was hard to direct the models to produce specific results or to repeat results that the musicians had liked. “In some ways it’s the same as being in a band,” says Bryan-Kinns. “You need to have that sense of risk and a sense of surprise, but you don’t want it totally random.”

Alternative designs

Cook comes at surprise from a different angle: He coaxes unexpected insights out of AI tools that he has developed to co-create video games. One of his tools, Puck, which was first released in 2022, generates designs for simple shape-matching puzzle games like Candy Crush or Bejeweled. A lot of Puck’s designs are experimental and clunky—don’t expect it to come up with anything you are ever likely to play. But that’s not the point: Cook uses Puck—and a newer tool called Pixie—to explore what kinds of interactions people might want to have with a co-creative tool.

Pixie can read computer code for a game and tweak certain lines to come up with alternative designs. Not long ago, Cook was working on a copy of a popular game called Disc Room, in which players have to cross a room full of moving buzz saws. He asked Pixie to help him come up with a design for a level that skilled and unskilled players would find equally hard.
Pixie designed a room where none of the discs actually moved. Cook laughs: It’s not what he expected. “It basically turned the room into a minefield,” he says. “But I thought it was really interesting. I hadn’t thought of that before.”

COURTESY OF ANNE ARZBERGER

Researcher Anne Arzberger developed experimental AI tools to come up with gender-neutral toy designs.

Pushing back on assumptions, or being challenged, is part of the creative process, says Anne Arzberger, a researcher at the Delft University of Technology in the Netherlands. “If I think of the people I’ve collaborated with best, they’re not the ones who just said ‘Yes, great’ to every idea I brought forth,” she says. “They were really critical and had opposing ideas.”

She wants to build tech that provides a similar sounding board. As part of a project called Creating Monsters, Arzberger developed two experimental AI tools that help designers find hidden biases in their designs. “I was interested in ways in which I could use this technology to access information that would otherwise be difficult to access,” she says.

For the project, Arzberger and her colleagues (including Giaccardi) looked at the problem of designing toy figures that would be gender neutral. They used Teachable Machine, a web app built by Google researchers in 2017 that makes it easy to train your own machine-learning model to classify different inputs, such as images. They trained this model with a few dozen images that Arzberger had labeled as being masculine, feminine, or gender neutral.

Arzberger then asked the model to identify the genders of new candidate toy designs. She found that quite a few designs were judged to be feminine even when she had tried to make them gender neutral. She felt that her views of the world—her own hidden biases—were being exposed. But the tool was often right: It challenged her assumptions and helped the team improve the designs.
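The audit loop Arzberger describes (train a small classifier on a few dozen labeled examples, then let it judge new candidates) can be sketched in miniature. The snippet below is an illustrative stand-in rather than her actual Teachable Machine pipeline: it uses a nearest-centroid classifier over invented numeric design features instead of images, and the feature values, labels, and toy names are all hypothetical.

```python
from statistics import mean

def train_centroids(examples):
    """examples: {label: [feature_vector, ...]} -> {label: centroid}."""
    return {
        label: [mean(dim) for dim in zip(*vectors)]
        for label, vectors in examples.items()
    }

def classify(centroids, design):
    """Return the label whose centroid is closest to the design."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], design))

# Hypothetical training data: each design is reduced to two made-up
# features (say, color saturation and shape roundness), labeled by hand.
training = {
    "masculine": [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3]],
    "feminine":  [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7]],
    "neutral":   [[0.5, 0.5], [0.4, 0.6], [0.6, 0.4]],
}
centroids = train_centroids(training)

# Audit candidate designs the team intended to be gender neutral:
# a mismatch flags a possible hidden bias in the design.
candidates = {"toy_a": [0.45, 0.55], "toy_b": [0.75, 0.8]}
for name, features in candidates.items():
    label = classify(centroids, features)
    if label != "neutral":
        print(f"{name}: intended neutral, classified {label}")
```

With these made-up numbers, `toy_b` comes back as reading feminine despite the neutral intent, which is exactly the kind of mismatch Arzberger used to surface hidden assumptions.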
The same approach could be used to assess all sorts of design characteristics, she says.

Arzberger then used a second model, a version of a tool made by the generative image and video startup Runway, to come up with gender-neutral toy designs of its own. First the researchers trained the model to generate and classify designs for male- and female-looking toys. They could then ask the tool to find a design that was exactly midway between the male and female designs it had learned.

Generative models can give feedback on designs that human designers might miss by themselves, she says: “We can really learn something.”

Taking control

The history of technology is full of breakthroughs that changed the way art gets made, from recipes for vibrant new paint colors to photography to synthesizers. In the 1960s, the Stanford researcher John Chowning spent years working on an esoteric algorithm that could manipulate the frequencies of computer-generated sounds. Stanford licensed the tech to Yamaha, which built it into its synthesizers—including the DX7, the cool new sound behind 1980s hits such as Tina Turner’s “The Best,” A-ha’s “Take On Me,” and Prince’s “When Doves Cry.”

Bryan-Kinns is fascinated by how artists and designers find ways to use new technologies. “If you talk to artists, most of them don’t actually talk about these AI generative models as a tool—they talk about them as a material, like an artistic material, like a paint or something,” he says. “It’s a different way of thinking about what the AI is doing.” He highlights the way some people are pushing the technology to do weird things it wasn’t designed to do. Artists often appropriate or misuse these kinds of tools, he says.

Bryan-Kinns points to the work of Terence Broad, another colleague of his at the Creative Computing Institute, as a favorite example.
Broad employs techniques like network bending, which involves inserting new layers into a neural network to produce glitchy visual effects in generated images, and generating images with a model trained on no data, which produces almost Rothko-like abstract swabs of color.

But Broad is an extreme case. Bryan-Kinns sums it up like this: “The problem is that you’ve got this gulf between the very commercial generative tools that produce super-high-quality outputs but you’ve got very little control over what they do—and then you’ve got this other end where you’ve got total control over what they’re doing but the barriers to use are high because you need to be somebody who’s comfortable getting under the hood of your computer.”

“That’s a small number of people,” he says. “It’s a very small number of artists.”

Arzberger admits that working with her models was not straightforward. Running them took several hours, and she’s not sure the Runway tool she used is even available anymore. Bryan-Kinns, Arzberger, Cook, and others want to take the kinds of creative interactions they are discovering and build them into tools that can be used by people who aren’t hardcore coders.

COURTESY OF TERENCE BROAD

Researcher Terence Broad creates dynamic images using a model trained on no data, which produces almost Rothko-like abstract color fields.

Finding the right balance between surprise and control will be hard, though. Midjourney can surprise, but it gives few levers for controlling what it produces beyond your prompt. Some have claimed that writing prompts is itself a creative act. “But no one struggles with a paintbrush the way they struggle with a prompt,” says Cook.

Faced with that struggle, Cook sometimes watches his students just go with the first results a generative tool gives them. “I’m really interested in this idea that we are priming ourselves to accept that whatever comes out of a model is what you asked for,” he says.
He is designing an experiment that will vary single words and phrases in similar prompts to test how much of a mismatch people see between what they expect and what they get.

But it’s early days yet. In the meantime, companies developing generative models typically emphasize results over process. “There’s this impressive algorithmic progress, but a lot of the time interaction design is overlooked,” says Rezwana.

For Wilson, the crucial choice in any co-creative relationship is what you do with what you’re given. “You’re having this relationship with the computer that you’re trying to mediate,” she says. “Sometimes it goes wrong, and that’s just part of the creative process.”

When AI gives you lemons—make art. “Wouldn’t it be fun to have something that was completely antagonistic in a performance—like, something that is actively going against you—and you kind of have an argument?” she says. “That would be interesting to watch, at least.”

Sometimes Lizzie Wilson shows up to a rave with her AI sidekick. 

One weeknight this past February, Wilson plugged her laptop into a projector that threw her screen onto the wall of a low-ceilinged loft space in East London. A small crowd shuffled in the glow of dim pink lights. Wilson sat down and started programming.

Techno clicks and whirs thumped from the venue’s speakers. The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen—tweaking sounds, looping beats, pulling a face when she messed up.  

Wilson is a live coder. Instead of using purpose-built software like most electronic music producers, live coders create music by writing the code to generate it on the fly. It’s an improvised performance art known as algorave.

“It’s kind of boring when you go to watch a show and someone’s just sitting there on their laptop,” she says. “You can enjoy the music, but there’s a performative aspect that’s missing. With live coding, everyone can see what it is that I’m typing. And when I’ve had my laptop crash, people really like that. They start cheering.”

Taking risks is part of the vibe. And so Wilson likes to dial up her performances one more notch by riffing off what she calls a live-coding agent, a generative AI model that comes up with its own beats and loops to add to the mix. Often the model suggests sound combinations that Wilson hadn’t thought of. “You get these elements of surprise,” she says. “You just have to go for it.”

two performers at a table with a disapproving cat covered in code on a screen behind them

ADELA FESTIVAL

Wilson, a researcher at the Creative Computing Institute at the University of the Arts London, is just one of many working on what’s known as co-­creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves. She and her colleagues built the live-­coding agent to explore how artificial intelligence can be used to support human artistic endeavors—in Wilson’s case, musical improvisation.

It’s a vision that goes beyond the promise of existing generative tools put out by companies like OpenAI and Google DeepMind. Those can automate a striking range of creative tasks and offer near-instant gratificationbut at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.

And so they are looking for ways to inject human creativity back into the process. The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.

Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers. 

Explosion of creativity

There’s no one way to be creative, but we all do it. We make everything from memes to masterpieces, infant doodles to industrial designs. There’s a mistaken belief, typically among adults, that creativity is something you grow out of. But being creative—whether cooking, singing in the shower, or putting together super-weird TikToks—is still something that most of us do just for the fun of it. It doesn’t have to be high art or a world-changing idea (and yet it can be). Creativity is basic human behavior; it should be celebrated and encouraged. 

When generative text-to-image models like Midjourney, OpenAI’s DALL-E, and the popular open-source Stable Diffusion arrived, they sparked an explosion of what looked a lot like creativity. Millions of people were now able to create remarkable images of pretty much anything, in any style, with the click of a button. Text-to-video models came next. Now startups like Udio are developing similar tools for music. Never before have the fruits of creation been within reach of so many.

But for a number of researchers and artists, the hype around these tools has warped the idea of what creativity really is. “If I ask the AI to create something for me, that’s not me being creative,” says Jeba Rezwana, who works on co-creativity at Towson University in Maryland. “It’s a one-shot interaction: You click on it and it generates something and that’s it. You cannot say ‘I like this part, but maybe change something here.’ You cannot have a back-and-forth dialogue.”

Rezwana is referring to the way most generative models are set up. You can give the tools feedback and ask them to have another go. But each new result is generated from scratch, which can make it hard to nail exactly what you want. As the filmmaker Walter Woodman put it last year after his art collective Shy Kids made a short film with OpenAI’s text-to-video model for the first time: “Sora is a slot machine as to what you get back.”

What’s more, the latest versions of some of these generative tools do not even use your submitted prompt as is to produce an image or video (at least not on their default settings). Before a prompt is sent to the model, the software edits it—often by adding dozens of hidden words—to make it more likely that the generated image will appear polished.

“Extra things get added to juice the output,” says Mike Cook, a computational creativity researcher at King’s College London. “Try asking Midjourney to give you a bad drawing of something—it can’t do it.” These tools do not give you what you want; they give you what their designers think you want.

Mike Cook

COURTESY OF MIKE COOK

All of which is fine if you just need a quick image and don’t care too much about the details, says Nick Bryan-Kinns, also at the Creative Computing Institute: “Maybe you want to make a Christmas card for your family or a flyer for your community cake sale. These tools are great for that.”

In short, existing generative models have made it easy to create, but they have not made it easy to be creative. And there’s a big difference between the two. For Cook, relying on such tools could in fact harm people’s creative development in the long run. “Although many of these creative AI systems are promoted as making creativity more accessible,” he wrote in a paper published last year, they might instead have “adverse effects on their users in terms of restricting their ability to innovate, ideate, and create.” Given how much generative models have been championed for putting creative abilities at everyone’s fingertips, the suggestion that they might in fact do the opposite is damning.  

screenshot from the game with overlapping saws
In the game Disc Room, players navigate a room of moving buzz saws.
DEVOLVER DIGITAL
screenshot from the AI-generated game with tiny saws
Cook used AI to design a new level for the game. The result was a room where none of the discs actually moved.
COURTESY OF MIKE COOK

He’s far from the only researcher worrying about the cognitive impact of these technologies. In February a team at Microsoft Research Cambridge published a report concluding that generative AI tools “can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” The researchers found that with the use of generative tools, people’s effort “shifts from task execution to task stewardship.”

Cook is concerned that generative tools don’t let you fail—a crucial part of learning new skills. We have a habit of saying that artists are gifted, says Cook. But the truth is that artists work at their art, developing skills over months and years.

“If you actually talk to artists, they say, ‘Well, I got good by doing it over and over and over,’” he says. “But failure sucks. And we’re always looking at ways to get around that.”

Generative models let us skip the frustration of doing a bad job. 

“Unfortunately, we’re removing the one thing that you have to do to develop creative skills for yourself, which is fail,” says Cook. “But absolutely nobody wants to hear that.”

Surprise me

And yet it’s not all bad news. Artists and researchers are buzzing at the ways generative tools could empower creators, pointing them in surprising new directions and steering them away from dead ends. Cook thinks the real promise of AI will be to help us get better at what we want to do rather than doing it for us. For that, he says, we’ll need to create new tools, different from the ones we have now. “Using Midjourney does not do anything for me—it doesn’t change anything about me,” he says. “And I think that’s a wasted opportunity.”

Ask a range of researchers studying creativity to name a key part of the creative process and many will say: reflection. It’s hard to define exactly, but reflection is a particular type of focused, deliberate thinking. It’s what happens when a new idea hits you. Or when an assumption you had turns out to be wrong and you need to rethink your approach. It’s the opposite of a one-shot interaction.

Looking for ways that AI might support or encourage reflection—asking it to throw new ideas into the mix or challenge ideas you already hold—is a common thread across co-creativity research. If generative tools like DALL-E make creation frictionless, the aim here is to add friction back in. “How can we make art without friction?” asks Elisa Giaccardi, who studies design at the Polytechnic University of Milan in Italy. “How can we engage in a truly creative process without material that pushes back?”

Take Wilson’s live-coding agent. She claims that it pushes her musical improvisation in directions she might not have taken by herself. Trained on public code shared by the wider live-coding community, the model suggests snippets of code that are closer to other people’s styles than her own. This makes it more likely to produce something unexpected. “Not because you couldn’t produce it yourself,” she says. “But the way the human brain works, you tend to fall back on repeated ideas.”

Last year, Wilson took part in a study run by Bryan-Kinns and his colleagues in which they surveyed six experienced musicians as they used a variety of generative models to help them compose a piece of music. The researchers wanted to get a sense of what kinds of interactions with the technology were useful and which were not.

The participants all said they liked it when the models made surprising suggestions, even when those were the result of glitches or mistakes. Sometimes the results were simply better. Sometimes the process felt fresh and exciting. But a few people struggled with giving up control. It was hard to direct the models to produce specific results or to repeat results that the musicians had liked. “In some ways it’s the same as being in a band,” says Bryan-Kinns. “You need to have that sense of risk and a sense of surprise, but you don’t want it totally random.”

Alternative designs

Cook comes at surprise from a different angle: He coaxes unexpected insights out of AI tools that he has developed to co-create video games. One of his tools, Puck, which was first released in 2022, generates designs for simple shape-matching puzzle games like Candy Crush or Bejeweled. A lot of Puck’s designs are experimental and clunky—don’t expect it to come up with anything you are ever likely to play. But that’s not the point: Cook uses Puck—and a newer tool called Pixie—to explore what kinds of interactions people might want to have with a co-creative tool.

Pixie can read computer code for a game and tweak certain lines to come up with alternative designs. Not long ago, Cook was working on a copy of a popular game called Disc Room, in which players have to cross a room full of moving buzz saws. He asked Pixie to help him come up with a design for a level that skilled and unskilled players would find equally hard. Pixie designed a room where none of the discs actually moved. Cook laughs: It’s not what he expected. “It basically turned the room into a minefield,” he says. “But I thought it was really interesting. I hadn’t thought of that before.”

Anne Arzberger

COURTESY OF ANNE ARZBERGER
a stuffed unicorn and sewing materials

COURTESY OF ANNE ARZBERGER

Researcher Anne Arzberger developed experimental AI tools to come up with gender-neutral toy designs.

Pushing back on assumptions, or being challenged, is part of the creative process, says Anne Arzberger, a researcher at the Delft University of Technology in the Netherlands. “If I think of the people I’ve collaborated with best, they’re not the ones who just said ‘Yes, great’ to every idea I brought forth,” she says. “They were really critical and had opposing ideas.”

She wants to build tech that provides a similar sounding board. As part of a project called Creating Monsters, Arzberger developed two experimental AI tools that help designers find hidden biases in their designs. “I was interested in ways in which I could use this technology to access information that would otherwise be difficult to access,” she says.

For the project, she and her colleagues looked at the problem of designing toy figures that would be gender neutral. She and her colleagues (including Giaccardi) used Teachable Machine, a web app built by Google researchers in 2017 that makes it easy to train your own machine-learning model to classify different inputs, such as images. They trained this model with a few dozen images that Arzberger had labeled as being masculine, feminine, or gender neutral.

Arzberger then asked the model to identify the genders of new candidate toy designs. She found that quite a few designs were judged to be feminine even when she had tried to make them gender neutral. She felt that her views of the world—her own hidden biases—were being exposed. But the tool was often right: It challenged her assumptions and helped the team improve the designs. The same approach could be used to assess all sorts of design characteristics, she says.
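The workflow Arzberger describes, training on a few dozen labeled examples and then asking the model to judge new candidates, can be sketched with a minimal nearest-centroid classifier. This is an illustrative stand-in under stated assumptions: the feature vectors are synthetic placeholders, not real toy photos, and the classifier is far simpler than what Teachable Machine trains.

```python
import numpy as np

def train_centroids(features: np.ndarray, labels: list) -> dict:
    """Compute one mean feature vector (centroid) per label."""
    return {
        label: features[[i for i, l in enumerate(labels) if l == label]].mean(axis=0)
        for label in set(labels)
    }

def classify(centroids: dict, x: np.ndarray) -> str:
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

rng = np.random.default_rng(0)
labels = ["masculine"] * 12 + ["feminine"] * 12 + ["neutral"] * 12
# Cluster each class around a different region of a made-up feature space.
offsets = {"masculine": -2.0, "feminine": 2.0, "neutral": 0.0}
features = np.stack([rng.normal(offsets[l], 0.5, size=8) for l in labels])

centroids = train_centroids(features, labels)
candidate = rng.normal(1.8, 0.5, size=8)  # a design intended to be neutral...
verdict = classify(centroids, candidate)  # ...may still be judged "feminine"
```

The interesting moment is exactly the one in the article: when a design the maker intended to be neutral lands closest to a gendered centroid, the model is surfacing a bias in the training labels, the design, or both.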

Arzberger then used a second model, a version of a tool made by the generative image and video startup Runway, to come up with gender-neutral toy designs of its own. First the researchers trained the model to generate and classify designs for male- and female-looking toys. They could then ask the tool to find a design that was exactly midway between the male and female designs it had learned.
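Finding a design "exactly midway" between two learned concepts is, in most generative models, a latent-space interpolation: average the two latent vectors and decode the result. The sketch below shows only that averaging step, with made-up three-dimensional vectors standing in for real embeddings from the Runway tool.

```python
import numpy as np

def interpolate(z_a: np.ndarray, z_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Linear interpolation in latent space; t=0.5 is the exact midpoint."""
    return (1.0 - t) * z_a + t * z_b

z_male = np.array([1.0, -0.5, 2.0])    # hypothetical "male toy" latent
z_female = np.array([-1.0, 1.5, 0.0])  # hypothetical "female toy" latent
z_neutral = interpolate(z_male, z_female)  # midpoint: (0.0, 0.5, 1.0)
```

In a real pipeline the midpoint vector would then be fed back through the model's decoder to render the candidate gender-neutral design.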

Generative models can give feedback on designs that human designers might miss by themselves, she says: “We can really learn something.” 

Taking control

The history of technology is full of breakthroughs that changed the way art gets made, from recipes for vibrant new paint colors to photography to synthesizers. In the 1960s, the Stanford researcher John Chowning spent years working on an esoteric algorithm that could manipulate the frequencies of computer-generated sounds. Stanford licensed the tech to Yamaha, which built it into its synthesizers—including the DX7, the cool new sound behind 1980s hits such as Tina Turner’s “The Best,” A-ha’s “Take On Me,” and Prince’s “When Doves Cry.”
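Chowning's technique, frequency modulation synthesis, is compact enough to state directly: a modulator oscillator bends the phase of a carrier oscillator, producing rich sidebands from just two sine waves. The sketch below is the textbook two-operator form, not Yamaha's DX7 implementation; the frequencies and modulation index are arbitrary example values.

```python
import math

def fm_sample(t: float, carrier_hz: float, modulator_hz: float,
              mod_index: float) -> float:
    """One sample of two-operator FM synthesis:
    sin(2*pi*fc*t + I * sin(2*pi*fm*t))."""
    return math.sin(
        2 * math.pi * carrier_hz * t
        + mod_index * math.sin(2 * math.pi * modulator_hz * t)
    )

# Render a 10 ms burst at 44.1 kHz. A modulation index of 0 collapses to
# a pure sine; raising it brightens the timbre.
sample_rate = 44100
tone = [fm_sample(n / sample_rate, 440.0, 220.0, 5.0) for n in range(441)]
```

The expressive power that made the DX7 famous comes from sweeping that modulation index over time, which reshapes the spectrum in a way subtractive synthesizers of the era could not.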

Bryan-Kinns is fascinated by how artists and designers find ways to use new technologies. “If you talk to artists, most of them don’t actually talk about these AI generative models as a tool—they talk about them as a material, like an artistic material, like a paint or something,” he says. “It’s a different way of thinking about what the AI is doing.” He highlights the way some people are pushing the technology to do weird things it wasn’t designed to do. Artists often appropriate or misuse these kinds of tools, he says.

Bryan-Kinns points to the work of Terence Broad, another colleague of his at the Creative Computing Institute, as a favorite example. Broad employs techniques like network bending, which involves inserting new layers into a neural network to produce glitchy visual effects in generated images, and generating images with a model trained on no data, which produces almost Rothko-like abstract swaths of color.
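The core move of network bending, splicing an untrained transformation between a generator's existing layers, can be shown with a toy network. Everything here is a stand-in: the "generator" is a random two-layer map rather than a real image model like the StyleGANs Broad has worked with, and the inserted layer is a deliberately crude scramble.

```python
import numpy as np

rng = np.random.default_rng(42)
w1 = rng.normal(size=(4, 16))  # weights of a random stand-in "generator"
w2 = rng.normal(size=(16, 8))

def bend(h: np.ndarray) -> np.ndarray:
    """An inserted, untrained layer: reverse and sign-flip the hidden
    activations -- the kind of meddling that yields glitchy outputs."""
    return -h[::-1]

def generate(z: np.ndarray, bent: bool = False) -> np.ndarray:
    h = np.tanh(z @ w1)
    if bent:
        h = bend(h)  # the spliced-in layer
    return np.tanh(h @ w2)

z = rng.normal(size=4)
clean = generate(z)
glitched = generate(z, bent=True)  # same input, distorted output
```

Because the bend happens in the middle of the network rather than on the finished image, the distortion is filtered through everything the later layers have learned, which is what gives network-bent images their strange, semi-coherent character.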

But Broad is an extreme case. Bryan-Kinns sums it up like this: “The problem is that you’ve got this gulf between the very commercial generative tools that produce super-high-quality outputs but you’ve got very little control over what they do—and then you’ve got this other end where you’ve got total control over what they’re doing but the barriers to use are high because you need to be somebody who’s comfortable getting under the hood of your computer.”

“That’s a small number of people,” he says. “It’s a very small number of artists.”

Arzberger admits that working with her models was not straightforward. Running them took several hours, and she’s not sure the Runway tool she used is even available anymore. Bryan-Kinns, Arzberger, Cook, and others want to take the kinds of creative interactions they are discovering and build them into tools that can be used by people who aren’t hardcore coders. 

Researcher Terence Broad creates dynamic images using a model trained on no data, which produces almost Rothko-like abstract color fields. COURTESY OF TERENCE BROAD

Finding the right balance between surprise and control will be hard, though. Midjourney can surprise, but it gives few levers for controlling what it produces beyond your prompt. Some have claimed that writing prompts is itself a creative act. “But no one struggles with a paintbrush the way they struggle with a prompt,” says Cook.

Faced with that struggle, Cook sometimes watches his students just go with the first results a generative tool gives them. “I’m really interested in this idea that we are priming ourselves to accept that whatever comes out of a model is what you asked for,” he says. He is designing an experiment that will vary single words and phrases in similar prompts to test how much of a mismatch people see between what they expect and what they get. 
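An experiment like the one Cook describes needs a way to generate near-identical prompts that differ by a single word or phrase. The sketch below is a plausible setup, not his actual design: a template with named slots is expanded into every combination of substitutions, ready to be sent to a model and compared against participants' expectations. The template and slot names are invented for illustration.

```python
from itertools import product

def prompt_variants(template: str, slots: dict) -> list:
    """Fill each {slot} in the template with every combination of options."""
    names = list(slots)
    return [
        template.format(**dict(zip(names, combo)))
        for combo in product(*(slots[n] for n in names))
    ]

variants = prompt_variants(
    "a {mood} painting of a {subject}",
    {"mood": ["serene", "chaotic"], "subject": ["city", "forest"]},
)
# Four prompts, each differing from its neighbors by one word.
```

Pairing each variant's output with a rating of how well it matched what the prompter expected would give exactly the expectation-versus-result data the experiment is after.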

But it’s early days yet. In the meantime, companies developing generative models typically emphasize results over process. “There’s this impressive algorithmic progress, but a lot of the time interaction design is overlooked,” says Rezwana.  

For Wilson, the crucial choice in any co-creative relationship is what you do with what you’re given. “You’re having this relationship with the computer that you’re trying to mediate,” she says. “Sometimes it goes wrong, and that’s just part of the creative process.” 

When AI gives you lemons—make art. “Wouldn’t it be fun to have something that was completely antagonistic in a performance—like, something that is actively going against you—and you kind of have an argument?” she says. “That would be interesting to watch, at least.” 

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

VMware (quietly) brings back its free ESXi hypervisor

By many accounts, Broadcom’s handling of the VMware acquisition was clumsy and caused many enterprises to reevaluate their relationship with the vendor The move to subscription models was tilted in favor of larger customers and longer, three-year licenses. Because the string of bad publicity and VMware’s competitors pounced, offering migration

Read More »

CoreWeave offers cloud-based Grace Blackwell GPUs for AI training

Cloud services provider CoreWeave has announced it is offering Nvidia’s GB200 NVL72 systems, otherwise known as “Grace Blackwell,” to customers looking to do intensive AI training. CoreWeave said its portfolio of cloud services are optimized for the GB200 NVL72, including CoreWeave’s Kubernetes Service, Slurm on Kubernetes (SUNK), Mission Control, and

Read More »

Kyndryl launches private cloud services for enterprise AI deployments

Kyndryl’s AI Private Cloud environment includes services and capabilities around containerization, data science tools, and microservices to deploy and manage AI applications on the private cloud. The service supports AI data foundations and MLOps/LLMOps services, letting customers manage their AI data pipelines and machine learning operation, Kyndryl stated. These tools facilitate

Read More »

Question time for North Sea debate events launched

The trade body representing the offshore energy industry including the oil and gas sector has launched a series of debate events across the UK. Events in Aberdeen, Falkirk and Newcastle will feature a Question Time-style debate format that brings together politicians, industry leaders, unions, and the local community. Offshore Energies UK (OEUK) has launched a ballot registration system for people who want to attend and will be “selecting attendees to ensure a broad range of perspectives”. The body said it wants to encouraging “people from all walks of life” to attend and actively participate in discussions on the UK’s energy future. The debates are free to attend and are aimed at anyone interested in the future of the UK’s energy supply, from workers in the sector, to people simply curious about how the nation will tackle its energy challenges in the years ahead, OEUK said. OEUK chief executive David Whitehouse said:  “We’re hosting these events to open up the conversation on energy production. “Whether you work in the offshore energy sector or not, these debates are a chance for everyone to have their say on what the UK’s energy future should look like. “We want to hear from local communities, businesses and workers who will be affected by these decisions.” The body has highlighted the sector faces a critical time period. The UK government is currently mulling a number of existential issues in the sector including the so-called windfall tax on North Sea oil and gas producers, new legal requirements on the environmental impact of oil and gas as well as support for clean energy production such as carbon capture and storage (CCS) and hydrogen. OEUK said with the UK government currently consulting on key energy policies, and domestic energy production at record lows, these events offer a “timely opportunity for local

Read More »

Trump tariff oil price slump ‘painful but not causing injury’ to North Sea operators

The fall in global oil prices caused by uncertainty around US trade tariffs is currently “painful but not causing injury” for North Sea operators, according to an analyst. US President Donald Trump imposed sweeping tariffs on trading partners around the world on 3 April, leading to widespread economic uncertainty. Fears of a global slowdown in trade has seen a steep drop in oil prices, with Brent crude dropping from around $75 on 2 April to around $65 on 16 April. European natural gas prices have also seen similar falls since the tariff uncertainty began. As a result, the International Energy Agency (IEA) has forecast that the world will use less oil this year, and warned oil markets are “in for a bumpy ride” as multiple countries scramble to enter trade negotiations with the White House. North Sea impacts of oil price slump But Wood Mackenzie research director Gail Anderson told Energy Voice that the fall in prices is not currently causing major problems for North Sea producers. However, Anderson said that could change if oil and gas prices drop further, with possible impacts on North Sea exploration. “If [oil prices] were to go down below $60 per barrel then we could see operators revising near term capex guidance to preserve cash flow and canning any discretionary spend like exploration, etc.,” Anderson said. © ShutterstockAn offshore oil and gas platform in the North Sea. While all North Sea operators are “being impacted to a similar degree,” Anderson said “the most oil-weighted players are feeling the biggest impact”. Rosebank ‘probably won’t be affected’ But amidst the near-term uncertainty for 2025 and 2026, Anderson said Wood Mackenzie’s “long-term assumption for prices remains unchanged”. This means that the current oil price levels should not change the economic outlook for North Sea projects which are

Read More »

Energy transition hub launched to help Scottish companies enter UAE

Scottish Enterprise has announced plans to establish an energy transition hub in Abu Dhabi to help Scottish companies enter the United Arab Emirate’s clean energy sector. SNP deputy First Minister Kate Forbes made the announcement during a trade visit to the Gulf country this week. While exact details are still to be released, the hub aims to provide businesses with a base in the country for 12-18 months. In its first year, the hub will be home to five to 10 Scottish companies entering the UAE market for the first time. The hub will give them access to advice and support on setting up, mentoring and coaching on the local business environment, funding advice and sources, and practical advice on doing business in UAE. Scottish Enterprise chief executive Adrian Gillespie said: “This innovative approach will help ambitious Scottish companies take their first steps in the UAE. We’re intentionally locating the hub in Abu Dhabi to be near businesses at the forefront of UAE’s energy transition, such as Masdar, ADNOC and TAQA. “Helping Scottish companies identify new opportunities and partnerships outside of Scotland is vital to our future economic success. It’s been a productive visit so far with our companies making incredibly useful connections that will help them further develop a strong presence in UAE when the Scottish energy transition hub opens.” UAE energy market The UAE is a leading global trade hub and key target market for Scottish companies – Scotland exported £548 million worth of goods in 2023. The country is also among the Middle East’s leaders for clean energy, having set 2050 for its net-zero target and targeting 14.2GW of clean energy production capacity by 2030. It is home to some of the world’s biggest solar farms, such as the 2GW Al Dhafra Solar Park and the Mohammed

Read More »

Fracker Liberty’s Profit Falls to 3-Year Low as Oil Slumps

Shale fracker Liberty Energy Inc. posted its worst earnings in three years amid plunging oil prices and mounting concerns about energy demand. Adjusted first-quarter profit fell to 4 cents a share, according to a statement Wednesday, matching the average estimate among analysts. Sales and capital spending both came in better than expected, prompting the shares to rise more than 9% before the start of regular trading in New York on Thursday. Current levels of fracking activity suggest US oil output will hold steady, “mitigating the possibility of steep declines experienced by the service industry in past cycles,” the company said. Meanwhile, major shale driller Diamondback Energy Inc. said Wednesday that it’s “actively reviewing its operating plan” for the rest of the year given market volatility, according to a separate statement.  “While the current tumult in commodity prices is not immediately driving changes in North American activity, we expect oil producers are evaluating a range of scenarios in anticipation of oil price pressure,” Liberty said.  Liberty’s broad footprint across North American shale provides it a unique scope of vision for domestic oil-production trends. The Denver-based oilfield contractor has tumbled roughly 40% this year as US President Donald Trump’s trade war punished crude prices and tarnished the outlook for near-term fossil-fuel demand. Liberty is the first major US-based oil-service company to post quarterly results, with rival Halliburton Co. set to follow Tuesday morning.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed. MORE FROM THIS AUTHOR Bloomberg

Read More »

Oil Surges Amid Iran Sanctions

Oil rose for a second day after the Trump administration ratcheted up pressure on Iran’s energy exports, while talks between the US and a handful of key trade partners stirred optimism that agreements on trade can be reached. West Texas Intermediate surged 3.5% to settle near $65 a barrel, marking the largest two-day increase since early January. President Donald Trump said he is confident a trade deal with the European Union could be achieved, and negotiations between the US and Japan bolstered expectations that deals can be struck to avoid the worst effects of tariffs. Futures were also propelled higher by investors covering short positions and algorithmic traders turning marginally more bullish ahead of the long weekend. Oil futures won’t trade on Friday, a holiday in many countries, crimping volumes. On the Middle East front, Treasury Secretary Scott Bessent said the US would apply maximum pressure to disrupt Iran’s oil supply chain as his department sanctioned a second Chinese refinery accused of handling crude from the Islamic Republic. The so-called teapot oil processor sanctioned by the US — Shandong Shengxing Chemical Co. — had allegedly handled more than $1 billion of Iranian crude, the Treasury Department said. Tehran, meanwhile, warned that nuclear talks with Washington may fall apart if the Trump administration “moves the goalposts.” “While the macroeconomic backdrop remains mixed, it has the potential to either amplify market rallies or derail them entirely, depending on how these geopolitical tensions evolve,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. The pressure of Trump’s sweeping trade measures has put crude on the back foot this month, with prices at one point trading about 30% lower than their high for the year. This week’s rebound was aided by US government data that showed inventory levels at Cushing, Oklahoma

Read More »

EPA granted appellate stay after judge rules to end GGRF funding freeze

The U.S. Court of Appeals for the District of Columbia Circuit on Wednesday issued a stay on a district court judge’s decision ordering the Environmental Protection Agency to unfreeze Greenhouse Gas Reduction Fund funding. “The district court’s order is stayed insofar as it enables or requires Citibank to release, disburse, transfer, otherwise move, or allow access to funds,” wrote Circuit Judges Neomi Rao, Gregory Katsas and Nina Pillard. “It is further ORDERED that no party take any action, directly or indirectly, with regard to the disputed contracts, grants, awards or funds.” The stay came after the U.S. District Court for the District of Columbia on Tuesday granted an injunction to the Climate United Fund, Coalition for Green Capital, and Power Forward Communities — nonprofit “green banks” which received $6.97 billion, $5 billion and $2 billion, respectively, from the Inflation Reduction Act’s GGRF.  Those funds, held at Citibank, have been frozen since Feb. 16. The nonprofits sued EPA and Citibank last month, alleging serious financial harm to both their institutions and their subgrantees should the freeze continue, and asked for an injunction to end it.  In a Wednesday memorandum opinion regarding her order to grant the injunction, Judge Tanya Chutkan wrote that EPA “acted arbitrarily and capriciously when it failed to explain its reasoning and acted contrary to its regulations in suspending and terminating Plaintiffs’ grants.” “Though repeatedly pressed on the issue, EPA offers no rational explanation for why it suspended the grants and then immediately terminated the entire [National Community Investment Fund] and [Clean Communities Investment Accelerator] grant programs overnight,” Chutkan said.  She continued, “Nor has EPA offered any rational explanation for why it needed to cancel the grants to safeguard taxpayer resources, especially when it had begun examining the grant programs to add oversight mechanisms, or why it needed

Read More »

Intel sells off majority stake in its FPGA business

Altera will continue offering field-programmable gate array (FPGA) products across a wide range of use cases, including automotive, communications, data centers, embedded systems, industrial, and aerospace.  “People were a bit surprised at Intel’s sale of the majority stake in Altera, but they shouldn’t have been. Lip-Bu indicated that shoring up Intel’s balance sheet was important,” said Jim McGregor, chief analyst with Tirias Research. The Altera has been in the works for a while and is a relic of past mistakes by Intel to try to acquire its way into AI, whether it was through FPGAs or other accelerators like Habana or Nervana, note Anshel Sag, principal analyst with Moor Insight and Research. “Ultimately, the 50% haircut on the valuation of Altera is unfortunate, but again is a demonstration of Intel’s past mistakes. I do believe that finishing the process of spinning it out does give Intel back some capital and narrows the company’s focus,” he said. So where did it go wrong? It wasn’t with FPGAs because AMD is making a good run of it with its Xilinx acquisition. The fault, analysts say, lies with Intel, which has a terrible track record when it comes to acquisitions. “Altera could have been a great asset to Intel, just as Xilinx has become a valuable asset to AMD. However, like most of its acquisitions, Intel did not manage Altera well,” said McGregor.

Read More »

Intelligence at the edge opens up more risks: how unified SASE can solve it

In an increasingly mobile and modern workforce, smart technologies such as AI-driven edge solutions and the Internet of Things (IoT) can help enterprises improve productivity and efficiency—whether to address operational roadblocks or respond faster to market demands. However, new solutions also come with new challenges, mainly in cybersecurity. The decentralized nature of edge computing—where data is processed, transmitted, and secured closer to the source rather than in a data center—has presented new risks for businesses and their everyday operations. This shift to the edge increases the number of exposed endpoints and creates new vulnerabilities as the attack surface expands. Enterprises will need to ensure their security is watertight in today’s threat landscape if they want to reap the full benefits of smart technologies at the edge. Bypassing the limitations of traditional network security  For the longest time, enterprises have relied on traditional network security approaches to protect their edge solutions. However, these methods are becoming increasingly insufficient as they typically rely on static rules and assumptions, making them inflexible and predictable for malicious actors to circumvent.  While effective in centralized infrastructures like data centers, traditional network security models fall short when applied to the distributed nature of edge computing. Instead, organizations need to adopt more adaptive, decentralized, and intelligent security frameworks built with edge deployments in mind.  Traditional network security typically focuses on keeping out external threats. But today’s threat landscape has evolved significantly, with threat actors leveraging AI to launch advanced attacks such as genAI-driven phishing, sophisticated social engineering attacks, and malicious GPTs. 
Combined with the lack of visibility with traditional network security, a cybersecurity breach could remain undetected until it’s too late, resulting in consequences extending far beyond IT infrastructures.  Next generation of enterprise security with SASE As organizations look into implementing new technologies to spearhead their business, they

Read More »

Keysight tools tackle data center deployment efficiency

Test and performance measurement vendor Keysight Technologies has developed Keysight Artificial Intelligence (KAI) to identify performance inhibitors affecting large GPU deployments. It emulates workload profiles, rather than using actual resources, to pinpoint performance bottlenecks. Scaling AI data centers requires testing throughout the design and build process – every chip, cable, interconnect, switch, server, and GPU needs to be validated, Keysight says. From the physical layer through the application layer, KAI is designed to identify weak links that degrade the performance of AI data centers, and it validates and optimizes system-level performance for optimal scaling and throughput. AI providers, semiconductor fabricators, and network equipment manufacturers can use KAI to accelerate design, development, deployment, and operations by pinpointing performance issues before deploying in production.

Read More »

U.S. Advances AI Data Center Push with RFI for Infrastructure on DOE Lands

ORNL is also the home of the Center for Artificial Intelligence Security Research (CAISER), which Edmon Begoli, CAISER founding director, described as being in place to build the security necessary by defining a new field of AI research targeted at fighting future AI security risks. Also, at the end of 2024, Google partner Kairos Power started construction of their Hermes demonstration SMR in Oak Ridge. Hermes is a high-temperature gas-cooled reactor (HTGR) that uses triso-fueled pebbles and a molten fluoride salt coolant (specifically Flibe, a mix of lithium fluoride and beryllium fluoride). This demonstration reactor is expected to be online by 2027, with a production level system becoming available in the 2030 timeframe. Also located in a remote area of Oak Ridge is the Tennessee Valley Clinch River project, where the TVA announced a signed agreement with GE-Hitachi to plan and license a BWRX-300 small modular reactor (SMR). On Integrating AI and Energy Production The foregoing are just examples of ongoing projects at the sites named by the DOE’s RFI. Presuming that additional industry power, utility, and data center providers get on board with these locations, any of the 16 could be the future home of AI data centers and on-site power generation. The RFI marks a pivotal step in the U.S. government’s strategy to solidify its global dominance in AI development and energy innovation. By leveraging the vast resources and infrastructure of its national labs and research sites, the DOE is positioning the country to meet the enormous power and security demands of next-generation AI technologies. The selected locations, already home to critical energy research and cutting-edge supercomputing, present a compelling opportunity for industry stakeholders to collaborate on building integrated, sustainable AI data centers with dedicated energy production capabilities. With projects like Oak Ridge’s pioneering SMRs and advanced AI security

Read More »

Generac Sharpens Focus on Data Center Power with Scalable Diesel and Natural Gas Generators

In a digital economy defined by constant uptime and explosive compute demand, power reliability is more than a design criterion—it’s a strategic imperative. In response to such demand, Generac Power Systems, a company long associated with residential backup and industrial emergency power, is making an assertive move into the heart of the digital infrastructure sector with a new portfolio of high-capacity generators engineered for the data center market. Unveiled this week, Generac’s new lineup includes five generators ranging from 2.25 MW to 3.25 MW. These units are available in both diesel and natural gas configurations, and form part of a broader suite of multi-asset energy systems tailored to hyperscale, colocation, enterprise, and edge environments. The product introductions expand Generac’s commercial and industrial capabilities, building on decades of experience with mission-critical power in hospitals, telecom, and manufacturing, now optimized for the scale and complexity of modern data centers. “Coupled with our expertise in designing generators specific to a wide variety of industries and uses, this new line of generators is designed to meet the most rigorous standards for performance, packaging, and after-treatment specific to the data center market,” said Ricardo Navarro, SVP & GM, Global Telecom and Data Centers, Generac. Engineering for the Demands of Digital Infrastructure Each of the five new generators is designed for seamless integration into complex energy ecosystems. Generac is emphasizing modularity, emissions compliance, and high-ambient operability as central to the offering, reflecting a deep understanding of the real-world challenges facing data center operators today. The systems are built around the Baudouin M55 engine platform, which is engineered for fast transient response and high operating temperatures—key for data center loads that swing sharply under AI and cloud workloads. 
The M55’s high-pressure common rail fuel system supports low NOx emissions and Tier 4 readiness, aligning with the most

Read More »

CoolIT and Accelsius Push Data Center Liquid Cooling Limits Amid Soaring Rack Densities

The CHx1500’s construction reflects CoolIT’s 24 years of DLC experience, using stainless-steel piping and high-grade wetted materials to meet the rigors of enterprise and hyperscale data centers. It’s also designed to scale: not just for today’s most power-hungry processors, but for future platforms expected to surpass today’s limits. Now available for global orders, CoolIT is offering full lifecycle support in over 75 countries, including system design, installation, CDU-to-server certification, and maintenance services—critical ingredients as liquid cooling shifts from high-performance niche to a requirement for AI infrastructure at scale. Capex Follows Thermals: Dell’Oro Forecast Signals Surge In Cooling and Rack Power Infrastructure Between Accelsius and CoolIT, the message is clear: direct liquid cooling is stepping into its maturity phase, with products engineered not just for performance, but for mass deployment. Still, technology alone doesn’t determine the pace of adoption. The surge in thermal innovation from Accelsius and CoolIT isn’t happening in a vacuum. As the capital demands of AI infrastructure rise, the industry is turning a sharper eye toward how data center operators account for, prioritize, and report their AI-driven investments. To wit: According to new market data from Dell’Oro Group, the transition toward high-power, high-density AI racks is now translating into long-term investment shifts across the data center physical layer. Dell’Oro has raised its forecast for the Data Center Physical Infrastructure (DCPI) market, predicting a 14% CAGR through 2029, with total revenue reaching $61 billion. That revision stems from stronger-than-expected 2024 results, particularly in the adoption of accelerated computing by both Tier 1 and Tier 2 cloud service providers. The research firm cited three catalysts for the upward adjustment: Accelerated server shipments outpaced expectations. 
Demand for high-power infrastructure is spreading to smaller hyperscalers and regional clouds. Governments and Tier 1 telecoms are joining the buildout effort, reinforcing AI as a

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
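The LLM-as-a-judge pattern mentioned above can be sketched in a few lines: generate several candidate answers, have a panel of cheap judge models score each one, and keep the highest-scoring candidate. The judge functions below are hypothetical stand-ins for real model API calls, which would return a numeric quality score.

```python
# Minimal sketch of an LLM-as-a-judge ensemble, assuming each judge is a
# callable that maps an answer string to a numeric quality score. In a real
# system, each judge would wrap a call to a different (cheap) model.

def pick_best(candidates, judges):
    """Return the candidate with the highest total score across all judges."""
    def total_score(answer):
        return sum(judge(answer) for judge in judges)
    return max(candidates, key=total_score)

# Toy stand-in judges: one rewards brevity, one rewards explicit reasoning.
judges = [
    lambda ans: -len(ans),                      # brevity judge
    lambda ans: 10 if "because" in ans else 0,  # reasoning judge
]

candidates = [
    "It works because the cache is warm.",
    "It just works, trust me, for many complicated reasons.",
]

best = pick_best(candidates, judges)
```

The same structure extends naturally to majority voting or weighted panels; the key design choice is that the judges are cheaper models than the one generating candidates, so scoring many outputs stays affordable.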

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »