Music AI Sandbox, now with new features and broader access

Music AI Sandbox was developed by Adam Roberts, Amy Stuart, Ari Troper, Beat Gfeller, Chris Deaner, Chris Reardon, Colin McArdell, DY Kim, Ethan Manilow, Felix Riedel, George Brower, Hema Manickavasagam, Jeff Chang, Jesse Engel, Michael Chang, Moon Park, Pawel Wluka, Reed Enger, Ross Cairns, Sage Stevens, Tom Jenkins, Tom Hume and Yotam Mann. Additional contributions provided by Arathi Sethumadhavan, Brian McWilliams, Cătălina Cangea, Doug Fritz, Drew Jaegle, Eleni Shaw, Jessi Liang, Kazuya Kawakami, Kehang Han, and Veronika Goldberg.

Lyria 2 was developed by Asahi Ushio, Beat Gfeller, Brian McWilliams, Kazuya Kawakami, Keyang Xu, Matej Kastelic, Mauro Verzetti, Myriam Hamed Torres, Ondrej Skopek, Pavel Khrushkov, Pen Li, Tobenna Peter Igwe and Zalan Borsos. Additional contributions provided by Adam Roberts, Andrea Agostinelli, Benigno Uria, Carrie Zhang, Chris Deaner, Colin McArdell, DY Kim, Eleni Shaw, Ethan Manilow, Hongliang Fei, Jason Baldridge, Jesse Engel, Li Li, Luyu Wang, Mauricio Zuluaga, Nemanja Spasojevic, Noah Constant, Ruba Haroun, Tayniat Khan, Volodymyr Mnih, Yan Wu and Zoe Ashwood.

Special thanks to Aäron van den Oord, Mahyar Bordbar, Douglas Eck, Eli Collins, Mira Lane, Koray Kavukcuoglu and Demis Hassabis for their insightful guidance and support throughout the development process.

We also acknowledge the many other individuals who contributed across Google DeepMind and Alphabet, including our colleagues at YouTube (a particular shout out to the YouTube Artist Partnerships team led by Vivien Lewit for their support partnering with the music industry).

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Network data hygiene: The critical first step to effective AI agents

Many network teams manage some 15 to 30 different dashboards to track data across all the components in an environment, struggling to cobble together relevant information across domains and spending hours troubleshooting a single incident. In short, they are drowning in data. Artificial intelligence tools—and specifically AI agents—promise to ease


Key takeaways from IBM Think partner event

The first week of May means flowers from April showers and that it’s time for IBM Think in Boston. The first day of the event has historically been Partner Plus day, devoted to content for IBM partners, including ISVs, technology partners and resellers. The 2025 keynote


LandBridge Posts Higher Revenue

LandBridge Company LLC has reported $44 million in revenue for the first quarter of 2025, up from $36.5 million for the fourth quarter of 2024 and $19 million for the corresponding quarter a year prior. The company attributed the sequential increase to increases in surface use royalties of $6.8 million,


Intel Certifies Shell Lubricant for Cooling AI Data Centers

Intel Corp. has certified Shell Plc’s lubricant-based method for cooling servers more efficiently within data centers used for artificial intelligence. The announcement on Tuesday, which follows the chipmaker’s two-year trial of the technology, offers a way to use less energy at artificial intelligence facilities, which are booming and are expected to double their electricity demand globally by 2030, consuming as much power by then as all of Japan does today, according to the International Energy Agency.

So far, companies have largely used giant fans to reduce temperatures inside AI data centers, which run at higher power and therefore generate more heat. Increasingly, these fans consume electricity at a rate that rivals the computers themselves, something the facilities’ operators would prefer to avoid, Intel Principal Engineer Samantha Yates said in an interview.

“Upgrading existing air-cooling methods with immersion fluids can reduce data center energy use by up to 48%, as well as help reduce capital and operating expenditure by up to 33%,” Jason Wong, global executive vice president of Shell Lubricants, said in a written statement. The immersion cooling fluids are ready to deploy, and Intel is “providing an immersion rider warranty on top of our standard warranty terms to say we believe in this so much that you will be successful,” Yates said. Shell’s technology is the first of its kind to receive official certification by a major chip manufacturer, the companies said.

Big Oil has been sizing up opportunities created by the growth in AI data centers. For Shell, the cooling fluids build on the gas-to-liquids technology that the company has been developing for its lubricants business for decades. BP Plc sees similar potential for its Castrol lubricants business, which has been working on immersion cooling fluids, although the unit is currently under strategic review and may be sold. The US oil majors,
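As a rough sense check on the quoted figures, the sketch below applies the claimed up-to-48% energy reduction to a hypothetical facility. Only the 48% comes from Wong's statement above; the IT load and the near-parity fan overhead are assumptions for illustration.

```python
# Rough back-of-the-envelope sketch of the quoted savings, with assumed numbers.
# Only the 48% figure comes from the article; the baseline values are hypothetical.

IT_LOAD_MW = 100.0            # hypothetical IT (compute) load of an AI campus
AIR_COOLING_OVERHEAD = 0.9    # assumed: fans drawing nearly as much as the IT load

baseline_total = IT_LOAD_MW * (1 + AIR_COOLING_OVERHEAD)  # compute + air cooling
immersion_total = baseline_total * (1 - 0.48)             # "up to 48%" reduction claimed

print(f"Air-cooled total draw:  {baseline_total:.0f} MW")            # 190 MW
print(f"Immersion-cooled draw:  {immersion_total:.0f} MW")           # ~99 MW
print(f"Energy saved:           {baseline_total - immersion_total:.0f} MW")
```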


GOP to Phase Out Biden Energy Credits to Pay for Tax Cuts

House Republicans are proposing to eliminate a tax credit for electric vehicles and phase out incentives to develop clean-energy projects to help pay for President Donald Trump’s massive tax package. The incentives put in place by former President Joe Biden’s signature climate law have been ripe targets for lawmakers looking for trillions of dollars to help pay for extending Trump’s tax cuts. The president himself has had a bullseye on them, deriding them as part of the “green new scam.”

But the draft legislation released Monday by House tax writers may not be as bad for producers of clean electricity from sources such as solar and wind, who feared a more aggressive phase-out. First Solar Inc., the largest US solar manufacturer, rose 11% on Monday. Sunrun Inc., the largest US residential solar company, rose nearly 17%. “The proposal is mostly a win for US solar manufacturers and developers,” said Rob Barnett, senior analyst at Bloomberg Intelligence. “The fear is that the investment and production tax credits could have been gutted sooner.”

In the Republicans’ proposal, popular production and investment tax credits for clean electricity would be phased out by the end of 2031, and new requirements against using materials from certain foreign nations would be added. Under the climate bill passed by Democrats in 2022, those credits weren’t set to expire until the latter part of 2032, or until carbon emissions from the US electricity sector declined to at least 75% below 2022 levels, which analysts said would take decades. A tax credit for the production of nuclear energy would also be phased out by 2031 in the Republican plan. House Republicans opted to keep other credits, such as an incentive for carbon capture that provides as much as $85 a ton, and extended by four years an incentive that


Charging Forward: Scottish government approves Alcemi’s 300 MW Kintore battery storage plans

In this week’s Charging Forward, Alcemi has secured consent from the Scottish government for its 300 MW Kintore battery storage project in Aberdeenshire, Statera Energy has lodged an appeal after its 500 MW East Claydon battery storage project was refused planning permission, and more.

Eku Energy has secured £145 million in financing to build its latest UK battery energy storage system (BESS) projects. Elsewhere, the Scottish government has approved two BESS projects in Aberdeenshire and Midlothian, while Highview Power has submitted plans for a 200 MW liquid air energy storage (LAES) facility in Ayrshire. In addition, UK firm Connected Energy has partnered with French electric bus battery manufacturer Forsee Power to develop grid-scale storage facilities using repurposed vehicle batteries.

This week’s energy storage headlines:

- Alcemi secures consent for 300 MW Kintore BESS
- Statera Energy appeals 500 MW East Claydon BESS planning refusal
- Eku Energy secures £145m for new grid-scale battery storage
- DNV projects fourfold increase in UK battery storage by 2030
- Island Green Power’s 105 MW Kinmuck BESS approved
- Buccleuch estate plans for 200 MW Salters battery storage approved
- Highview Power submits plans for Ayrshire liquid air energy storage
- Connected Energy developing grid-scale storage from electric bus batteries
- Elmya Energy submits plans for Shropshire BESS

International energy storage news:

- Octopus Energy invests in 2 GW pipeline of solar and battery storage projects in Germany

UK energy storage news

Alcemi secures consent for 300 MW Kintore BESS

UK battery storage developer Alcemi has secured consent from the Scottish government for its 300 MW Kintore Energy Storage Facility BESS project in Aberdeenshire. The project is located approximately 3km to the east of the existing Kintore substation on land south of Tofthills Avenue. Following the approval, Alcemi estimates the Kintore project will come online by October 2029. According to the company’s website, the Kintore BESS


House GOP proposes early phaseout of IRA clean energy tax credits

Dive Brief:

- Federal tax credits that benefit energy developers, manufacturers and utilities face an early phaseout in a budget proposal released Monday by a key GOP-controlled House committee.
- The House Ways and Means Committee’s draft reconciliation package steps down the investment and production tax credits for nuclear power, wind, solar, batteries, geothermal and other clean energy technologies after 2028, and eliminates them completely after 2031. It preserves a comparatively generous credit for carbon sequestration and extends the clean fuels production credit.
- Energy industry groups and customers slammed the proposal, saying it would raise electricity prices, quash a manufacturing boom spurred by the Inflation Reduction Act and erode the United States’ competitive advantage on artificial intelligence.

Dive Insight:

The Ways and Means budget gives clean energy developers and producers until 2028 to claim the full 45Y and 48E tax credits for clean energy investment and production. The credit values step down to 80% in 2029, 60% in 2030 and 40% in 2031 before zeroing out in 2032. A separate credit for nuclear power production would phase out on the same schedule. As originally passed, the Inflation Reduction Act of 2022 allowed taxpayers to claim the full value of all three credits into 2032.

“While our industry is ready to engage constructively and find a workable path forward, the Committee’s approach simply goes too far too fast,” American Clean Power Association CEO Jason Grumet said in a statement. “With energy demand surging, this is not the time for disruption.”

The Ways and Means proposal also tightens eligibility for 45Y and 48E by requiring projects to be “placed in service” to qualify for the credit. The Inflation Reduction Act based eligibility on the year projects began construction, a more generous framework in a world where the timeline for grid interconnection and long-lead electrical
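The step-down schedule described in the Dive Insight maps cleanly onto a small lookup. A minimal sketch; the years and percentages are those reported above, while the function itself is purely illustrative:

```python
def credit_fraction(placed_in_service_year: int) -> float:
    """Fraction of the full 45Y/48E credit value available under the
    House Ways and Means draft, as described above (illustrative only)."""
    if placed_in_service_year <= 2028:
        return 1.0                                  # full credit through 2028
    step_down = {2029: 0.8, 2030: 0.6, 2031: 0.4}
    return step_down.get(placed_in_service_year, 0.0)  # zero from 2032 on

# Example: a project entering service in 2030 would claim 60% of the credit.
print(credit_fraction(2030))  # 0.6
```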


Macquarie Strategists Forecast 7.6MM Barrel USA Crude Inventory Build

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, Macquarie strategists revealed that they are forecasting that U.S. crude inventories will be up by 7.6 million barrels for the week ending May 9. “This follows a 2.0 million barrel draw in the prior week, with the crude balance again realizing tight relative to our expectations,” the Macquarie strategists noted in the report. “For this week’s crude balance, from refineries, we model crude runs higher (+0.3 million barrels per day). Among net imports, we model a very large increase, with exports down (-0.9 million barrels per day) and imports up (+0.6 million barrels per day) on a nominal basis,” they added. The Macquarie strategists warned in the report that the timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj.+transfers), we look for a small increase (+0.1 million barrels per day) this week,” the strategists said in the report. “Rounding out the picture, we anticipate a slightly smaller increase in SPR [Strategic Petroleum Reserve] stocks (+0.5 million barrels) this week,” they added. “Among products, we look for a draw in distillate (-0.6 million barrels), with jet stocks up (+0.7 million barrels), and gasoline nearly flat (-0.1 million barrels). We model implied demand for these three products at ~14.4 million barrels per day for the week ending May 9,” the strategists went on to state. In its latest weekly petroleum status report, which was released on May 7 and included data for the week ending May 2, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in the SPR, decreased by two million barrels from the week ending April 25 to the week ending May 2. That EIA report showed that
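As a rough illustration of how the quoted pieces net out, the sketch below treats each week-over-week change as an additive contribution to the prior week's stock change. This is a simplification for intuition, not Macquarie's actual balance; it lands near, but not exactly on, the 7.6 million barrel headline figure, with the gap reflecting pieces (such as SPR flows and rounding) the toy ignores.

```python
# Hedged reconstruction of the weekly crude balance arithmetic quoted above.
# Week-over-week changes are treated as additive adjustments to the prior
# week's stock change; this is NOT Macquarie's actual model.

DAYS = 7
prior_week_change = -2.0   # million barrels (prior week's 2.0 million bbl draw)

d_runs    = +0.3           # million bbl/day; higher refinery runs pull on stocks
d_exports = -0.9           # million bbl/day; lower exports add to net imports
d_imports = +0.6           # million bbl/day
d_supply  = +0.1           # million bbl/day (prod. + adj. + transfers)

d_net_imports = d_imports - d_exports                        # +1.5 mb/d
weekly_swing = DAYS * (d_supply + d_net_imports - d_runs)    # +9.1 million bbl

implied_build = prior_week_change + weekly_swing
print(f"Implied crude build: ~{implied_build:.1f} million barrels")  # ~7.1,
# in the neighborhood of the 7.6 million barrel headline forecast
```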


Empire Wind 1 stop work order may force project’s termination soon: Equinor

The Trump administration’s order for Equinor to stop work on its 810-MW Empire Wind 1 wind energy project offshore New York will force the company to terminate the project entirely if the situation isn’t resolved “within days,” an Equinor spokesperson said Monday. The stop work order has led to “an urgent and unsustainable situation,” the spokesperson told Utility Dive. “We need a resolution from the federal government for this important project to move forward.” In the absence of a resolution “within days,” the company will be “forced to terminate the project,” he said.

Equinor CEO Anders Opedal met with U.S. National Economic Council Director Kevin Hassett at the White House on May 6, but got no indication that the Trump administration is prepared to shift its stance, Bloomberg reported Monday. The current delay is “extremely expensive,” costing up to $50 million for each week the project is delayed, Equinor’s spokesperson said.

Interior Secretary Doug Burgum ordered the project’s pause April 16, stating in a letter to BOEM that the project was “rushed through by the prior administration without sufficient analysis or consultation among the relevant agencies as relates to the potential effects from the project.” The letter said that construction will remain halted until “further review is completed to address these serious deficiencies.” The following day, Equinor said it would comply with the order, and that “immediate steps were taken by Empire and its contractors to initiate suspension of relevant marine activities, ensuring the safety of workers and the environment.”

However, Opedal said in an April 30 first quarter earnings report that the order to halt work “is unprecedented and in our view unlawful,” and that the company would both engage directly with the Trump administration and consider its legal options. “This is a question of the rights and


Tech CEOs warn Senate: Outdated US power grid threatens AI ambitions

The implications are clear: without dramatic improvements to the US energy infrastructure, the nation’s AI ambitions could be significantly constrained by simple physical limitations – the inability to power the massive computing clusters necessary for advanced AI development and deployment.

Streamlining permitting processes

The tech executives have offered specific recommendations to address these challenges, with several focusing on the need to dramatically accelerate permitting processes for both energy generation and the transmission infrastructure needed to deliver that power to AI facilities, the report added. Intrator specifically called for efforts “to streamline the permitting process to enable the addition of new sources of generation and the transmission infrastructure to deliver it,” noting that current regulatory frameworks were not designed with the urgent timelines of the AI race in mind. This acceleration would help technology companies build and power the massive data centers needed for AI training and inference, which require enormous amounts of electricity delivered reliably and consistently.

Beyond the cloud: bringing AI to everyday devices

While much of the testimony focused on large-scale infrastructure needs, AMD CEO Lisa Su emphasized that true AI leadership requires “rapidly building data centers at scale and powering them with reliable, affordable, and clean energy sources.” Su also highlighted the importance of democratizing access to AI technologies: “Moving faster also means moving AI beyond the cloud. To ensure every American benefits, AI must be built into the devices we use every day and made as accessible and dependable as electricity.”


Networking errors pose threat to data center reliability

Still, IT and networking issues increased in 2024, according to Uptime Institute. The analysis attributed the rise in outages to increased IT and network complexity, specifically change management and misconfigurations.

“Particularly with distributed services, cloud services, we find that cascading failures often occur when networking equipment is replicated across an entire network,” Lawrence explained. “Sometimes the failure of one forces traffic to move in one direction, overloading capacity at another data center.”

The most common causes of major network-related outages were cited as:

- Configuration/change management failure: 50%
- Third-party network provider failure: 34%
- Hardware failure: 31%
- Firmware/software error: 26%
- Line breakages: 17%
- Malicious cyberattack: 17%
- Network overload/congestion failure: 13%
- Corrupted firewall/routing tables issues: 8%
- Weather-related incident: 7%

Configuration/change management issues also accounted for 62% of the most common causes of major IT system-/software-related outages. Change-related disruptions are consistently responsible for software-related outages.

Human error continues to be one of the “most persistent challenges in data center operations,” according to Uptime’s analysis. The report found that the biggest cause of these failures is data center staff failing to follow established procedures, a factor that has increased by about 10 percentage points compared to 2023. “These are things that were 100% under our control. I mean, we can’t control when the UPS module fails because it was either poorly manufactured, it had a flaw, or something else. This is 100% under our control,” Brown said.

The most common causes of major human error-related outages were reported as:


Liquid cooling technologies: reducing data center environmental impact

“Highly optimized cold-plate or one-phase immersion cooling technologies can perform on par with two-phase immersion, making all three liquid-cooling technologies desirable options,” the researchers wrote.

Factors to consider

There are numerous factors to consider when adopting liquid cooling technologies, according to Microsoft’s researchers. First, they advise performing a full environmental, health, and safety analysis, and an end-to-end life cycle impact analysis. “Analyzing the full data center ecosystem to include systems interactions across software, chip, server, rack, tank, and cooling fluids allows decision makers to understand where savings in environmental impacts can be made,” they wrote.

It is also important to engage with fluid vendors and regulators early, to understand chemical composition, disposal methods, and compliance risks. And associated socioeconomic, community, and business impacts are equally critical to assess.

More specific environmental considerations include ozone depletion and global warming potential; the researchers emphasized that operators should only use fluids with low to zero ozone depletion potential (ODP) values, and not hydrofluorocarbons or carbon dioxide. It is also critical to analyze a fluid’s viscosity (thickness or stickiness), flammability, and overall volatility. And operators should only use fluids with minimal bioaccumulation (the buildup of chemicals in lifeforms, typically in fish) and terrestrial and aquatic toxicity.

Finally, once up and running, data center operators should monitor server lifespan and failure rates, tracking performance uptime and adjusting IT refresh rates accordingly.
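The screening criteria above lend themselves to a simple checklist. A minimal sketch; the criteria mirror the considerations the researchers name (ODP, GWP, viscosity, flammability, bioaccumulation), but every fluid record and threshold below is hypothetical, for illustration only:

```python
# Minimal screening sketch for candidate immersion fluids. The criteria mirror
# the considerations above; the fluid records and numeric thresholds are
# hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Fluid:
    name: str
    odp: float            # ozone depletion potential (want zero)
    gwp: float            # global warming potential (lower is better)
    viscosity_cst: float  # kinematic viscosity at 40C, cSt
    flash_point_c: float  # higher flash point -> less flammable
    bioaccumulative: bool

def passes_screen(f: Fluid) -> bool:
    return (f.odp == 0.0
            and f.gwp < 150            # hypothetical cutoff
            and f.viscosity_cst < 50   # pumpable in a single-phase loop
            and f.flash_point_c > 150
            and not f.bioaccumulative)

candidates = [
    Fluid("synthetic-ester-A", odp=0.0, gwp=10, viscosity_cst=30,
          flash_point_c=260, bioaccumulative=False),
    Fluid("fluorocarbon-B", odp=0.0, gwp=9000, viscosity_cst=1,
          flash_point_c=999, bioaccumulative=True),
]
print([f.name for f in candidates if passes_screen(f)])  # ['synthetic-ester-A']
```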


Cisco unveils prototype quantum networking chip

Clock synchronization allows for coordinated, time-dependent communications between end points, which might be cloud databases or large global databases sitting across the country or across the world, he said. “We saw recently when we were visiting Lawrence Berkeley Labs where they have all of these data sources such as radio telescopes, optical telescopes, satellites, the James Webb platform. All of these end points are taking snapshots of a piece of space, and they need to synchronize those snapshots to the picosecond level, because you want to detect things like meteorites, something that is moving faster than the rotational speed of planet Earth. So the only way you can detect that quickly is if you synchronize these snapshots at the picosecond level,” Pandey said.

For security use cases, the chip can ensure that if an eavesdropper tries to intercept the quantum signals carrying the key, they will likely disturb the state of the qubits; this disturbance can be detected by the legitimate communicating parties, and the link will be dropped, protecting the sender’s data. This feature is typically implemented in a quantum key distribution (QKD) system. Location information can serve as a critical credential for systems to authenticate control access, Pandey said.

The prototype quantum entanglement chip is just part of the research Cisco is doing to accelerate practical quantum computing and the development of future quantum data centers. The quantum data center that Cisco envisions would have the capability to execute numerous quantum circuits, feature dynamic network interconnection, and utilize various entanglement generation protocols. The idea is to build a network connecting a large number of smaller processors in a controlled environment, the data center warehouse, and provide them as a service to a larger user base, according to Cisco.

The challenges for quantum data center network fabric
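The eavesdropper-detection idea is easy to see in a toy simulation. The sketch below classically simulates a BB84-style intercept-resend attack: measuring qubits in a random basis disturbs them, and the error rate on the sifted sample tells the endpoints whether to drop the link. This illustrates QKD generally, not Cisco's chip or protocol:

```python
# Toy classical simulation of BB84-style QKD eavesdropper detection.
# An intercept-resend attacker measures in a random basis, disturbing the
# qubits; the resulting error rate on matching-basis rounds exposes the tap.
import random

def run_link(eavesdropper: bool, n: int = 20_000) -> float:
    errors = sifted = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        basis_a = random.randint(0, 1)       # sender's encoding basis
        value, basis = bit, basis_a
        if eavesdropper:                     # intercept-resend attack
            basis_e = random.randint(0, 1)
            if basis_e != basis:             # wrong basis -> random outcome
                value = random.randint(0, 1)
            basis = basis_e                  # attacker resends in her basis
        basis_b = random.randint(0, 1)       # receiver's measurement basis
        measured = value if basis_b == basis else random.randint(0, 1)
        if basis_b == basis_a:               # sifting: keep matching-basis rounds
            sifted += 1
            errors += (measured != bit)
    return errors / sifted

for tapped in (False, True):
    qber = run_link(tapped)                  # ~0% clean, ~25% when tapped
    verdict = "drop link" if qber > 0.11 else "key accepted"
    print(f"eavesdropper={tapped}: QBER ~{qber:.1%} -> {verdict}")
```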


Zyxel launches 100GbE switch for enterprise networks

Port specifications include:

- 48 SFP28 ports supporting dual-rate 10GbE/25GbE connectivity
- 8 QSFP28 ports supporting 100GbE connections
- Console port for direct management access

Layer 3 routing capabilities include static routing with support for access control lists (ACLs) and VLAN segmentation. The switch implements IEEE 802.1Q VLAN tagging, port isolation, and port mirroring for traffic analysis. For link aggregation, the switch supports IEEE 802.3ad for increased throughput and redundancy between switches or servers.

Target applications and use cases

The CX4800-56F targets multiple deployment scenarios where high-capacity backbone connectivity and flexible port configurations are required. “This will be for service providers initially or large deployments where they need a high capacity backbone to deliver a primarily 10G access layer to the end point,” explains Nguyen. “Now with Wi-Fi 7, more 10G/25G capable POE switches are being powered up and need interconnectivity without the bottleneck. We see this for data centers, campus, MDU (Multi-Dwelling Unit) buildings or community deployments.”

Management is handled through Zyxel’s NebulaFlex Pro technology, which supports both standalone configuration and cloud management via the Nebula Control Center (NCC). The switch includes a one-year professional pack license providing IGMP technology and network analytics features. The SFP28 ports maintain backward compatibility between 10G and 25G standards, enabling phased migration paths for organizations transitioning between these speeds.
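Those port counts imply the following aggregate capacity. A quick arithmetic check, not a figure from Zyxel's spec sheet; the full-duplex doubling is the usual vendor convention:

```python
# Quick arithmetic on the port counts above. The full-duplex doubling is the
# common vendor convention; the result is our calculation, not a quoted spec.
sfp28 = 48 * 25     # 48 SFP28 ports at 25GbE  -> 1200 Gbps
qsfp28 = 8 * 100    # 8 QSFP28 ports at 100GbE ->  800 Gbps

aggregate = sfp28 + qsfp28
print(f"Aggregate line rate: {aggregate} Gbps")                        # 2000 Gbps
print(f"Full-duplex switching capacity: {2 * aggregate / 1000} Tbps")  # 4.0 Tbps
```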


Engineers rush to master new skills for AI-driven data centers

According to the Uptime Institute survey, 57% of data centers are increasing salary spending. Data center job roles that saw the highest increases were in operations management, where 49% of data center operators said they saw the highest increases, followed by junior and mid-level operations staff at 45%, and senior management and strategy at 35%. Other job categories that saw salary growth were electrical, at 32%, and mechanical, at 23%.

Organizations are also paying premiums on top of salaries for particular skills and certifications. Foote Partners tracks pay premiums for more than 1,300 certified and non-certified skills for IT jobs in general. The company doesn’t segment the data based on whether the jobs themselves are data center jobs, but it does track 60 skills and certifications related to data center management, including skills such as storage area networking, LAN, and AIOps, and 24 data center-related certificates from Cisco, Juniper, VMware and other organizations.

“Five of the eight data center-related skills recording market value gains in cash pay premiums in the last twelve months are all AI-related skills,” says David Foote, chief analyst at Foote Partners. “In fact, they are all among the highest-paying skills for all 723 non-certified skills we report.” These skills bring in 16% to 22% of base salary, he says. AIOps, for example, saw an 11% increase in market value over the past year, now bringing in a premium of 20% over base salary, according to Foote data. MLOps now brings in a 22% premium. “Again, these AI skills have many uses of which the data center is only one,” Foote adds. The percentage increase in the specific subset of these skills in data center jobs may vary.

The Uptime Institute survey suggests that the higher pay is motivating workers to stay in the
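To make the premium arithmetic concrete: a cash pay premium is quoted as a percentage of base salary, so the AIOps figure works out as below. The 20% premium is from the Foote data above; the base salary is hypothetical:

```python
# Simple illustration of how a cash pay premium works, using the AIOps figure
# quoted above (a 20% premium over base). The base salary is hypothetical.
base_salary = 120_000          # hypothetical base, USD
aiops_premium = 0.20           # 20% of base, per Foote data

total_pay = base_salary * (1 + aiops_premium)
print(f"Base: ${base_salary:,}  AIOps premium: ${base_salary * aiops_premium:,.0f}")
print(f"Total cash compensation: ${total_pay:,.0f}")  # $144,000
```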


Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023.

Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.


John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles.

This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that.

He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do


2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
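To make the LLM-as-judge pattern concrete, here is a minimal sketch: a cheap model generates candidates, and several judge models score them. `call_model` is a placeholder for whatever model client you use; the canned responses just let the sketch run end to end, and nothing here is a specific vendor API:

```python
# Minimal LLM-as-judge sketch of the pattern described above. `call_model` is
# a stand-in for any model client; its canned replies only make the demo run.
import random
from statistics import mean

def call_model(model: str, prompt: str) -> str:
    """Placeholder client. Swap in a real API call here."""
    if model.startswith("judge"):
        return str(random.randint(1, 10))   # judges reply with a 1-10 score
    return f"candidate answer #{random.randint(1, 100)}"

def judge_score(candidate: str, task: str, judges: list[str]) -> float:
    """Average score across several judge models; using three or more judges
    becomes affordable as models get cheaper, as the passage notes."""
    rubric = (f"Task: {task}\nAnswer: {candidate}\n"
              "Rate this answer 1-10. Reply with only the number.")
    return mean(float(call_model(m, rubric)) for m in judges)

def best_answer(task: str, n_candidates: int = 3) -> str:
    candidates = [call_model("cheap-generator", task) for _ in range(n_candidates)]
    judges = ["judge-a", "judge-b", "judge-c"]
    return max(candidates, key=lambda c: judge_score(c, task, judges))

print(best_answer("Summarize our refund policy for customers."))
```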


OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
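The automated framework in the second paper pairs a success signal with a diversity incentive. The toy loop below sketches that shape: generate candidate attacks, reward ones that both succeed against a target and differ from what has already been found, and keep an archive. Every function here is a self-contained stand-in; this illustrates the idea, not OpenAI's implementation:

```python
# Highly simplified, self-contained toy of auto-red-teaming with a diversity
# bonus, in the spirit of the paper described above. The attack generator,
# target, and reward are stand-ins; this is NOT OpenAI's implementation.
import random

VOCAB = ["ignore", "previous", "instructions", "pretend", "roleplay",
         "system", "override", "secret", "developer", "mode"]

def generate_attack() -> str:                 # stand-in attack generator
    return " ".join(random.sample(VOCAB, 4))

def attack_succeeds(prompt: str) -> bool:     # stand-in target model
    return "override" in prompt and "system" in prompt

def novelty(prompt: str, archive: list[str]) -> float:
    """Diversity bonus: fraction of words not seen in archived attacks."""
    seen = {w for a in archive for w in a.split()}
    words = prompt.split()
    return sum(w not in seen for w in words) / len(words)

archive: list[str] = []
for step in range(500):
    prompt = generate_attack()
    reward = (1.0 if attack_succeeds(prompt) else 0.0) * novelty(prompt, archive)
    if reward > 0:                            # novel AND effective
        archive.append(prompt)                # (an RL method would also update
                                              #  the attack generator here)
print(f"Found {len(archive)} diverse successful attacks, e.g.:")
print(archive[:3])
```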
