
Scaling agentic AI: Inside Atlassian’s culture of experimentation


Scaling agentic AI isn’t just about having the latest tools — it requires clear guidance, the right context, and a culture that champions experimentation to unlock real value. At VentureBeat’s Transform 2025, Anu Bharadwaj, president of Atlassian, shared actionable insights into how the company has empowered its employees to build thousands of custom agents that solve real, everyday challenges. To build these agents, Atlassian has fostered a culture rooted in curiosity, enthusiasm and continuous experimentation.

“You hear a lot about AI top-down mandates,” Bharadwaj said. “Top-down mandates are great for making a big splash, but really, what happens next, and to who? Agents require constant iteration and adaptation. Top-down mandates can encourage people to start using it in their daily work, but people have to use it in their context and iterate over time to realize maximum value.”

That requires a culture of experimentation — one where short- to medium-term setbacks aren’t penalized but embraced as stepping stones to future growth and high-impact use cases.

Creating a safe environment

Atlassian’s agent-building platform, Rovo Studio, serves as a playground environment for teams across the enterprise to build agents.

“As leaders, it’s important for us to create a psychologically safe environment,” Bharadwaj said. “At Atlassian, we’ve always been very open. Open company, no bullshit is one of our values. So we focus on creating that openness, and creating an environment where employees can try out different things, and if it fails, it’s okay. It’s fine because you learned something about how to use AI in your context. It’s helpful to be very explicit and open about it.”

Beyond that, you have to balance experimentation with guardrails for safety and auditability. These measures range from making sure employees are logged in when they try tools to making sure agents respect permissions, understand role-based access, and provide answers and take actions based on what a particular user has access to.
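The permission guardrail described here can be sketched as a filter applied before an agent ever composes an answer: the agent's knowledge sources are narrowed to what the requesting user's roles allow. This is an illustrative sketch only; the `Document`, `User`, and `agent_answer` names are hypothetical and are not Atlassian or Rovo APIs.

```python
# Hypothetical sketch of a permission-aware agent guardrail.
# None of these names are real Atlassian/Rovo APIs.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Document:
    title: str
    required_role: str  # role a reader must hold to see this document

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)

def visible_documents(user: User, corpus: list) -> list:
    """Return only the documents the user's roles permit them to read."""
    return [d for d in corpus if d.required_role in user.roles]

def agent_answer(user: User, corpus: list) -> str:
    """Ground the answer exclusively in sources the caller may access."""
    docs = visible_documents(user, corpus)
    if not docs:
        return "No accessible sources for this request."
    return "Answer grounded in: " + ", ".join(d.title for d in docs)

corpus = [Document("Runbook", "engineer"), Document("Payroll", "hr")]
alice = User("alice", {"engineer"})
print(agent_answer(alice, corpus))  # cites only the Runbook, never Payroll
```

The design point is that filtering happens at the source level, before generation, so an agent cannot leak content the caller was never entitled to see.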

Supporting team-agent collaboration

“When we think about agents, we think about how humans and agents work together,” Bharadwaj said. “What does teamwork look like across a team composed of a bunch of people and a bunch of agents — and how does that evolve over time? What can we do to support that? As a result, all of our teams use Rovo agents and build their own Rovo agents. Our theory is that once that kind of teamwork becomes more commonplace, the entire operating system of the company changes.”

The magic really happens when multiple people work together with multiple agents, she added. Today, a lot of agents are single-player, but interaction patterns are evolving. Chat will not remain the default interaction pattern, Bharadwaj said; instead, multiple interaction patterns will emerge to drive multiplayer collaboration.

“Fundamentally, what is teamwork all about?” she posed to the audience. “It’s multiplayer collaboration — multiple agents and multiple humans working together.”

Making agent experimentation accessible

Atlassian’s Rovo Studio makes agent building available and accessible to people of all skill sets, including no-code options. One construction industry customer built a set of agents to reduce its roadmap creation time by 75%, while publishing giant HarperCollins built agents that reduced manual work by 4X across its departments.

By combining Rovo Studio with Atlassian’s developer platform, Forge, technical teams gain powerful control to deeply customize their AI workflows — defining context, specifying accessible knowledge sources, shaping interaction patterns and more — and create highly specialized agents. At the same time, non-technical teams also need to customize and iterate, so Atlassian has built experiences in Rovo Studio that let users make those customizations in natural language.
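As a rough illustration of the customization knobs described above (context, knowledge sources, interaction patterns, permissions), an agent definition can be imagined as a small declarative spec. The schema below is invented for illustration; it is not the actual Rovo Studio or Forge configuration format.

```python
# Invented agent specification mirroring the customization knobs in the
# article. This is NOT the real Rovo Studio or Forge schema.
agent_spec = {
    "name": "release-notes-drafter",
    "context": "Drafts release notes for the mobile team",
    "knowledge_sources": ["jira:MOBILE", "confluence:Release Playbook"],
    "interaction_pattern": "triggered",   # vs. "chat" or "scheduled"
    "permissions": "inherit-from-user",   # answers respect the caller's access
}

def validate(spec: dict) -> bool:
    """Check that a spec defines the minimum fields an agent would need."""
    required = {"name", "context", "knowledge_sources", "interaction_pattern"}
    return required.issubset(spec)

assert validate(agent_spec)
```

Expressing agents this way is what makes both paths in the article possible: technical teams can edit the spec directly, while a natural-language interface can generate or modify the same fields for non-technical users.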

“That’s going to be the big unlock, because fundamentally, when we talk about agentic transformation, it cannot be restricted to the code gen scenarios we see today. It has to permeate the entire team,” Bharadwaj said. “Developers spend 10% of their time coding. The remaining 90% is working with the rest of the team, figuring out customer issues and fixing issues in production. We’re creating a platform through which you can build agents for every single one of those functions, so the entire loop gets faster.”

Creating a bridge from here to the future

Unlike the previous shifts to mobile or cloud, where a set of technological or go-to-market changes occurred, AI transformation is fundamentally a change in the way we work. Bharadwaj believes the most important thing to do is to be open and to share how you are using AI to change your daily work. “As an example, I share Loom videos of new tools that I’ve tried out, things that I like, things that I didn’t like, things where I thought, oh, this could be useful if only it had the right context,” she added. “That constant mental iteration, for employees to see and try every single day, is highly important as we shift the way we work.”


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


JPMorgan launches carbon market blockchain app

Dive Brief: JPMorgan Chase is working to allow voluntary carbon markets to issue blockchain tokens at the registry level that represent ownership of carbon credits, permitting market participants to issue, transfer and retire credits, the bank announced Wednesday. JPMorgan is currently exploring testing processes with carbon registries from S&P Global


IBM Power11 challenges x86 and GPU giants with security-first server strategy

The IBM Power Cyber Vault solution is designed to provide protection against cyberattacks such as data corruption and encryption with proactive immutable snapshots that are automatically captured, stored, and tested on a custom-defined schedule, IBM said. Power11 also uses NIST-approved built-in quantum-safe cryptography designed to help protect systems from harvest-now, decrypt-later attacks


Mol Eyes Ukraine Port Access to Phase Out Russian Oil

Mol Nyrt., the Hungarian oil company that’s faced criticism for maintaining strong reliance on Russian energy, sees a pipeline from the Ukrainian port of Odesa as its best bet at diversification. The company wants to gain access to the Odesa-Brody crude pipeline that runs from Ukraine’s Black Sea port to near the nation’s border with Poland – allowing it to get seaborne supplies from a number of global producers, said Szabolcs Pal Szabo, Mol’s senior vice president for value chain management.  Flows could then be routed to Hungary via the southern Druzhba link, which currently carries Russian oil to the country and connects to the Odesa pipeline close to the Ukrainian-Polish border. “The Odesa pipeline would mean access to all sorts of alternative crude due to the sea link,” Szabo said in an email to Bloomberg. “This pipeline would serve regional, EU and Ukrainian supply security.” While Hungary was exempted from the European Union’s Russian crude oil ban in 2022 – allowing the landlocked country to keep receiving Russian crude via Druzhba – the country is under pressure to phase out those supplies amid an EU push to end energy imports from Russia by 2027.  Connecting to Ukrainian infrastructure wouldn’t be straightforward, since Odesa has been a frequent target of Russian missile attacks and the pipeline – which is currently out of commission – would require massive investments. The Black Sea around Ukraine’s coastline has also been heavily mined since Russia’s full-scale invasion in 2022. It would also require political acrobatics from Hungarian Prime Minister Viktor Orban, a pro-Russian leader who’s called on the EU to cut support for Ukraine and end its sanctions on Russia. Orban has locked in a number of energy deals with Moscow and is currently campaigning for re-election by vowing to block Ukraine’s EU accession process.    Mol wants the


Distressed UK Lindsey Oil Refinery Restarts Fuel Supply

The UK’s Lindsey oil refinery has restarted deliveries of fuel, according to the Department for Energy Security & Net Zero. “Deliveries from the Prax Lindsey Oil Refinery have resumed,” the department said by email late Tuesday. The UK is well supplied with fuel, it said. Wholesale supply of fuel by road stopped last week after the refinery’s owner received a surprise liquidation order. The department didn’t give specifics on when deliveries restarted. Trucks weren’t getting into the plant in north England earlier in the day, according to two people familiar with the matter. To contact the reporter on this story: Rachel Graham in London at [email protected] To contact the editors responsible for this story: Alaric Nightingale at [email protected]


Trump seeks tighter restrictions on wind and solar with executive order

Dive Brief: President Donald Trump issued an executive order Monday instructing the Secretary of the Treasury to publish guidance within 45 days “to ensure that policies concerning the ‘beginning of construction’ are not circumvented” by wind and solar projects that saw their eligibility for the 45Y and 48E clean energy tax credits slashed by budget legislation signed into law on July 4. “It is unclear how the Treasury will amend the ‘beginning of construction’ language while also keeping in mind that a ‘substantial portion of the subject facility has been built,’” Jefferies analysts said in a Tuesday note. This “could be an attempt to pivot back to the House version of the OBBB which had narrowed credit eligibility with ‘begin construction’ AND ‘placed in service’ language.” Prior to the EO, clean energy advocates were already grappling with the anticipated impacts of the legislation, which is expected to slash capital investment in U.S. electricity and clean fuels production by around $500 billion over the next ten years, according to a report from the REPEAT Project. Dive Insight: The executive order appears to make good on a deal Trump reportedly struck with the Freedom Caucus, in which he promised to use his executive powers to further curtail federal subsidies on wind, solar and EVs in exchange for their support, which was necessary to secure the bill’s passage in the House. The order also instructs the Secretary of the Treasury to “implement the enhanced Foreign Entity of Concern restrictions in the law” and “revise any identified regulations, guidance, policies, and practices […] to eliminate any such preferences for wind and solar facilities.” Even before the executive order, the bill’s introduction of tight eligibility standards for projects in development was expected to lead to uncertainty around capital investment. Wind and solar projects must start construction within


PHMSA grants work for America. Let’s keep them funded.

Dave Schryver is president and CEO of the American Public Gas Association. Turning on the stove to cook dinner is often second nature — less often do we think about the natural gas system that keeps our homes and businesses running. Yet, every day, public gas utilities across the country are investing in modern energy infrastructure to enhance safety, improve efficiency and keep energy costs low for consumers. A federal grant program that has recently helped accelerate these upgrades is in jeopardy of not being renewed. Created by the Infrastructure Investment and Jobs Act, the Pipeline and Hazardous Materials Safety Administration’s (PHMSA) Natural Gas Distribution Infrastructure Safety and Modernization (NGDISM) grant program invests $200 million annually, totaling $1 billion over five years in funding for municipal and community-owned utilities seeking to repair or replace natural gas pipeline systems. These grants are at risk of losing funding beyond 2026, despite their instrumental role in strengthening communities, creating well-paying jobs, lowering energy costs and reducing emissions. Hundreds of communities across 29 states have already seen firsthand the value of PHMSA grants. Consider rural Montgomery, Louisiana, where a $1 million grant funding the replacement of five regulator stations created many local jobs. Likewise, in urban Knoxville, Tennessee, a $5 million investment to upgrade 16 miles of aging steel main added 106 jobs. From small towns to major cities, these projects are putting people to work, modernizing infrastructure and delivering long-term value — with nearly 2,900 jobs supported by projects initiated in 2024 alone. The program has also earned the support of the United Association of Union Plumbers and Pipefitters, which recognizes PHMSA grants as a vital source of stable job opportunities for its members. With more than 7 million Americans currently unemployed, we can’t afford to lose these opportunities. But this is just


Who Is The Biggest Renewable Energy Generator?

The country that generated the most renewable energy in 2024 was China, according to the Energy Institute’s (EI) 2025 statistical review of world energy, which was released recently. China’s renewable energy generation last year came in at 3,398.8 terawatt hours (TWh), the review showed. This comprised 997.0 TWh from wind energy, 839.0 TWh from solar energy, 1,354.3 TWh from hydro energy, and 208.5 TWh from “other renewables”, the review outlined. The country’s renewable energy generation grew by 17.1 percent year on year, the review pointed out. China’s wind energy generation rose 12.2 percent, its solar energy generation grew 43.2 percent, its hydro energy generation grew 10.2 percent, and its “other renewables” energy generation grew 4.9 percent, year on year, the review showed. The country that generated the second most renewable energy last year was the U.S., with 1,068.7 TWh, according to the EI’s review. This generation comprised 458.0 TWh from wind energy, 306.2 TWh from solar energy, 238.7 TWh from hydro energy, and 65.7 TWh from “other renewables”, the review showed. U.S. renewable energy generation grew 9.3 percent from 2023 to 2024, the review highlighted. The country’s wind energy generation rose by 7.4 percent, its solar energy generation grew 26.5 percent, its hydro energy generation decreased by 1.4 percent, and its “other renewables” energy generation dropped by 2.1 percent, year on year, the review outlined. Brazil ranked third in terms of renewable energy generation in 2024, with 651.3 TWh, according to the review, which showed that this total comprised 108.5 TWh from wind energy, 71.3 TWh from solar energy, 413.2 TWh from hydro energy, and 58.2 TWh from “other renewables”. Brazil’s renewable energy generation grew 3.1 percent year on year, the review showed. The country’s wind energy generation increased by 12.9 percent, its solar energy generation grew 40.5


US DOE Expands Hydropower Partnership with Norway

The U.S. Department of Energy (DOE) has extended its collaboration with Norway’s Royal Ministry of Energy on water power research and development. The extension builds on the previously signed memorandum of understanding under which the two countries planned and coordinated activities. The DOE said in a media release the cooperation aims to reduce energy costs and strengthen grid reliability and security.   “Strong partnerships drive innovation, and innovation strengthens America’s energy future”, U.S. Energy Secretary Chris Wright said. “Hydropower is a tremendous resource – one that supports reliable, affordable power across the country and holds vast potential to bolster America’s grid. “By signing this Memorandum of Understanding with Norway, we are building upon our two nations’ shared expertise and advanced marine energy technologies to support President Trump’s pro-growth energy agenda for the American people”. “Hydropower and marine energy have the potential to reduce energy costs and improve the resilience of our electric grid”, Lou Hrkman, Principal Deputy Assistant Secretary for Energy Efficiency and Renewable Energy, added. “Our collaboration with Norway – another country that is rich in water power resources – will help us expand our generation capacity, upgrade existing facilities, and cultivate the technical expertise we need to make the most of these opportunities”. In 2020, the DOE and Norway’s Royal Ministry of Energy signed a five-year MOU Annex, under which the DOE’s Water Power Technologies Office would conduct hydropower research and development with the Norwegian Research Centre for Hydropower Technology, the DOE said. The current MOU Annex broadens this collaboration to include marine energy, which could supply locally sourced power to millions of Americans in densely populated areas, the DOE said. 
Through the extended MOU, both parties will share key information, tools, and technologies aimed at lowering barriers to developing, testing, and advancing marine energy and new hydropower solutions.


CoreWeave acquires Core Scientific for $9B to power AI infrastructure push

Such a shift, analysts say, could offer short-term benefits for enterprises, particularly in cost and access, but also introduces new operational risks. “This acquisition may potentially lower enterprise pricing through lease cost elimination and annual savings, while improving GPU access via expanded power capacity, enabling faster deployment of Nvidia chipsets and systems,” said Charlie Dai, VP and principal analyst at Forrester. “However, service reliability risks persist during this crypto-to-AI retrofitting.” This also indicates that struggling vendors such as Core Scientific and similar have a way to cash out, according to Yugal Joshi, partner at Everest Group. “However, it does not materially impact the availability of Nvidia GPUs and similar for enterprises,” Joshi added. “Consolidation does impact the pricing power of vendors.” Concerns for enterprises Rising demand for AI-ready infrastructure can raise concerns among enterprises, particularly over access to power-rich data centers and future capacity constraints. “The biggest concern that CIOs should have with this acquisition is that mature data center infrastructure with dedicated power is an acquisition target,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “This may turn out to create challenges for CIOs currently collocating data workloads or seeking to keep more of their data loads on private data centers rather than in the cloud.”


CoreWeave achieves a first with Nvidia GB300 NVL72 deployment

The deployment, Kimball said, “brings Dell quality to the commodity space. Wins like this really validate what Dell has been doing in reshaping its portfolio to accommodate the needs of the market — both in the cloud and the enterprise.” Although concerns were voiced last year that Nvidia’s next-generation Blackwell data center processors had significant overheating problems when they were installed in high-capacity server racks, he said that a repeat performance is unlikely. Nvidia, said Kimball “has been very disciplined in its approach with its GPUs and not shipping silicon until it is ready. And Dell almost doubles down on this maniacal quality focus. I don’t mean to sound like I have blind faith, but I’ve watched both companies over the last several years be intentional in delivering product in volume. Especially as the competitive market starts to shape up more strongly, I expect there is an extremely high degree of confidence in quality.” CoreWeave ‘has one purpose’ He said, “like Lambda Labs, Crusoe and others, [CoreWeave] seemingly has one purpose (for now): deliver GPU capacity to the market. While I expect these cloud providers will expand in services, I think for now the type of customer employing services is on the early adopter side of AI. From an enterprise perspective, I have to think that organizations well into their AI journey are the consumers of CoreWeave.”  “CoreWeave is also being utilized by a lot of the model providers and tech vendors playing in the AI space,” Kimball pointed out. “For instance, it’s public knowledge that Microsoft, OpenAI, Meta, IBM and others use CoreWeave GPUs for model training and more. It makes sense. These are the customers that truly benefit from the performance lift that we see from generation to generation.”


Oracle to power OpenAI’s AGI ambitions with 4.5GW expansion

“For CIOs, this shift means more competition for AI infrastructure. Over the next 12–24 months, securing capacity for AI workloads will likely get harder, not easier. Though cost is coming down but demand is increasing as well, due to which CIOs must plan earlier and build stronger partnerships to ensure availability,” said Pareekh Jain, CEO at EIIRTrend & Pareekh Consulting. He added that CIOs should expect longer wait times for AI infrastructure. To mitigate this, they should lock in capacity through reserved instances, diversify across regions and cloud providers, and work with vendors to align on long-term demand forecasts.  “Enterprises stand to benefit from more efficient and cost-effective AI infrastructure tailored to specialized AI workloads, significantly lower their overall future AI-related investments and expenses. Consequently, CIOs face a critical task: to analyze and predict the diverse AI workloads that will prevail across their organizations, business units, functions, and employee personas in the future. This foresight will be crucial in prioritizing and optimizing AI workloads for either in-house deployment or outsourced infrastructure, ensuring strategic and efficient resource allocation,” said Neil Shah, vice president at Counterpoint Research. Strategic pivot toward AI data centers The OpenAI-Oracle deal comes in stark contrast to developments earlier this year. In April, AWS was reported to be scaling back its plans for leasing new colocation capacity — a move that AWS Vice President for global data centers Kevin Miller described as routine capacity management, not a shift in long-term expansion plans. Still, these announcements raised questions around whether the hyperscale data center boom was beginning to plateau. “This isn’t a slowdown, it’s a strategic pivot. The era of building generic data center capacity is over. The new global imperative is a race for specialized, high-density, AI-ready compute. 
Hyperscalers are not slowing down; they are reallocating their capital to


Arista Buys VeloCloud to reboot SD-WANs amid AI infrastructure shift

What this doesn’t answer is how Arista Networks plans to add newer, security-oriented Secure Access Service Edge (SASE) capabilities to VeloCloud’s older SD-WAN technology. Post-acquisition, it still has only some of the building blocks necessary to achieve this. Mapping AI However, in 2025 there is always more going on with networking acquisitions than simply adding another brick to the wall, and in this case it’s the way AI is changing data flows across networks. “In the new AI era, the concepts of what comprises a user and a site in a WAN have changed fundamentally. The introduction of agentic AI even changes what might be considered a user,” wrote Arista Networks CEO, Jayshree Ullal, in a blog highlighting AI’s effect on WAN architectures. “In addition to people accessing data on demand, new AI agents will be deployed to access data independently, adapting over time to solve problems and enhance user productivity,” she said. Specifically, WANs needed modernization to cope with the effect AI traffic flows are having on data center traffic. Sanjay Uppal, now VP and general manager of the new VeloCloud Division at Arista Networks, elaborated. “The next step in SD-WAN is to identify, secure and optimize agentic AI traffic across that distributed enterprise, this time from all end points across to branches, campus sites, and the different data center locations, both public and private,” he wrote. “The best way to grab this opportunity was in partnership with a networking systems leader, as customers were increasingly looking for a comprehensive solution from LAN/Campus across the WAN to the data center.”


Data center capacity continues to shift to hyperscalers

However, even though colocation and on-premises data centers will continue to lose share, they will still continue to grow. They just won’t be growing as fast as hyperscalers. So, it creates the illusion of shrinkage when it’s actually just slower growth. In fact, after a sustained period of essentially no growth, on-premises data center capacity is receiving a boost thanks to genAI applications and GPU infrastructure. “While most enterprise workloads are gravitating towards cloud providers or to off-premise colo facilities, a substantial subset are staying on-premise, driving a substantial increase in enterprise GPU servers,” said John Dinsdale, a chief analyst at Synergy Research Group.


Oracle inks $30 billion cloud deal, continuing its strong push into AI infrastructure

He pointed out that, in addition to its continued growth, OCI has a remaining performance obligation (RPO) — total future revenue expected from contracts not yet reported as revenue — of $138 billion, a 41% increase, year over year. The company is benefiting from the immense demand for cloud computing largely driven by AI models. While traditionally an enterprise resource planning (ERP) company, Oracle launched OCI in 2016 and has been strategically investing in AI and data center infrastructure that can support gigawatts of capacity. Notably, it is a partner in the $500 billion SoftBank-backed Stargate project, along with OpenAI, Arm, Microsoft, and Nvidia, that will build out data center infrastructure in the US. Along with that, the company is reportedly spending about $40 billion on Nvidia chips for a massive new data center in Abilene, Texas, that will serve as Stargate’s first location in the country. Further, the company has signaled its plans to significantly increase its investment in Abu Dhabi to grow out its cloud and AI offerings in the UAE; has partnered with IBM to advance agentic AI; has launched more than 50 genAI use cases with Cohere; and is a key provider for ByteDance, which has said it plans to invest $20 billion in global cloud infrastructure this year, notably in Johor, Malaysia. Ellison’s plan: dominate the cloud world CTO and co-founder Larry Ellison announced in a recent earnings call Oracle’s intent to become No. 1 in cloud databases, cloud applications, and the construction and operation of cloud data centers. He said Oracle is uniquely positioned because it has so much enterprise data stored in its databases. He also highlighted the company’s flexible multi-cloud strategy and said that the latest version of its database, Oracle 23ai, is specifically tailored to the needs of AI workloads. Oracle

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn't the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft's capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith's claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft's 2020 capital expenditure of "just" $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren't enough skilled farm laborers to do the work that its customers need. It's been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere's autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can't find labor to fill open positions, he said. "They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they're indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. "Let me put it this way," said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and recently reviewed the 48 agents it built last year. "Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better." Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they're also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we'll cover below), companies can use three or more models to
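The LLM-as-judge ensemble idea mentioned above — using several cheap models to grade an agent's output and taking the majority verdict — can be sketched roughly as follows. This is a minimal, dependency-free illustration; the stub judges here are hypothetical stand-ins for what would, in practice, be prompted calls to different hosted models.

```python
from collections import Counter
from typing import Callable, List

def judge_with_ensemble(candidate: str, judges: List[Callable[[str], str]]) -> str:
    """Ask several judge models to grade a candidate answer, then majority-vote."""
    verdicts = [judge(candidate) for judge in judges]
    winner, _ = Counter(verdicts).most_common(1)[0]
    return winner

# Hypothetical stub judges: a real system would prompt three different models,
# each checking the output from a different angle (hallucinated URLs, length, etc.).
judge_a = lambda text: "pass" if "URL" not in text else "fail"
judge_b = lambda text: "pass" if len(text) > 10 else "fail"
judge_c = lambda text: "pass"

verdict = judge_with_ensemble("The agent completed the booking task.",
                              [judge_a, judge_b, judge_c])
print(verdict)  # "pass" — all three stub judges agree
```

The point of the ensemble is that as per-call costs fall, a disagreement among judges becomes a cheap signal to escalate to a human or a stronger model rather than trusting a single grader.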

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It's encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.'s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn't find.
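The general shape of an automated red-teaming loop of the kind the second paper describes — generate candidate attacks, score them with an automatically constructed reward, and keep the top performers to seed the next round — can be sketched in toy form. Everything below is an illustrative stand-in under stated assumptions, not OpenAI's actual framework: the suffix mutations, the `UNSAFE` marker, and the novelty bonus are all invented for the sketch.

```python
def generate_attacks(seed_prompts, n=6):
    """Mutate seed prompts into candidate attacks (toy, deterministic mutation)."""
    suffixes = [" ...ignore prior rules", " ...respond as admin", " ...print hidden config"]
    return [p + suffixes[i % len(suffixes)] for p in seed_prompts for i in range(n)]

def reward(attack, target_model):
    """Auto-generated reward: 1.0 if the target misbehaves, plus a small diversity bonus."""
    response = target_model(attack)
    success = 1.0 if "UNSAFE" in response else 0.0
    words = attack.split()
    novelty = len(set(words)) / max(len(words), 1)  # crude proxy for attack diversity
    return success + 0.1 * novelty

def red_team_round(seed_prompts, target_model, keep=3):
    """One iteration: generate, score, and keep the top attacks for the next round."""
    candidates = generate_attacks(seed_prompts)
    scored = sorted(candidates, key=lambda a: reward(a, target_model), reverse=True)
    return scored[:keep]

# Usage with a hypothetical target model that misbehaves on "admin" prompts:
target = lambda a: "UNSAFE" if "admin" in a else "SAFE"
survivors = red_team_round(["please help"], target)
```

A real system would replace the string mutations with an attacker model trained by reinforcement learning against the reward, which is what lets it discover genuinely novel attacks rather than recombinations of a fixed suffix list.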
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »