Writer unveils ‘AI HQ’ platform, betting on agents to transform enterprise work

Enterprise AI company Writer unveiled a new platform today that it claims will help businesses finally bridge the gap between AI’s theoretical potential and real-world results. The product, called “AI HQ,” represents a significant shift toward autonomous AI systems that can execute complex workflows across organizations.

“This is not another hype train, but a massive change coming to enterprise software,” said May Habib, Writer’s CEO and co-founder, at a press conference announcing the product. “The vast majority of the enterprise has not gotten meaningful results from generative AI, and it’s been two years. There has never before been such a gap between what the tech is capable of and what the enterprise results have been.”

AI HQ is Writer’s answer to this problem—a platform for building, activating, and supervising AI “agents” that can perform sequences of tasks traditionally requiring human intervention. These agents can make decisions, reason through problems, and take actions across different systems with little human oversight.

How Writer’s AI agents move beyond chatbots to deliver real business value

The announcement comes as many enterprises reevaluate their AI strategies. According to Habib, most AI implementations have failed to deliver substantial value, with businesses struggling to move beyond basic generative AI use cases.

“Process mapping is the new prompt engineering,” Habib said, highlighting how the company’s approach has evolved beyond simply crafting the right text prompts to designing entire workflows for AI systems.

AI HQ consists of three main components: a development environment called Agent Builder where IT and business teams collaboratively create agents; Writer Home, which provides access to over 100 pre-built agents for specific industries and functions; and observability tools for monitoring and governing agent behavior at scale.

During a product demonstration, Writer executives showed how customers are already using these technologies. In one example, an investment management firm uses Writer’s agents to automatically generate fund reports and personalized market commentary by pulling data from Snowflake, SEC filings, and real-time web searches.

Another demonstration showed a marketing workflow where an agent could analyze a strategy brief, create a project in Adobe Workfront, generate content, find or create supporting images, and prepare the material for legal review.
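
Both demonstrations follow the same underlying pattern: an agent gathers context from several systems, reasons over it, and hands off a draft for review. The sketch below illustrates that pattern in plain Python, loosely modeled on the fund-reporting example; every function and data source name here is a hypothetical placeholder rather than Writer's actual Agent Builder interface.

```python
# Illustrative sketch of a multi-step agent workflow, loosely modeled on the
# fund-reporting example above. All function and source names are hypothetical;
# this is NOT Writer's Agent Builder API, just the general pattern of an agent
# chaining data retrieval, reasoning, and an output step.

from dataclasses import dataclass


@dataclass
class StepResult:
    name: str
    output: str


def pull_fund_data(fund_id: str) -> str:
    # Placeholder for a warehouse query (e.g., Snowflake) returning fund metrics.
    return f"NAV and holdings for {fund_id}"


def fetch_regulatory_filings(fund_id: str) -> str:
    # Placeholder for retrieving the latest SEC filings relevant to the fund.
    return f"Latest 10-K/10-Q excerpts related to {fund_id}"


def search_market_news(query: str) -> str:
    # Placeholder for a real-time web search step.
    return f"Recent headlines for '{query}'"


def draft_commentary(context: list[StepResult]) -> str:
    # Placeholder for an LLM call that turns the gathered context into prose.
    sources = ", ".join(r.name for r in context)
    return f"Draft market commentary synthesized from: {sources}"


def run_fund_report_agent(fund_id: str) -> str:
    """Execute the workflow end to end and return a draft report."""
    context = [
        StepResult("warehouse", pull_fund_data(fund_id)),
        StepResult("filings", fetch_regulatory_filings(fund_id)),
        StepResult("web", search_market_news(f"{fund_id} market outlook")),
    ]
    return draft_commentary(context)


if __name__ == "__main__":
    print(run_fund_report_agent("GROWTH-FUND-42"))
```

In a real deployment, each placeholder would call an actual connector and a language model, and the finished draft would be routed to a human reviewer, which is the supervision step AI HQ's observability tools are meant to cover.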

Enterprise AI that actually works: How Writer’s autonomous agents tackle complex business workflows

Writer’s pivot to agent-based AI reflects broader market trends. While many companies initially focused on using large language models for text generation and chat functions, businesses are increasingly exploring how AI can automate complex processes.

“Ten percent of the headcount is going to be enough,” Habib told Forbes in a recent interview about the potential workforce impact of agent technologies. This dramatic assertion underscores both the transformative potential of these technologies and the disruption they may bring to knowledge work.

Anna Griffin, Chief Marketing Officer at cybersecurity firm Commvault and an early adopter of Writer’s agent technology, spoke during the press conference about the value of connecting previously siloed systems.

“What if I could connect our Salesforce, Gainsight, Optimizely? What if I could pull together enough of the insights across these systems that we could actually work to create an experience for our customer that is seamless?” Griffin said. Her advice for others: “Think about the hardest, gnarliest problem your industry has, and start thinking about how agentic AI is going to solve that.”

The future of AI learning: Writer’s self-evolving models remember mistakes and learn without retraining

The event also featured a presentation from Waseem AlShikh, Writer’s co-founder and CTO, who unveiled research into “self-evolving models” — AI systems that can learn from their mistakes over time without additional training.

“If we expect AI to behave more like a human, we need it to learn more like a human,” AlShikh explained. He demonstrated how traditional AI models repeatedly make the same errors when faced with a maze challenge, while self-evolving models remember past failures and find better solutions.

“This unique architecture means that over time, as the model is used, it gains knowledge — a model that gets smarter the more you engage with it,” AlShikh said. Writer expects to have self-evolving models in pilot by the end of the year.
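
Writer has not published the architecture, but the behavior AlShikh demonstrated, where the model avoids previously failed paths without any retraining, can be approximated with an external memory that persists across attempts. The toy sketch below makes that idea concrete; it is an assumption-laden illustration in plain Python, not Writer's self-evolving model.

```python
# Toy illustration of "learning from mistakes without retraining": the search
# logic never changes, but a memory of dead ends persists across attempts and
# prunes the search on later runs. Hypothetical sketch only, not Writer's
# self-evolving model architecture.

MAZE = [
    "S..#",
    ".#.#",
    ".#..",
    "#..G",
]

dead_ends: set[tuple[int, int]] = set()  # memory that persists across attempts


def solve(maze: list[str]) -> tuple[bool, int]:
    """Depth-first search that skips cells remembered as dead ends."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    visited: set[tuple[int, int]] = set()
    explored = 0

    def dfs(r: int, c: int) -> bool:
        nonlocal explored
        explored += 1
        if maze[r][c] == "G":
            return True
        visited.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (
                0 <= nr < rows
                and 0 <= nc < cols
                and maze[nr][nc] != "#"
                and (nr, nc) not in visited
                and (nr, nc) not in dead_ends  # skip remembered mistakes
            ):
                if dfs(nr, nc):
                    return True
        dead_ends.add((r, c))  # remember that this cell led nowhere
        return False

    return dfs(*start), explored


if __name__ == "__main__":
    for attempt in (1, 2):
        solved, explored = solve(MAZE)
        print(f"attempt {attempt}: solved={solved}, cells explored={explored}")
```

On the second attempt the solver explores fewer cells because it skips the dead ends recorded on the first run; nothing about the search procedure changes, only the accumulated memory grows.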

Inside Writer’s $1.9 billion valuation: How enterprise AI adoption is driving explosive growth

Writer’s aggressive expansion comes after raising $200 million in Series C funding last November, which valued the company at $1.9 billion. The funding round was co-led by Premji Invest, Radical Ventures, and ICONIQ Growth, with participation from major enterprise players including Salesforce Ventures, Adobe Ventures, and IBM Ventures.

The company has seen impressive growth, with a reported 160% net retention rate, meaning revenue from existing customers grows by roughly 60% on average after initial adoption. According to a Forbes report published today, some clients have grown from initial contracts of $200,000-$300,000 to spending approximately $1 million each.

Writer’s approach differs from competitors like OpenAI and Anthropic, which have raised billions but focus more on developing general-purpose AI models. Instead, Writer has developed its own models — named Palmyra — specifically designed for enterprise use cases.

“We trained our own models even though everyone advised against it,” AlShikh told Forbes. This strategy has allowed Writer to create AI that’s more secure for enterprise deployment, as client data is retrieved from dedicated servers and isn’t used to train models, mitigating concerns about sensitive information leaks.

Navigating the $114 billion enterprise AI market: Opportunities and obstacles ahead

Writer’s ambitions face obstacles in a competitive landscape. The enterprise AI software market — projected to grow from $58 billion to $114 billion by 2027 — is attracting intense competition from established tech giants and well-funded startups alike.

Paul Dyrwal, VP of Generative AI at Marriott who appeared at Writer’s press conference, shared advice for enterprises navigating this rapidly evolving field: “Focus on fewer, higher-value opportunities rather than chasing every possibility.”

The announcement also comes amid growing concerns about AI’s impact on jobs. While Habib acknowledged that AI will change work dramatically, she painted an optimistic picture of the transition.

“Your people are instrumental to redesigning your processes to be AI-native and shaping what the future of work looks like,” she said. “We think that very soon, on a horizon of five to 10 years, we won’t be doing work as much as we will be building AI that does the work. This will create exciting new roles, new AI-related jobs that are interesting and rewarding.”

From software vendor to innovation partner: Writer’s vision for AI-native enterprise transformation

As Writer positions itself at the forefront of enterprise AI, Habib emphasized that the company sees itself as more than just a software vendor.

“We’re not a software vendor here. We see ourselves as more than that. We’re your innovation partners,” she said. “If you want to rebuild your company to be AI-native, if you want to be part of the most important enterprise transformation maybe ever, go sign up to be in the Writer agent beta right now. Together, we can dream big and build fast.”

The Agent Builder and observability tools are currently in beta, with general availability expected later this spring, while the Writer Home and library of ready-to-use agents are available to all customers starting today.
