
2025 has already brought us the most performant AI ever: What can we do with these supercharged capabilities (and what’s next)?

The latest AI large language model (LLM) releases, such as Claude 3.7 from Anthropic and Grok 3 from xAI, are often performing at PhD levels — at least according to certain benchmarks. This accomplishment marks the next step toward what former Google CEO Eric Schmidt envisions: A world where everyone has access to “a great polymath,” an AI capable of drawing on vast bodies of knowledge to solve complex problems across disciplines.

Wharton Business School Professor Ethan Mollick noted on his One Useful Thing blog that these latest models were trained using significantly more computing power than GPT-4 at its launch two years ago, with Grok 3 trained on up to 10 times as much compute. He added that this would make Grok 3 the first “gen 3” AI model, emphasizing that “this new generation of AIs is smarter, and the jump in capabilities is striking.”

For example, Claude 3.7 shows emergent capabilities, such as anticipating user needs and the ability to consider novel angles in problem-solving. According to Anthropic, it is the first hybrid reasoning model, combining a traditional LLM for fast responses with advanced reasoning capabilities for solving complex problems.
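To make the "hybrid" idea concrete, here is a minimal sketch of how a developer might request either a fast answer or an extended-reasoning answer from the same model. It assumes the Anthropic Python SDK's Messages API and the extended-thinking parameter Anthropic documented alongside Claude 3.7 Sonnet; the model ID and parameter shape shown here are assumptions and may differ from current documentation.

```python
# Sketch: one hybrid model, two modes (assumed Anthropic SDK usage, not verbatim docs).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Fast path: an ordinary completion with no extended reasoning.
quick = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # model ID at launch; treat as an assumption
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize CRISPR in two sentences."}],
)

# Reasoning path: the same model, given a thinking-token budget so it can work
# through a harder problem step by step before committing to an answer.
deliberate = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=8000,  # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4000},  # assumed parameter shape
    messages=[{"role": "user", "content": "Design a week-long experiment to compare two fertilizers."}],
)

print(quick.content[0].text)
```

The point of the design, as Anthropic describes it, is that the same model serves both paths; the thinking budget simply buys more intermediate reasoning before the final response.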

Mollick attributed these advances to two converging trends: The rapid expansion of compute power for training LLMs, and AI’s increasing ability to tackle complex problem-solving (often described as reasoning or thinking). He concluded that these two trends are “supercharging AI abilities.”

What can we do with this supercharged AI?

In a significant step, OpenAI launched its “deep research” AI agent at the beginning of February. In his review on Platformer, Casey Newton commented that deep research appeared “impressively competent.” Newton noted that deep research and similar tools could significantly accelerate research, analysis and other forms of knowledge work, though their reliability in complex domains is still an open question.

Based on a variant of the still-unreleased o3 reasoning model, deep research can engage in extended reasoning over long durations. It does this using chain-of-thought (CoT) reasoning, breaking complex tasks into multiple logical steps, much as a human researcher might refine their approach. It can also search the web, giving it access to more up-to-date information than what is in the model’s training data.
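The pattern described above can be illustrated with a toy, self-contained sketch: decompose a question into sub-questions, ground each one in a web search, then synthesize a report. This is not OpenAI's implementation; the llm() and web_search() helpers are hypothetical stand-ins for a model API and a search API.

```python
# Toy illustration of an agentic chain-of-thought research loop.
# llm() and web_search() are hypothetical placeholders, not real APIs.
from typing import Callable


def deep_research(question: str,
                  llm: Callable[[str], str],
                  web_search: Callable[[str], str],
                  max_steps: int = 5) -> str:
    # 1. Ask the model to break the question into sub-questions (chain of thought).
    plan = llm(f"List up to {max_steps} sub-questions needed to answer: {question}")
    sub_questions = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

    # 2. Answer each sub-question, grounding it in fresh search results.
    findings = []
    for sub_q in sub_questions[:max_steps]:
        evidence = web_search(sub_q)  # pulls information newer than the training data
        findings.append(llm(f"Answer '{sub_q}' using this evidence:\n{evidence}"))

    # 3. Synthesize the intermediate answers into a final report.
    notes = "\n\n".join(findings)
    return llm(f"Write a structured report answering '{question}' from these notes:\n{notes}")


if __name__ == "__main__":
    # Stub implementations so the sketch runs without any external services.
    echo_llm = lambda prompt: f"[model output for: {prompt[:60]}...]"
    echo_search = lambda query: f"[search results for: {query[:60]}...]"
    print(deep_research("How do hydrogen electrolysis plants work?", echo_llm, echo_search))
```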

Writing in Understanding AI, Timothy Lee described several tests that experts ran on deep research, noting that “its performance demonstrates the impressive capabilities of the underlying o3 model.” One test asked for directions on how to build a hydrogen electrolysis plant. Commenting on the quality of the output, a mechanical engineer “estimated that it would take an experienced professional a week to create something as good as the 4,000-word report OpenAI generated in four minutes.”

But wait, there’s more…

Google DeepMind also recently released “AI co-scientist,” a multi-agent AI system built on its Gemini 2.0 LLM and designed to help scientists generate novel hypotheses and research plans. Imperial College London has already demonstrated the tool’s value. According to Professor José R. Penadés, his team spent years unraveling why certain superbugs resist antibiotics; the AI replicated their findings in just 48 hours. While the AI dramatically accelerated hypothesis generation, human scientists were still needed to confirm the findings. Nevertheless, Penadés said the new AI application “has the potential to supercharge science.”

What would it mean to supercharge science?

Last October, Anthropic CEO Dario Amodei wrote in his “Machines of Loving Grace” blog that he expected “powerful AI” — his term for what most call artificial general intelligence (AGI) — would lead to “the next 50 to 100 years of biological [research] progress in 5 to 10 years.” Four months ago, the idea of compressing up to a century of scientific progress into a single decade seemed extremely optimistic. With the recent advances in AI models now including Anthropic Claude 3.7, OpenAI deep research and Google AI co-scientist, what Amodei referred to as a near-term “radical transformation” is starting to look much more plausible.

However, while AI may fast-track scientific discovery, biology, at least, is still bound by real-world constraints — experimental validation, regulatory approval and clinical trials. The question is no longer whether AI will transform science (as it certainly will), but rather how quickly its full impact will be realized.

In a February 9 blog post, OpenAI CEO Sam Altman claimed that “systems that start to point to AGI are coming into view.” He described AGI as “a system that can tackle increasingly complex problems, at human level, in many fields.”  

Altman believes achieving this milestone could unlock a near-utopian future in which the “economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families and can fully realize our creative potential.”

A dose of humility

These AI advances are hugely significant and portend a very different future arriving in a short span of time. Yet AI’s meteoric rise has not been without stumbles. Consider the recent downfall of the Humane AI Pin — a device hyped as a smartphone replacement after a buzzworthy TED Talk. Barely a year later, the company collapsed, and its remnants were sold off for a fraction of their once-lofty valuation.

Real-world AI applications often face significant obstacles, from lack of relevant expertise to infrastructure limitations. This has certainly been the experience of Sensei Ag, a startup backed by one of the world’s wealthiest investors. The company set out to apply AI to agriculture by breeding improved crop varieties and using robots for harvesting, but it has hit major hurdles. According to the Wall Street Journal, the startup has faced setbacks ranging from technical challenges to unexpected logistical difficulties, highlighting the gap between AI’s potential and its practical implementation.

What comes next?

As we look to the near future, science is on the cusp of a new golden age of discovery, with AI becoming an increasingly capable partner in research. Deep-learning algorithms working in tandem with human curiosity could unravel complex problems at record speed as AI systems sift vast troves of data, spot patterns invisible to humans and suggest cross-disciplinary hypotheses.

Already, scientists are using AI to compress research timelines — predicting protein structures, scanning literature and reducing years of work to months or even days — unlocking opportunities across fields from climate science to medicine.

Yet, as the potential for radical transformation becomes clearer, so too do the looming risks of disruption and instability. Altman himself acknowledged in his blog that “the balance of power between capital and labor could easily get messed up,” a subtle but significant warning that AI’s economic impact could be destabilizing.

This concern is already materializing: Hong Kong recently cut 10,000 civil service jobs while simultaneously ramping up AI investments. If such trends continue and expand, we could see widespread workforce upheaval, heightened social unrest and intense pressure on institutions and governments worldwide.

Adapting to an AI-powered world

AI’s growing capabilities in scientific discovery, reasoning and decision-making mark a profound shift that presents both extraordinary promise and formidable challenges. While the path forward may be marked by economic disruptions and institutional strains, history has shown that societies can adapt to technological revolutions, albeit not always easily or without consequence.

To navigate this transformation successfully, societies must invest in governance, education and workforce adaptation to ensure that AI’s benefits are equitably distributed. Even as AI regulation faces political resistance, scientists, policymakers and business leaders must collaborate to build ethical frameworks, enforce transparency standards and craft policies that mitigate risks while amplifying AI’s transformative impact. If we rise to this challenge with foresight and responsibility, people and AI can tackle the world’s greatest challenges, ushering in a new age with breakthroughs that once seemed impossible.
