Microsoft makes powerful Phi-4 model fully open source on Hugging Face

Even as its big investment partner OpenAI continues to announce more powerful reasoning models such as the latest o3 series, Microsoft is not sitting idly by. Instead, it’s pursuing the development of more powerful small models released under its own brand name.

As announced by several current and former Microsoft researchers and AI scientists today on X, Microsoft is releasing its Phi-4 model as a fully open-source project with downloadable weights on Hugging Face, the AI code-sharing community.

“We have been completely amazed by the response to [the] phi-4 release,” wrote Microsoft AI principal research engineer Shital Shah on X. “A lot of folks had been asking us for weight release. [A f]ew even uploaded bootlegged phi-4 weights on HuggingFace…Well, wait no more. We are releasing today [the] official phi-4 model on HuggingFace! With MIT licence (sic)!!”

Weights are the numerical values that determine how an AI language model, small or large, processes and produces language and data. They are set during training, typically through self-supervised deep learning, as the model learns which outputs to produce for the inputs it receives. Alongside weights, a model also learns related parameters called biases, which shift the outputs of individual units; researchers and model creators can further adjust both through additional training or fine-tuning. A model is generally not considered fully open source unless its weights have been made public, as this is what enables other researchers to take the model and fully customize it or adapt it to their own ends.
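To make the terminology concrete, here is a toy PyTorch snippet (an illustration, not from Microsoft’s release) showing that weights and biases are simply the learned numbers inside a model’s layers:

```python
# Toy illustration: a single linear layer's weights and biases are
# exactly the numerical parameters adjusted during training.
import torch

layer = torch.nn.Linear(in_features=4, out_features=2)
print(layer.weight.shape)  # torch.Size([2, 4]) -- the weights
print(layer.bias.shape)    # torch.Size([2])    -- the biases
```

A full language model like Phi-4 is built from billions of such parameters; releasing the weights means publishing all of those numbers.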

Although Microsoft first revealed Phi-4 last month, its use was initially restricted to the company’s new Azure AI Foundry development platform.

Now, Phi-4 is available outside that proprietary service to anyone who has a Hugging Face account, and comes with a permissive MIT License, allowing it to be used for commercial applications as well.

This release provides researchers and developers with full access to the model’s 14 billion parameters, enabling experimentation and deployment without the resource constraints often associated with larger AI systems.
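In practice, anyone with a Hugging Face account can pull the weights down with a few lines of code. The following is a minimal sketch using the Hugging Face transformers library; it assumes the weights are published under the microsoft/phi-4 repository ID and that the machine has enough memory for a 14-billion-parameter model:

```python
# Minimal sketch: loading and prompting Phi-4 via transformers.
# Assumes the "microsoft/phi-4" repo ID and sufficient GPU/CPU memory
# for a 14B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to cut memory use
    device_map="auto",           # spread layers across available devices
)

prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```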

A shift toward efficiency in AI

Phi-4 first launched on Microsoft’s Azure AI Foundry platform in December 2024, where developers could access it under a research license agreement.

The model quickly gained attention for outperforming many larger counterparts in areas like mathematical reasoning and multitask language understanding, all while requiring significantly fewer computational resources.

The model’s streamlined architecture and focus on reasoning and logic are intended to address the growing need for high-performing AI that remains efficient in compute- and memory-constrained environments. With this open-source release under a permissive MIT License, Microsoft is making Phi-4 accessible to a far wider audience of researchers and developers, including commercial ones, signaling a potential shift in how the AI industry approaches model design and deployment.

What makes Phi-4 stand out?

Phi-4 excels in benchmarks that test advanced reasoning and domain-specific capabilities. Highlights include:

• Scoring over 80% on challenging benchmarks like MATH and MGSM, outperforming models such as Google’s Gemini Pro and OpenAI’s GPT-4o-mini.

• Superior performance in mathematical reasoning tasks, a critical capability for fields such as finance, engineering and scientific research.

• Impressive results in HumanEval for functional code generation, making it a strong choice for AI-assisted programming.

In addition, Phi-4’s architecture and training process were designed with precision and efficiency in mind. The 14-billion-parameter dense decoder-only transformer was trained on 9.8 trillion tokens of curated and synthetic data, including:

• Publicly available documents rigorously filtered for quality.

• Textbook-style synthetic data focused on math, coding and common-sense reasoning.

• High-quality academic books and Q&A datasets.

The training data also included multilingual content (8%), though the model is primarily optimized for English-language applications.

Its creators at Microsoft say that the safety and alignment processes, including supervised fine-tuning and direct preference optimization, ensure robust performance while addressing concerns about fairness and reliability.

The open-source advantage

By making Phi-4 available on Hugging Face with its full weights and an MIT License, Microsoft is opening it up for businesses to use in their commercial operations.

Developers can now incorporate the model into their projects or fine-tune it for specific applications without the need for extensive computational resources or permission from Microsoft.
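For teams that want to adapt the model cheaply, parameter-efficient approaches such as LoRA keep most of the 14 billion weights frozen. The sketch below uses the peft library as one illustrative option (the article does not prescribe a fine-tuning method), and the target module names are assumptions that may need adjusting for Phi-4’s actual architecture:

```python
# Hypothetical LoRA fine-tuning setup with the peft library.
# "q_proj"/"v_proj" are typical attention projection names in
# transformer models and are assumed here, not confirmed for Phi-4.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-4")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank updates
    lora_alpha=32,                        # scaling for adapter outputs
    target_modules=["q_proj", "v_proj"],  # assumed module names
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% trainable
# From here, `model` can be passed to a standard transformers Trainer
# with a task-specific dataset.
```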

This move also aligns with the growing trend of open-sourcing foundational AI models to foster innovation and transparency. Unlike proprietary models, which are often limited to specific platforms or APIs, Phi-4 can be downloaded, inspected and adapted freely, giving it broader accessibility and adaptability.

Balancing safety and performance

With Phi-4’s release, Microsoft emphasizes the importance of responsible AI development. The model underwent extensive safety evaluations, including adversarial testing, to minimize risks like bias, harmful content generation, and misinformation.

However, developers are advised to implement additional safeguards for high-risk applications and to ground outputs in verified contextual information when deploying the model in sensitive scenarios.

Implications for the AI landscape

Phi-4 challenges the prevailing trend of scaling AI models to massive sizes. It demonstrates that smaller, well-designed models can achieve comparable or superior results in key areas.

This efficiency not only reduces costs but also lowers energy consumption, making advanced AI capabilities more accessible to mid-sized organizations and enterprises with limited computing budgets.

As developers begin experimenting with the model, we’ll soon see if it can serve as a viable alternative to rival commercial and open-source models from OpenAI, Anthropic, Google, Meta, DeepSeek and many others.

