
Six Ways to Control Style and Content in Diffusion Models


Stable Diffusion 1.5/2.0/2.1/XL 1.0, DALL-E, Imagen… In the past years, Diffusion Models have showcased stunning quality in image generation. However, while producing great quality on generic concepts, they struggle to generate high-quality results for more specialised queries, for example, images in a specific style that was not frequently seen in the training dataset.

We could retrain the whole model on a vast number of images that explain the needed concepts from scratch. However, this is not practical: first, we would need a large set of images for the idea, and second, it is simply too expensive and time-consuming.

There are solutions, however, that, given a handful of images and at worst an hour of fine-tuning, enable diffusion models to produce reasonable quality on new concepts.

Below, I cover approaches such as DreamBooth, LoRA, hyper-networks, textual inversion, IP-Adapters and ControlNets, which are widely used to customize and condition diffusion models. The idea behind all these methods is to memorise a new concept we are trying to learn; however, each technique approaches it differently.

Diffusion architecture

Before diving into various methods that help to condition diffusion models, let’s first recap what diffusion models are.

Diffusion process visualisation. Image by the Author.

The original idea of diffusion models is to train a model to reconstruct a coherent image from noise. In the training stage, we gradually add small amounts of Gaussian noise (the forward process) and then reconstruct the image iteratively by optimizing the model to predict the noise; subtracting the predicted noise brings us closer to the target image (the reverse process).
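To make this concrete, here is a minimal, hedged sketch of a single training step in PyTorch: an image is corrupted with noise at a randomly sampled timestep, and a network is trained to predict that noise. The tiny network and the linear noise schedule are illustrative stand-ins, not the architecture or schedule used by Stable Diffusion.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the denoising network (real models use a full U-Net).
class TinyDenoiser(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x, t):
        return self.net(x)  # a real model would also condition on the timestep t

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # toy noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

model = TinyDenoiser()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x0 = torch.randn(8, 3, 64, 64)                     # a batch of (dummy) images
t = torch.randint(0, T, (x0.shape[0],))            # random timestep per sample
noise = torch.randn_like(x0)

# Forward process: corrupt x0 with the scheduled amount of Gaussian noise.
a = alphas_cumprod[t].view(-1, 1, 1, 1)
x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise

# Reverse-process objective: predict the injected noise (simple MSE loss).
loss = nn.functional.mse_loss(model(x_t, t), noise)
loss.backward()
optimizer.step()
```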

The original image-corruption idea has since evolved into a more practical and lightweight architecture in which images are first compressed to a latent space, and all manipulation of the added noise is performed in that low-dimensional space.

To add textual information to the diffusion model, we first pass it through a text encoder (typically CLIP) to produce a latent embedding, which is then injected into the model through cross-attention layers.
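A hedged sketch of how that injection works: the image latents act as queries and the text-encoder outputs as keys and values inside a cross-attention layer. The dimensions and module names below are illustrative, not the exact ones used by any particular checkpoint.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Image latents attend to text-encoder outputs (queries from image, keys/values from text)."""
    def __init__(self, latent_dim=320, text_dim=768):
        super().__init__()
        self.to_q = nn.Linear(latent_dim, latent_dim, bias=False)
        self.to_k = nn.Linear(text_dim, latent_dim, bias=False)
        self.to_v = nn.Linear(text_dim, latent_dim, bias=False)
        self.to_out = nn.Linear(latent_dim, latent_dim)

    def forward(self, latents, text_emb):
        q, k, v = self.to_q(latents), self.to_k(text_emb), self.to_v(text_emb)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.to_out(attn @ v)

latents = torch.randn(1, 4096, 320)   # flattened spatial latent tokens
text_emb = torch.randn(1, 77, 768)    # e.g. CLIP text embeddings (77 tokens)
out = CrossAttention()(latents, text_emb)
print(out.shape)                      # torch.Size([1, 4096, 320])
```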

Dreambooth visualisation. Trainable blocks are marked in red. Image by the Author.

The idea is to take a rare word (typically, an {SKS} token is used) and teach the model to map it to a feature we would like to learn. That might, for example, be a style the model has never seen, such as van Gogh's. We would show a dozen of his paintings and fine-tune on the phrase "A painting of boots in the {SKS} style". We could similarly personalise the generation and learn, for example, how to generate images of a particular person ("{SKS} in the mountains") by fine-tuning on a set of their selfies.

To maintain the information learned in the pre-training stage, Dreambooth encourages the model not to deviate too much from the original, pre-trained version by adding text-image pairs generated by the original model to the fine-tuning set.
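A minimal sketch of that prior-preservation idea, assuming a generic diffusion-loss helper: the loss on the new-concept images is combined with a loss on images generated by the original model, weighted by a hyperparameter. The placeholder denoiser and the omission of the noising step are simplifications so the sketch stays short and runnable.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, images, noise, t):
    """MSE between the model's noise prediction and the true noise (noising step omitted for brevity)."""
    return F.mse_loss(model(images, t), noise)

# Placeholder denoiser so the sketch runs; in practice this is the full diffusion U-Net.
conv = torch.nn.Conv2d(3, 3, 3, padding=1)
model = lambda x, t: conv(x)

instance_images = torch.randn(4, 3, 64, 64)   # a handful of "{SKS}" concept images
prior_images = torch.randn(4, 3, 64, 64)      # images generated by the *original* model
noise_i, noise_p = torch.randn_like(instance_images), torch.randn_like(prior_images)
t = torch.randint(0, 1000, (4,))

lambda_prior = 1.0  # weight of the prior-preservation term (hyperparameter)
loss = diffusion_loss(model, instance_images, noise_i, t) \
     + lambda_prior * diffusion_loss(model, prior_images, noise_p, t)
loss.backward()
```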

When to use and when not
Dreambooth produces the best quality across all methods; however, the technique can impact already learnt concepts since the whole model is updated. The training schedule also limits the number of concepts the model can absorb. Training is time-consuming, taking 1–2 hours. If we decide to introduce several new concepts, we would need to store a full model checkpoint for each of them, which wastes a lot of space.

Textual Inversion, paper, code

Textual inversion visualisation. Trainable blocks are marked in red. Image by the Author.

The assumption behind textual inversion is that the knowledge stored in the latent space of diffusion models is vast. Hence, the style or the condition we want to reproduce is already known to the model; we just don't have the token to access it. Thus, instead of fine-tuning the model to reproduce the desired output when fed a rare word like "in the {SKS} style", we optimize a textual embedding that results in the desired output.
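A hedged sketch of that optimization setup: everything is frozen except a single new token embedding, which is trained so that a (stand-in) diffusion loss on the concept images decreases. The toy embedding table, the splicing position, and the loss are illustrative assumptions standing in for the real text encoder and denoiser.

```python
import torch

# Frozen (toy) embedding table standing in for the text encoder's token embeddings.
vocab = torch.nn.Embedding(49408, 768)
vocab.weight.requires_grad_(False)

# One new, trainable embedding for the placeholder token "{SKS}".
sks_embedding = torch.nn.Parameter(torch.randn(768) * 0.01)
optimizer = torch.optim.AdamW([sks_embedding], lr=5e-3)

def prompt_embeddings(token_ids):
    """Look up prompt tokens and splice the trainable {SKS} vector into position 1 (illustrative)."""
    emb = vocab(token_ids).clone()
    emb[:, 1, :] = sks_embedding
    return emb

token_ids = torch.randint(0, 49408, (1, 77))   # e.g. "a painting in the {SKS} style"
emb = prompt_embeddings(token_ids)

# Stand-in for the (frozen) diffusion loss conditioned on the prompt embeddings.
loss = emb.pow(2).mean()
loss.backward()
optimizer.step()                               # only sks_embedding is updated
```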

When to use and when not
It takes very little space, as only the token embedding is stored. It is also relatively quick to train, with an average training time of 20–30 minutes. However, it comes with its shortcomings: as we are fine-tuning a single vector that guides the model to produce a particular style, it won't generalise beyond that style.

LoRA visualisation. Trainable blocks are marked in red. Image by the Author.

Low-Rank Adaptations (LoRA) were proposed for Large Language Models and were first adapted to diffusion models by Simo Ryu. The original idea of LoRA is that instead of fine-tuning the whole model, which can be rather costly, we blend a small set of new weights, fine-tuned for the task with a similar rare-token approach, into the original model.

In diffusion models, the rank decomposition is applied to the cross-attention layers, which are responsible for merging prompt and image information. LoRA is applied to the weight matrices W_Q, W_K, W_V, and W_O in these layers.
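A minimal LoRA sketch, assuming we wrap one frozen projection (say W_Q) with a low-rank update: the output becomes Wx + (alpha/r)·B·A·x, where only the small matrices A and B are trained. Rank, alpha, and dimensions below are arbitrary examples.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank=4, alpha=4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)    # update starts at zero, so behaviour starts at the identity
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

# Wrap a (toy) W_Q projection of a cross-attention layer.
w_q = nn.Linear(320, 320)
lora_w_q = LoRALinear(w_q, rank=8)
x = torch.randn(1, 4096, 320)
print(lora_w_q(x).shape)   # torch.Size([1, 4096, 320])
```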

When to use and when not
LoRAs take very little time to train (5–15 minutes), since we are updating a handful of parameters compared to the whole model, and unlike Dreambooth checkpoints, they take up much less space. However, models fine-tuned with LoRA generally show somewhat worse quality compared to DreamBooth.

Hyper-networks, paper, code

Hyper-networks visualisation. Trainable blocks are marked in red. Image by the Author.

Hyper-networks are, in some sense, extensions of LoRAs. Instead of directly learning the relatively small set of weights that alters the model's output, we train a separate network capable of predicting the weights for these newly injected layers.

By having the network predict the weights for a specific concept, we can teach the hypernetwork several concepts, reusing the same model for multiple tasks.
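A hedged sketch of that idea: a small network maps a concept embedding to the low-rank weights that get added onto a frozen attention projection, so one hypernetwork can serve several concepts. The concept-vector format, shapes, and the low-rank parameterisation are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    """Predicts a low-rank weight update (A, B) for one attention projection from a concept vector."""
    def __init__(self, concept_dim=64, in_features=320, out_features=320, rank=4):
        super().__init__()
        self.rank, self.in_f, self.out_f = rank, in_features, out_features
        self.net = nn.Sequential(
            nn.Linear(concept_dim, 256), nn.SiLU(),
            nn.Linear(256, rank * in_features + out_features * rank),
        )

    def forward(self, concept):
        flat = self.net(concept)
        A = flat[: self.rank * self.in_f].view(self.rank, self.in_f)
        B = flat[self.rank * self.in_f:].view(self.out_f, self.rank)
        return A, B

hyper = HyperNetwork()
concept = torch.randn(64)                 # embedding of, say, "van Gogh style" (illustrative)
A, B = hyper(concept)

x = torch.randn(1, 4096, 320)
frozen_w_q = nn.Linear(320, 320)
out = frozen_w_q(x) + x @ A.t() @ B.t()   # frozen projection plus the predicted low-rank update
print(out.shape)                          # torch.Size([1, 4096, 320])
```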

When to use and when not
Hypernetworks do not specialise in a single style but are instead capable of producing a plethora of them; as a result, they generally do not reach the quality of the other methods and can take significant time to train. On the plus side, they can store many more concepts than single-concept fine-tuning methods.

IP-adapter visualisation. Trainable blocks are marked in red. Image by the Author.

Instead of controlling image generation with text prompts, IP-Adapters propose a method to control the generation with an image, without any changes to the underlying model.

The core idea behind the IP adapter is a decoupled cross-attention mechanism that allows the combination of source images with text and generated image features. This is achieved by adding a separate cross-attention layer, allowing the model to learn image-specific features.
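A hedged sketch of decoupled cross-attention: the same queries attend to the text features and, through a newly added separate cross-attention, to the reference-image features; the two outputs are summed with a scale factor. Only the image projections would be newly trained. Dimensions are illustrative.

```python
import torch
import torch.nn as nn

def attention(q, k, v):
    w = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

class DecoupledCrossAttention(nn.Module):
    """Original text cross-attention plus a new, separate cross-attention over image features."""
    def __init__(self, dim=320, text_dim=768, image_dim=1024, scale=1.0):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k_text = nn.Linear(text_dim, dim, bias=False)
        self.to_v_text = nn.Linear(text_dim, dim, bias=False)
        # Only these image projections are newly trained for the adapter.
        self.to_k_img = nn.Linear(image_dim, dim, bias=False)
        self.to_v_img = nn.Linear(image_dim, dim, bias=False)
        self.scale = scale

    def forward(self, latents, text_emb, image_emb):
        q = self.to_q(latents)
        text_out = attention(q, self.to_k_text(text_emb), self.to_v_text(text_emb))
        image_out = attention(q, self.to_k_img(image_emb), self.to_v_img(image_emb))
        return text_out + self.scale * image_out

latents = torch.randn(1, 4096, 320)
text_emb = torch.randn(1, 77, 768)
image_emb = torch.randn(1, 4, 1024)   # features of the reference image
print(DecoupledCrossAttention()(latents, text_emb, image_emb).shape)  # torch.Size([1, 4096, 320])
```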

When to use and when not
IP-Adapters are lightweight, adaptable, and fast. However, their performance is highly dependent on the quality and diversity of the training data. IP-Adapters generally work better for supplying stylistic attributes that we would like to see in the generated image (e.g. an image of Marc Chagall's paintings) and can struggle with providing control over exact details, such as pose.

ControlNet visualisation. Trainable blocks are marked in red. Image by the Author.

The ControlNet paper proposes a way to extend the input of a text-to-image model to any modality, allowing for fine-grained control of the generated image.

In the original formulation, ControlNet is a trainable copy of the pre-trained diffusion model's encoder that takes, as input, the prompt, the noise, and the control data (e.g. a depth map, landmarks, etc.). To guide the generation, the intermediate activations of the ControlNet are then added to the activations of the frozen diffusion model.

The injection is achieved through zero-convolutions, where the weights and biases of 1×1 convolutions are initialized as zeros and gradually learn meaningful transformations during training. This is similar to how LoRAs are initialised: starting from zero, the update begins learning from the identity function.
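A minimal sketch of a zero-convolution: a 1×1 convolution whose weights and bias start at zero, so the ControlNet branch initially contributes nothing to the frozen model's activations and only gradually learns to. Shapes are illustrative.

```python
import torch
import torch.nn as nn

def zero_conv(channels):
    """1x1 convolution initialised to zero: contributes nothing at the start of training."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

frozen_activations = torch.randn(1, 320, 64, 64)   # from the frozen diffusion encoder
control_features = torch.randn(1, 320, 64, 64)     # from the trainable ControlNet copy

injection = zero_conv(320)
combined = frozen_activations + injection(control_features)
# At initialisation `combined == frozen_activations`; training moves the zero-conv away from zero.
print(torch.allclose(combined, frozen_activations))  # True
```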

When to use and when not
ControlNets are preferable when we want to control the output structure, for example through landmarks, depth maps, or edge maps. Because a full copy of the diffusion encoder has to be trained, training can be time-consuming; however, this method also allows for the finest-grained control through rigid control signals.

Summary

  • DreamBooth: Full fine-tuning of the model for custom subjects or styles; high level of control; however, it takes a long time to train and each fine-tuned model fits one purpose only.
  • Textual Inversion: Embedding-based learning for new concepts; low level of control, but fast to train.
  • LoRA: Lightweight fine-tuning for new styles/characters; medium level of control, and quick to train.
  • Hypernetworks: A separate model that predicts LoRA weights for a given control request; lower level of control over more styles; takes time to train.
  • IP-Adapter: Soft style/content guidance via reference images; medium level of stylistic control; lightweight and efficient.
  • ControlNet: Control via pose, depth, and edges; very precise; however, it takes longer to train.

Best practice: combining an IP-Adapter, with its softer stylistic guidance, and a ControlNet for pose and object arrangement typically produces the best results.
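As an illustration of that combination, here is a hedged sketch using the Hugging Face diffusers library; the model IDs, file names, and exact method signatures reflect recent releases and are assumptions to verify against your installed version, and the two local image paths are placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# ControlNet provides the rigid structural signal (here: Canny edges).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# IP-Adapter provides the softer stylistic guidance from a reference image.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

edge_map = load_image("canny_edges.png")    # structural control input (placeholder path)
style_image = load_image("style_ref.png")   # style reference image (placeholder path)

image = pipe(
    "a painting of boots",
    image=edge_map,                 # ControlNet conditioning
    ip_adapter_image=style_image,   # IP-Adapter conditioning
    num_inference_steps=30,
).images[0]
image.save("result.png")
```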

If you want to go into more detail on diffusion, check out this article, which I found very well written and accessible to any level of machine learning and maths. For an intuitive explanation of the maths with cool commentary, check out this video or this video.

For more information on ControlNets, I found this explanation very helpful; this article and this article could be a good intro as well.

Liked the author? Stay connected!

Have I missed anything? Do not hesitate to leave a note, comment or message me directly on LinkedIn or Twitter!

The opinions in this blog are my own and not attributable to or on behalf of Snap.
