Stay Ahead, Stay ONMINE

BTC 🏃🏽‍♂️👆🏽 ➡️ Hashprice Relief 😮‍💨 + BITF-RIOT Chill & Swan’s Legal 🎢 Kicks Off


W39 ’24 | 9.23-9.29.24 | Issue XCIX | Block Height 863404

Welcome to the latest issue of the Vibe Check, your weekly source at the intersection of Bitcoin, Energy, and Bitcoin Mining.

Grab a ☕ and start the week with all the metrics and stories that shake and bake the Bitcoin Mining industry.

Subscribe and share with your friends, colleagues, and family!


W39 ‘24 Vibe Check

  • The Overview Vibe

  • Weekly Industry Metrics

  • Headlines & News

  • The Media Vibe

  • Energy Corner

  • The Meme Vibe


The Ohio Blockchain Council is running it back this year with AMPLIFY on Oct 3rd in Columbus, Ohio. Join leaders from the Ohio Bitcoin mining, energy, and digital infrastructure sectors for a day of education, networking, and opportunity! Get your Tickets HERE and use “VibeCheck” for 21% off. For speaking or sponsorship opportunities, reach out to [email protected].

The Overview Vibe

Bitcoin rallied to ~$65,000 this week as ETF inflows topped 10K BTC and Binance founder CZ was released from jail. While hashrate ticked up before settling marginally lower at ~631 EH/s, the ~4% downward difficulty adjustment provided a levered increase of ~9% in hashprice. Hashprice tagged ~$48/PH/Day, a nearly two-month high. Miners can look to capitalize on this short-term dynamic: keeping ASICs on through downward difficulty adjustments and BTC price run-ups can help heal the scars from the last quarter of low hashprice.
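For intuition on that leverage, hashprice is just the network's daily USD revenue spread over its hashrate, so a ~4% hashrate/difficulty drop compounds with a ~5% price rise into a roughly 9–10% hashprice bump. A back-of-the-envelope sketch in Python, using this issue's EOW numbers (the average fee-per-block figure is an illustrative assumption, not from this issue):

```python
# Rough hashprice sanity check: network USD revenue per day / network PH/s.
BLOCKS_PER_DAY = 144          # ~one block every 10 minutes
SUBSIDY_BTC = 3.125           # post-April-2024-halving block subsidy
FEES_BTC = 0.05               # assumed average fees per block (illustrative)
BTC_PRICE = 65_863.80         # EOW price from this issue's metrics
HASHRATE_EH = 631             # network hashrate in EH/s (1 EH/s = 1,000 PH/s)

network_usd_per_day = BLOCKS_PER_DAY * (SUBSIDY_BTC + FEES_BTC) * BTC_PRICE
hashprice = network_usd_per_day / (HASHRATE_EH * 1_000)
print(f"~${hashprice:.2f}/PH/Day")
```

This lands within pennies of the ~$47.55/PH/Day quoted below; the gap is down to the fee assumption and averaging windows.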

Wall Street projected bullish vibes, with BNY Mellon being granted an SEC exemption to custody non-ETF digital assets. BlackRock changed its agreement with Coinbase to mandate on-chain withdrawals every 12 hours, derisking any funny business on behalf of its custodian. Payments giant PayPal launched buy/sell/hodl capabilities for its business accounts, opening the rails for BTC treasury strategies to any business user looking to get their Michael Saylor on.

The public sector stayed calm before monthly updates due next week, with BITF and RIOT announcing a settlement to their spicy summer takeover novela. The settlement tl;dr: 1) Amy Freedman (RIOT’s board nominee) is appointed to the board ASAP (with Andrés Finkielsztain stepping down), 2) RIOT withdraws its June 24, 2024 requisition for a special board meeting, and 3) RIOT is granted certain rights to purchase more shares. It seems that, for now, both firms will chill baby chill and focus on their hashrate growth.

Following up on their ASIC hosting partnership with Bitmain last week, Hut 8 announced their GPU-as-a-Service business, deploying 1,000 NVIDIA H100 GPUs at a third-party data center in Chicago. This is in line with Cathedra’s Drew Armstrong’s post to the effect of “hey, don’t build a Tier 3 data center if you are getting into HPC.” Bitdeer released the results of their SEAL02 chip, boasting ~13 J/TH efficiency. Sheeessh! Check out pennyether’s commentary here.

Across the private sector, Swan Bitcoin filed a lawsuit against former employees and consultants of its Bitcoin mining business. Swan alleges that they stole software IP (lol, DASHBOARDS ARE NOT IP IN BITCOIN MINING, tell them, Marshall, lol) and its overall mining business, implying that the mining management team and Tether conspired to cut Swan out. While I am sure there is much yet to learn in the public forum of the courts, the lawsuit makes Swan look pretty salty about a deal where it had marginal value-add, if any at all. Remember kids, work on being objectively value-add and chill. The moment you aren’t doing either, too bad so sad.

So I forgot to sub one of my players on my fantasy football team this week and I am def gonna lose 🙁 I hope y’all had a great weekend, called your mom, and stayed hydrated! Ping me if y’all are at Amplify in Ohio this week, would love to see y’all there!


Bitcoin/Mining Metrics

  • BTC price2: ~$65,863.80 @ EOW. +4.83% WOW.

  • Hashprice1: $47.55/PH/Day @ EOW. +9.92% WOW.

  • Network Hashrate (SMA 7 Day)1: ~631 EH/s @ EOW. -0.094% WOW.

  • Difficulty1: 88.4 T @ EOW. -4.40% WOW.

  • ASIC Retail Price (s19/m30 family)1: $6.05/TH/s @ EOW. 0.00% WOW.

Sources: Hashrate Index1, Bitbo2

Weekly Hashprice – Block Height 863397 – Hashrate Index

Weekly Hashrate – Block Height 863397 – Hashrate Index

Mempool Stats – Block Height 863397 – mempool.space

Mining Stats – Block Height 863397 – mempool.space

Headlines & News

Featured

  • Riot, Bitfarms Reach Settlement in Hostile Bitcoin Mining Takeover Bid – TheMinerMag.

  • Swan Bitcoin sues former employees, alleges theft of bitcoin mining business – BlockSpace Media.

  • Bitdeer Completes Testing of its Latest SEAL02 Bitcoin Mining Chip – Press Release.

  • Binance Founder Changpeng ‘CZ’ Zhao Is a Free Man – CoinDesk.

General PubCo Updates

  • IREN achieves 20 EH/s milestone – Press Release.

  • Hut 8 GPU-as-a-Service Vertical Goes Live with Inaugural Deployment – Press Release.

  • LM Funding America, Inc. reported in its monthly update that its 135.7 BTC holdings were valued at approximately $8.7 million as of August 31, 2024 – Press Release.

Capital Markets & M&A

  • Bitfarms Co-Founder Sells Shares After Stepping Down from Board – TheMinerMag.

Regulatory/Legal Updates

  • Rhodium Seeks Court Order for Bitcoin Mining Sales in Chapter 11 – TheMinerMag.

  • Revolve Labs Withdraws Bitcoin Mining Proposal After Minnesota Residents’ Outcry – TheMinerMag.

Hardware/Tech Updates

Research/Reports/Newsletters


The Media Vibe


Energy Corner


The Meme Vibe


Subscribe and keep your eye out for the development of the Vibe Check throughout 2024!

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Equinor starts production at Bacalhau field offshore Brazil

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: #c19a06; } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style

Read More »

AI’s dark side shows in Gartner’s top predictions for IT orgs

Increasing legal claims against AI-induced safety problems related to autonomous vehicle or medical accidents are also a mounting concern, Plummer stated. By the end of 2026, “death by AI” legal claims will exceed 1,000 globally due to insufficient AI risk guardrails, Plummer stated. “As regulatory scrutiny intensifies, organizations will face pressure

Read More »

AI dominates Gartner’s top strategic technology trends for 2026

“AI supercomputing platforms integrate CPUs, GPUs, AI ASICs, neuromorphic and alternative computing paradigms, enabling organizations to orchestrate complex workloads while unlocking new levels of performance, efficiency and innovation. These systems combine powerful processors, massive memory, specialized hardware, and orchestration software to tackle data-intensive workloads in areas like machine learning, simulation,

Read More »

IBM signs up Groq for speedy AI inferencing option

The technology involved in the partnership will let customers use watsonx capabilities in a familiar way and allow them to use their preferred tools while accelerating inference with GroqCloud, IBM stated. “This integration will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the

Read More »

IEA: Global oil market to see huge oversupply

In its most recent Oil Market Monthly Report, the International Energy Agency (IEA) lowered its projections for oil demand growth for this year, while increasing its expectations for supply growth, indicating a significant supply overhang. IEA highlights that global oil inventories are already surging, particularly oil stored in tankers on water. IEA data shows that global oil demand actually expanded by 750,000 b/d year-on-year (y-o-y) in third-quarter 2025, led by a rebound in petrochemical feedstock use following the tariff-induced slowdown in second-quarter 2025. However, the agency expects oil consumption to stay subdued through the rest of 2025 and in 2026, as weaker macroeconomic conditions and rising transport electrification weigh on demand. Annual demand growth is now projected at around 700,000 b/d for both years, down from 740,000 b/d expected for 2025 in IEA’s September report. This growth is well below the 980,000 b/d pace seen in 2024 and markedly lower than the 1.3 million b/d average during the 2010s. Demand for 2025 and 2026 is now projected to be 103.8 million b/d and 104.5 million b/d, respectively. Total global oil supply increased by 760,000 b/d month-on-month (m-o-m) in September to reach 108 million b/d, driven by a 1 million b/d surge in OPEC+ output, primarily from the Middle East. In this month’s report, IEA forecasts that global oil supply will increase by 3 million b/d to reach 106.1 million b/d in 2025, followed by an additional rise of 2.4 million b/d in 2026. This projection exceeds the September forecast of 2.7 million b/d supply growth for 2025 and 2.1 million b/d growth next year.  Non-OPEC+ producers account for 1.6 million b/d of this year’s growth and 1.2 million b/d next year, led by the US, Brazil, Canada, Guyana, and Argentina. OPEC+ output is expected to add 1.4 million b/d in 2025 and

Read More »

bp-led Shah Deniz consortium lets $700 million in compression project contracts

The bp plc-led Shah Deniz consortium has awarded three offshore contracts to advance the Shah Deniz compression (SDC) project in Azerbaijan. The $2.9-billion SDC project was sanctioned earlier this year. The aim is to access and produce low pressure gas reserves in Shah Deniz gas field and maximize resource recovery. In first-half 2025, the field produced around 14 billion cu m of gas and about 16 million bbl of condensate in total from the Shah Deniz Alpha and Shah Deniz Bravo platforms. Production capacty of existing Shah Deniz infrastructure is about 77.2 million cu m/day of gas. The contracts, with a combined value of about $700 million, have been awarded to the Saipem-BOS Shelf joint venture. The scope of work under the contracts includes: Transportation and installation of the entire SDC platform – a new 19,000-tonne platform to be installed in the Caspian Sea. Engineering, procurement, construction, and installation of subsea structures, including about 26 km of new offshore pipelines, to connect the SDC platform with existing Shah Deniz infrastructure. All onshore construction activities will be carried out at Baku Deep Water Jacket Factory, operated by BOS Shelf. Offshore construction and installation will be executed using the Khankendi subsea construction vessel owned by the Shah Deniz consortium, and the Israfil Huseynov pipelay barge owned by the Azerbaijan Caspian Shipping Co. (ASCO). Both vessels will be operated by Saipem. Offshore activities are expected to begin with pin pile installation in third-quarter 2026, with completion targeted for 2029. Shah Deniz compression project The SDC project is expected to enable around 50 billion cu m of additional gas and about 25 million bbl of additional condensate production and export from Shah Deniz field, bp said in a project overview on its website. The project involves installation of an electrically-powered unmanned compression platform in 85

Read More »

Valeura Energy finds JV partner for Thrace deep gas play testing

Valeura Energy Inc. has entered into a new joint venture to explore for and develop hydrocarbons in the deep rights formations of the Thrace basin of northwest Türkiye. The agreement was entered into through a Valeura subsidiary, together with partner Pinnacle Turkey Inc., with a subsidiary of Transatlantic Petroleum LLC. “Despite our strategic pivot toward the Asia-Pacific region, we have maintained our conviction that the deep gas play we discovered in northwest Türkiye offers significant potential to add value to the company,” said Sean Guest, president and chief executive officer of Valeura, in a release Oct. 15. “Our drilling program from 2017 to 2019 demonstrated that there are multiple tcf of gas in place across Valeura’s lands in a deep tight gas play.  We drilled three wells into this play and tested 12 separate zones – every one of which flowed gas.  It is my hope that with a reinvigorated push to test the play, we will see this evolve into a commercial success,” he continued. Thrace deep gas play, Türkiye Valeura has held various blocks and operated in Türkiye for almost 15 years.  The company continues to hold the deep rights (below 2,500 m or a pressure gradient of 0.6 psi/ft, whichever is shallower) in various exploration licenses and production leases covering a total of 955 sq km (gross) in the Thrace basin, just west of Istanbul.  The current exploration phase for most of the acreage (lands held under exploration license) expires June 27, 2026, but discussions are under way with the government for a 2-year appraisal period extension, the company said.

Read More »

EIA: US crude inventories up 3.5 million bbl

US crude oil inventories for the week ended Oct. 10, excluding the Strategic Petroleum Reserve, increased by 3.5 million bbl from the previous week, according to data from the US Energy Information Administration (EIA). While the federal government remains shutdown, the data was released 1 day later than usual due to the EIA’s regular holiday schedule. At 423.8 million bbl, US crude oil inventories are about 4% below the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories decreased by 300,000 bbl from last week and are slightly below the 5-year average for this time of year. Finished gasoline inventories and blending components inventories both decreased last week. Distillate fuel inventories decreased by 4.5 million bbl last week and are about 7% below the 5-year average for this time of year. Propane-propylene inventories increased by 1.9 million bbl from last week and are 11% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 15.1 million b/d for the week ended Oct. 10, about 1.2 million b/d less than the previous week’s average. Refineries operated at 85.7% of capacity. Gasoline production decreased, averaging 9.4 million b/d. Distillate fuel production decreased by 577,000 b/d, averaging 4.6 million b/d. US crude oil imports averaged 5.5 million b/d, down 878,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged 6.1 million b/d, 2.4% less than the same 4-week period last year. Total motor gasoline imports averaged 532,000 b/d. Distillate fuel imports averaged 160,000 b/d.

Read More »

Falling Saudi oil demand highlights power generation progress

During the first 7 months of 2025, combined demand for direct crude burn, fuel oil, and gasoil declined by nearly 100,000 b/d year on year, most noticeably in the summer months, when electricity demand typically peaks. The reduction came despite a 1.6% rise in cooling degree days (CDDs) and continued demographic pressures, with the working-age population expected to grow by roughly 6% this year. “While the availability of prompt data relating to electricity and natural gas is comparatively limited, the most likely driver of the fall in oil use is rising power output from other sources, especially gas. Increasing natural gas supply and utilization has long been a focus for the Saudi energy sector and the Jafurah project, with production beginning later this year, is expected to significantly boost gas (and NGLs) output during the rest of this decade,” IEA said. IEA expects this to enable a major reduction in oil use for electricity production, in a resumption of the substantial declines achieved during the late 2010s. “While monthly data can be volatile, the figures reported for June and July suggest that this progress may be outpacing the medium-term trajectory included in our Oil 2025 report, which already saw Saudi Arabian oil demand dropping by more than any country by 2030.” August and September temperatures were broadly consistent with recent seasonal norms, and CDDs were essentially flat year on year. In recent years, use of power plant input products has been less than half as responsive to underlying cooling requirements than it was during 2010-2016, and ‘base load’ winter deliveries appear to have fallen by more than 100,000 b/d since the pandemic. “Barring an unusually hot October and November, it is likely that total 2025 Saudi oil consumption will drop slightly, despite strong rises in GDP and population. With accelerating

Read More »

ExxonMobil Guyana advances to Phase 2 for Hammerhead FPSO

ExxonMobil Guyana Ltd. has let a Phase 2 contract to MODEC Inc. for a floating production storage, and offloading (FPSO) vessel for the Hammerhead project. The contract is for a full engineering, procurement, construction, and installation (EPCI) scope of work and follows the Phase One front-end engineering and design (FEED) contract. In April 2025, MODEC received a limited notice to proceed (LNTP) enabling it to commence FPSO design activities to support the earliest possible startup in 2029, subject to required government approvals. Phase One has since been completed and MODEC is advancing Phase Two. The Hammerhead FPSO will have the capacity to produce 150,000 b/d of oil, along with associated gas and water. It will be moored at a water depth of about 1,025 m using a spread mooring system. Hammerhead will be MODEC’s second FPSO for use in Guyana, following Errea Wittu, which is being built for ExxonMobil Guyana’s Uaru project. As with Uaru , MODEC will provide ExxonMobil with operations and maintenance services for the FPSO for 10 years from first oil.

Read More »

AMD Scales the AI Factory: 6 GW OpenAI Deal, Korean HBM Push, and Helios Debut

What 6 GW of GPUs Really Means The 6 GW of accelerator load envisioned under the OpenAI–AMD partnership will be distributed across multiple hyperscale AI factory campuses. If OpenAI begins with 1 GW of deployment in 2026, subsequent phases will likely be spread regionally to balance supply chains, latency zones, and power procurement risk. Importantly, this represents entirely new investment in both power infrastructure and GPU capacity. OpenAI and its partners have already outlined multi-GW ambitions under the broader Stargate program; this new initiative adds another major tranche to that roadmap. Designing for the AI Factory Era These upcoming facilities are being purpose-built for next-generation AI factories, where MI450-class clusters could drive rack densities exceeding 100 kW. That level of compute concentration makes advanced power and cooling architectures mandatory, not optional. Expected solutions include: Warm-water liquid cooling (manifold, rear-door, and CDU variants) as standard practice. Facility-scale water loops and heat-reuse systems—including potential district-heating partnerships where feasible. Medium-voltage distribution within buildings, emphasizing busway-first designs and expanded fault-current engineering. While AMD has not yet disclosed thermal design power (TDP) specifications for the MI450, a 1 GW campus target implies tens of thousands of accelerators. That scale assumes liquid cooling, ultra-dense racks, and minimal network latency footprints, pushing architectures decisively toward an “AI-first” orientation. Design considerations for these AI factories will likely include: Liquid-to-liquid cooling plants engineered for step-function capacity adders (200–400 MW blocks). Optics-friendly white space layouts with short-reach topologies, fiber raceways, and aisles optimized for module swaps. Substation adjacency and on-site generation envelopes negotiated during early land-banking phases. 
Networking, Memory, and Power Integration As compute density scales, networking and memory bottlenecks will define infrastructure design. Expect fat-tree and dragonfly network topologies, 800 G–1.6 T interconnects, and aggressive optical-module roadmaps to minimize collective-operation latency, aligning with recent disclosures from major networking vendors.

Read More »

Study Finds $4B in Data Center Grid Costs Shifted to Consumers Across PJM Region

In a new report spanning 2022 through 2024, the Union of Concerned Scientists (UCS) identifies a significant regulatory gap in the PJM Interconnection’s planning and rate-making process—one that allows most high-voltage (“transmission-level”) interconnection costs for large, especially AI-scale, data centers to be socialized across all utility customers. The result, UCS argues, is a multi-billion-dollar pass-through that is poised to grow as more data center projects move forward, because these assets are routinely classified as ordinary transmission infrastructure rather than customer-specific hookups. According to the report, between 2022 and 2024, utilities initiated more than 150 local transmission projects across seven PJM states specifically to serve data center connections. In 2024 alone, 130 projects were approved with total costs of approximately $4.36 billion. Virginia accounted for nearly half that total—just under $2 billion—followed by Ohio ($1.3 billion) and Pennsylvania ($492 million) in data-center-related interconnection spending. Yet only six of those 130 projects, about 5 percent, were reported as directly paid for by the requesting customer. The remaining 95 percent, representing more than $4 billion in 2024 connection costs, were rolled into general transmission charges and ultimately recovered from all retail ratepayers. How Does This Happen? When data center project costs are discussed, the focus is usually on the price of the power consumed, or megawatts multiplied by rate. What the UCS report isolates, however, is something different: the cost of physically delivering that power: the substations, transmission lines, and related infrastructure needed to connect hyperscale facilities to the grid. So why aren’t these substantial consumer-borne costs more visible? The report identifies several structural reasons for what effectively functions as a regulatory loophole in how development expenses are reported and allocated: Jurisdictional split. 
High-voltage facilities fall under the Federal Energy Regulatory Commission (FERC), while retail electricity rates are governed by state public utility

Read More »

OCP Global Summit 2025 Highlights: Advancing Data Center Densification and Security

With the conclusion of the 2025 OCP Global Summit, William G. Wong, Senior Content Director at DCF’s sister publications Electronic Design and Microwaves & RF, published a comprehensive roundup of standout technologies unveiled at the event. For Data Center Frontier readers, we’ve revisited those innovations through the lens of data center impact, focusing on how they reshape infrastructure design and operational strategy. This year’s OCP Summit marked a decisive shift toward denser GPU racks, standardized direct-to-chip liquid cooling, 800-V DC power distribution, high-speed in-rack fabrics, and “crypto-agile” platform security. Collectively, these advances aim to accelerate time-to-capacity, reduce power-distribution losses at megawatt rack scales, simplify retrofits in legacy halls, and fortify data center platforms against post-quantum threats. Rack Design and Cooling: From Ad-Hoc to Production-Grade Liquid Cooling NVIDIA’s Vera Rubin compute tray, newly offered to OCP for standardization, packages Rubin-generation GPUs with an integrated liquid-cooling manifold and PCB midplane. Compared with the GB300 tray, Vera Rubin represents a production-ready module delivering four times the memory and three times the memory bandwidth: a 7.5× performance factor at rack scale, with 150 TB of memory at 1.7 PB/s per rack. The system implements 45 °C liquid cooling, a 5,000-amp liquid-cooled busbar, and on-tray energy storage with power-resilience features such as flexible 100-amp whips and automatic-transfer power-supply units. NVIDIA also previewed a Kyber rack generation targeted for 2027, pivoting from 415/480 VAC to 800 V DC to support up to 576 Rubin Ultra GPUs, potentially eliminating the 200-kg copper busbars typical today. These refinements are aimed at both copper reduction and aisle-level manageability. Wiwynn’s announcements filled in the practicalities of deploying such densities. 
The company showcased rack- and system-level designs across NVIDIA GB300 NVL72 (72 Blackwell Ultra GPUs with 800 Gb/s ConnectX-8 SuperNICs) for large-scale inference and reasoning, and HGX B300 (eight GPUs /

Read More »

Storage constraints add to AI data center bottleneck

AI deployment uses multiple storage layers, and each one has different requirements, says Dell’Oro’s Fung. For storing massive amounts of unstructured, raw data, cold storage on HDDs makes more sense, he says. SSDs make sense for warm storage, such as for pre-processing data and for post-training and inference. “There’s a place for each type of storage,” he says. Planning ahead According to Constellation’s Mehta, data center managers and other storage buyers should prepare by treating SSD procurement like they do GPUs. “Multi-source, lock in lanes early, and engineer to standards so vendor swaps don’t break your data path.” He recommends qualifying at least two vendors for both QLC and TLC and starting early. TrendForce’s Ao agrees. “It is better to build inventory now,” he says. “It is difficult to lock-in long term deals with suppliers now due to tight supply in 2026.” Based on suppliers’ availability, Kioxia, SanDisk, and Micron are in the best position to support 128-terabyte QLC enterprise SSD solutions, Ao says. “But in the longer term, some module houses may be able to provide similar solutions at a lower cost,” Ao adds. “We are seeing more module houses, such as Phison and Pure Storage, supporting these solutions.” And it’s not just SSD for fast storage and HDD for slow storage. Memory solutions are becoming more complex in the AI era, says Ao. “For enterprise players with smaller-scale business models, it is important to keep an eye on Z-NAND and XL-Flash for AI inference demand,” he says. These are memory technologies that sit somewhere between the SSDs and the RAM working memory. “These solutions will be more cost-effective compared to HBM or even HBF [high bandwidth flash],” he says.

Read More »

AI gold rush sparks backlash against Core Scientific acquisition

Meanwhile, in a release issued last week, CoreWeave stated, “it has been unequivocal — to Core Scientific and publicly — that we will not modify our offer. Our offer is best and final.” Alvin Nguyen, senior analyst at Forrester Research, said what happens next with the overall data center market “depends on when AI demand slows down (when the AI bubble bursts).” He added, “if AI demand continues, prices continue to go up, and data centers change in terms of preferred locations (cooler climates, access to water, lots of space, more remote), use of microgrids/energy production, expect [major] players to continue to dominate.” However, said Nguyen, “if that slowdown is soon, then prices will drop, and the key players will need to either unload property or hold onto them until AI demand builds back up.” Generational shift occurring Asked what the overall effect of AI will be on CIOs in need of data center capacity, he said, “the new AI mega-factories alter data center placement: you don’t put them near existing communities because they demand too much power, water, land, you build them somewhere remote, and communities will pop up around them.” Smaller data centers, said Nguyen, “will still consume power and water in contention with their neighbors (industrial, commercial, and residential), potential limiting their access or causing costs to rise. CIOs and Network World readers should evaluate the trade offs/ROI of not just competing for data center services, but also for being located near a new data center.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year saw rapid innovation, and this year will see the same, which makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs.

At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
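The LLM-as-judge ensemble idea described above can be sketched in a few lines. This is a minimal illustration only, not any vendor's API: `ensemble_judge` and the stub judges below are hypothetical stand-ins for real graded model calls, included so the majority-vote logic is concrete.

```python
from collections import Counter
from typing import Callable, List

def ensemble_judge(answer: str, judges: List[Callable[[str], str]]) -> str:
    """Ask several judge models for a verdict and return the majority vote.

    Each judge is any callable mapping an answer to "pass" or "fail";
    in practice each would wrap a cheap LLM call with a grading prompt.
    """
    verdicts = [judge(answer) for judge in judges]
    winner, _count = Counter(verdicts).most_common(1)[0]
    return winner

# Hypothetical stub judges standing in for real model calls.
def url_judge(answer: str) -> str:
    # e.g. flag answers containing possibly hallucinated URLs
    return "fail" if "http" in answer else "pass"

def length_judge(answer: str) -> str:
    return "pass" if len(answer) > 10 else "fail"

def lenient_judge(answer: str) -> str:
    return "pass"

judges = [url_judge, length_judge, lenient_judge]
print(ensemble_judge("The capital of France is Paris.", judges))  # → pass
```

Because the judges are independent callables, swapping in three different cheap models (or three different grading prompts) changes nothing in the voting logic.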

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »