Ai4 2025 Navigates Rapid Change in AI Policy, Education

The pace of innovation in artificial intelligence is fundamentally reshaping the landscape of education, and the changes are happening rapidly. At the forefront of this movement stood developers, policymakers, educational practitioners, and associated experts at the recent Ai4 2025 conference (Aug. 11-13) in Las Vegas, where leading voices such as Geoffrey Hinton, “the Godfather of AI,” top executives from Google and U.S. Bank, and representatives from multiple government agencies gathered to chart the future of AI development. Importantly, educators and academic institutions played a central role, ensuring that the approach to AI in schools is informed by those closest to the classroom.

Key discussions at Ai4 and recent educator symposia underscored both the promise and peril of swift technological change. Generative AI, with its lightning-fast adoption since the advent of tools like ChatGPT, is opening new possibilities for personalized learning, skills development, and operational efficiency. But participants were quick to note that such acceleration brings both benefits and risks. On one hand, there’s excitement about practical classroom implementations and the potential for students to engage with cutting-edge technology. On the other, concerns about governance, ethics, safety, and the depth of genuine learning remain at the forefront.

This urgency to “do this right” is echoed by teachers, unions, and developers who are united by the challenges and opportunities on the ground. Their voices highlight the need for agreement on education policy and associated regulations to keep pace with technological progress, create frameworks for ethical and responsible use, and ensure that human agency remains central in shaping the future of childhood and learning. In this rapidly evolving environment, bringing all stakeholders to the table is no longer optional; it is essential for steering AI in education toward outcomes that benefit both students and society.

Global Context: America, China, and the AI Race

Competition in artificial intelligence today is not simply a matter of technological prowess; it’s a contest over security, economic strength, and the values embedded in the code that will govern our very world. America approaches this frontier with characteristic boldness, investing not only in innovation and capacity but also in sustaining its edge as the dominant player.

Look to the figures: In 2024, US private investment in AI reached $109.1 billion, dwarfing China’s $9.3 billion. Forty notable US AI models debuted against China’s fifteen, and America retained control of 75% of global AI supercomputing power, essential for both civilian and national security applications. Washington’s latest AI Action Plan poured resources into deregulating data center growth, expanding domestic chipmaking, and boosting the all-important talent pipeline. All the while, export controls throttle China’s ambitions by bottlenecking access to high-end chips.1

But the race is close and the risks real. China, for its part, has pivoted to industrial-scale mobilization, with massive semiconductor funds, a thickening domestic talent pool, and a relentless focus on practical implementation: self-driving cars, smart cities, and data science. Where American innovation is often commercialized by private-sector champions, China leverages government coordination and open-source diffusion. Their models are cost-efficient, sometimes only months behind their American rivals, and their adoption abroad, sometimes unburdened by the tight regulatory or ethical constraints that characterize Western deployments, shows just how quickly digital sovereignty can shift.2

The specter of AI nationalism looms. When software and strategic algorithms are guided primarily by the interests and character of one nation, the risk multiplies: fractured standards, weakened interoperability, and a digital divide reminiscent of Cold War lines. Europe, meanwhile, pushes for harmonized regulation, but its fragmented approach cannot match the sheer scale of American or Chinese efforts. The global market hangs in the balance, shaped by the decisions of the two leading titans and partly influenced by the regulatory efforts of other major players.

This innovative landscape calls for clear-eyed realism, especially to temper the extreme headline reactivity across the market. The purpose of government in all domains is the preservation of liberty, security, and economic vitality; AI is no exception. More than a profit center or an innovation novelty, the technology is plainly a matter of national security. If the AI race is led by a country whose priorities and values are at odds with our own, where speech is curtailed, privacy is compromised, and justice is subservient to the state, to say nothing of the potential for terrorism, there is no guarantee that technological progress will translate into human flourishing.

Today, America remains at the forefront, not because regulation dictated excellence, but because the free interplay of capital, talent, ideas, and ambition continues to deliver results. Maintaining this lead demands vigilance against creeping complacency and self-sabotage. The world is watching, and so are the future generations for whom this arms race in code, hardware, and human capital will be either shield or shackle.

Charting the Future: America’s AI Action Plan and Its Regulatory Impact on Education

America’s AI Action Plan, unveiled in July 2025, is the federal government’s most sweeping initiative to redefine the country’s technological and regulatory leadership. Structured around three pillars (Accelerating AI Innovation, Building American AI Infrastructure, and Leading International AI Diplomacy and Security), the plan directs every federal agency to dismantle regulatory barriers that could slow adoption of advanced AI, particularly in fields like education, healthcare, and manufacturing. This deregulatory push is not just about reducing red tape; it is an overt strategy to safeguard US primacy in the global AI race and ensure national competitiveness far into the future.

Key actions in the plan include aggressive rollbacks of federal regulations that might impede AI rollout in classrooms, as well as making federal funding contingent on states’ willingness to support innovation-friendly environments. The plan incentivizes the private sector and academic institutions to develop open, American-led AI models and propagate these systems domestically and with allied nations.

Educational policy is affected in several ways. With more flexible, less prescriptive oversight, states and districts are empowered to fast-track adoption of AI-driven curricula, diagnostics, and administrative systems, while also triggering a surge in investment in teacher training and AI research. This conversation took center stage at Ai4 in Las Vegas in mid-August.

For education, the stakes are high. These policy choices accelerate the integration of AI into classrooms, cementing America’s lead in developing, testing, and deploying educational technologies and giving students and teachers access to tools that foster personalized learning, critical thinking, and skills for tomorrow’s economy.

At the same time, the shift to minimal regulation places a premium on the wisdom and judgment of educators, local policymakers, and technology developers to set meaningful guardrails, champion best practices, and maintain focus on student welfare, equity, and opportunity. The outcome: a dynamic, pluralistic landscape where American innovation drives the future of learning, shaped by those closest to its challenges and its promise.

The Classroom Transformed: AI, Childhood, and the Future of Learning

Generative AI tools like ChatGPT and OpenAI’s education-specific platforms are rapidly reshaping the educational landscape, ushering in an unprecedented transformation in how teaching and learning occur. Adoption of these technologies has skyrocketed, driven by enthusiasm for their ability to personalize learning, automate routine tasks, and expand access to knowledge. This rapid uptake has created both hope and uncertainty within schools, sparking immediate and passionate responses from educators, parents, and policymakers alike.

From an industry-focused and objectivist perspective, this transformational moment embodies the purest ideals of innovation and individual empowerment. The market-driven rollout of AI in education equips teachers with powerful new tools, enabling them to enhance student engagement and deepen learning without relying on cumbersome federal mandates or bureaucratic paralysis. It respects the expertise and agency of educators—the humans in the loop—recognizing that no machine can replace the nuanced understanding, mentorship, and empathy that skilled teachers provide.

At the same time, critics rightly raise concerns around safety, ethics, and the development of critical thinking skills in students. The key challenge facing schools is not whether to adopt AI, but how to do so responsibly. Building ethical guardrails that protect children’s privacy, promote safe and respectful use, and prevent reliance on AI as a crutch rather than a catalyst for creativity is essential. Teachers are focused less on fears of plagiarism or cheating and more on ensuring that students continue to learn how to problem-solve, write, and think deeply, even as AI becomes a ubiquitous classroom assistant.

AI’s impact on special education exemplifies its promise and complexity. For children with individualized education programs (IEPs) or special needs, AI-powered tools can provide personalized instruction such as read-aloud functions that many schools cannot deliver at scale. This opens new avenues for accessibility and inclusion, enabling technology to bridge gaps that have long hindered educational equity.

Ultimately, the classrooms of the future will not be dominated by machines but enriched by a partnership between human insight and AI’s computational power. The future of learning depends on preserving teacher agency, ensuring that technology amplifies rather than replaces the human touch, and embracing innovation while guarding against overregulation that stifles progress. This balance is crucial to preparing children not just to use AI, but to thrive creatively and critically alongside it.

Bringing Everyone to the Table

The rapid evolution of AI in education demands a collaborative approach that brings together a diverse group of stakeholders, most notably educators, organized labor, technology developers, policymakers, and parents. Among these, the American Federation of Teachers (AFT) has played a particularly pivotal role, advocating strongly for responsible AI adoption that protects both students and teachers.

Microsoft and OpenAI recently partnered with the AFT to train teachers in the use of AI, signaling an openness to partnership in the face of technological change. The collaboration highlights the practical benefits of K-12 and higher education teachers working side by side with leading technology companies to deliver a united approach to shaping AI use in the education system.

Teachers demonstrated deep expertise and pragmatic insights that impressed technology creators, proving that educator input is not only valuable but essential in designing tools that genuinely meet classroom needs. This co-design dynamic ensures that AI platforms are not imposed top-down but crafted with frontline realities in mind, creating better, more effective educational technologies.

Beyond formal unions and developers, broad community buy-in is vital. Hundreds of educators have expressed strong interest in initiatives like the AI Institute in New York, signaling enthusiasm and readiness to engage with AI-driven tools. Parents, too, are key partners and stakeholders, advocating for safety, ethical standards, and safeguards that protect children’s well-being.

This collaborative ecosystem underscores the power of voluntary partnerships and mutual respect over heavy-handed government mandates. When educators, unions, parents, and the tech sector each contribute their perspectives, the resulting solutions embody both innovation and practicality, fostering not just the adoption of AI but a shared vision for its responsible integration into the future of learning.

Building Guardrails: Safety, Ethics, and Privacy in AI for Education

As AI tools become woven into daily classroom life, protections for children are paramount. Districts and states across America are moving quickly to publish official guidelines focused on safety, data privacy, and responsible use, often mandating transparency about how AI systems process student information and requiring educator oversight for any automated grading or content creation. These policies reflect a clear understanding: technology must serve human interests, not supplant them.

However, the regulatory landscape is fragmented. With no single national standard, state and district rules vary widely, and oversight gaps emerge, especially at the boundaries between technological neutrality, bullying prevention, and fair treatment for all students. Parents and educators remain vigilant, demanding humane, ethical practices that prioritize children’s welfare amid rapid technological change, while grappling with the complications of local versus federal authority and the persistent challenge of keeping the human element at the heart of learning.

Innovation, Industry Leadership, and Regulation

History has shown that technology moves faster than regulation. In artificial intelligence, this dynamic is magnified: each advance in generative AI, data infrastructure, and educational tools outpaces committee meetings and lengthy legislative debates. Industry-led regulation is not merely a theoretical principle but a practical necessity, proven effective in domains ranging from pharmaceuticals to internet standards.

Companies closest to the rapid development cycles possess the technical expertise and agility to craft guidelines that work in real time, adapting to complex new risks as they emerge—precisely what we are seeing with Microsoft’s work directly with the AFT.

Industry self-regulation is not without controversy. Advocates highlight its flexibility, cost-effectiveness, and ability to prevent government overreach that could stifle competition or drive innovation offshore. In the absence of swift government action, major AI players (Microsoft, Google, OpenAI, X/Grok AI) have embraced voluntary ethics standards, transparency commitments, and collaborative governance bodies.

Lessons from other sectors reveal that self-regulation works best when paired with clear economic incentives and public accountability. Critics argue that hybrid models, blending industry leadership with targeted oversight for public safety, offer a balance that protects individual rights without slowing innovation. 

The reality is that Congress lags on comprehensive lawmaking: copyright gaps, committee turf wars, and debates over state versus federal reach. Nevertheless, the most adaptive and effective standards arise from the marketplace itself, guided by direct stakeholder input and demand for trustworthy solutions.

Federal Oversight, State Authority, and Regulation Gaps

The role of government in regulating AI is a battleground of American federalism. The Tenth Amendment provides that powers not delegated to the federal government, nor prohibited to the states, are “reserved to the States respectively, or to the people.” This principle has repeatedly been invoked by the Trump Administration and upheld by the Supreme Court, in matters even more divisive than artificial intelligence.

Recent political maneuvers, such as the Trump Administration’s push for a decade-long federal moratorium on new state and local AI laws, show how contested this balance has become. While the House supported sweeping preemption, the Senate moved to strike the measure, reaffirming that the Constitution gives states broad authority in emerging tech domains unless Congress clearly legislates otherwise. State experimentation remains essential, as states adapt AI policies to local needs such as education, health, and privacy.

Congressional committees, meanwhile, confront real roadblocks in crafting effective regulation. With AI evolving at breakneck speed, copyright law has not kept pace, leaving legal gaps around the ownership of machine-generated works and thorny questions about intellectual property rights.

Multiple government committees stake their turf (Commerce, Judiciary, Education), each seeking influence over AI policy but constrained by legacy processes. In the absence of unified national statutes, industry remains the pragmatic venue for governance, as practitioners know the intimate needs of their own domains and use cases.

Flexible, context-aware industry self-regulation fills the void where government regulation is not pertinent or will not suffice. The fast pace and constant change in AI calls for standards made by innovators with significant skin in the game. Self-regulatory frameworks respond to technical risks and opportunities more rapidly, guided by technical expertise and market realities.

However, the most resilient solutions will blend adaptive industry standards for compliance and best practices with targeted oversight to protect the diverse interests of all parties without stifling creative progress or introducing bureaucratic drag.

Economic Implications: Taxation, Labor, and the Business Impact

The fact remains that all regulatory efforts have significant tax and business implications. AI companies face complex state and local tax obligations, especially when expanding or operating in multiple jurisdictions, with evolving standards for income, franchise, and sales tax.

AI’s acceleration has the potential to upend labor markets, academic priorities, and the tax code itself. As artificial intelligence redefines what jobs require, curriculum shifts have followed. Universities see soaring demand for computer science and data faculty while traditional English or business administration roles wane in relevance.

Unions and education advocates press for workforce development and retraining, keenly aware that even as some positions vanish, new technical and analytical jobs are appearing for those prepared to seize them. That labor churn has driven schools and industry alike to scale up hiring and professional development, positioning the US as a global leader in talent for the AI-powered economy.

On the fiscal side, regulatory oversight triggers a new wave of questions, such as whether and how AI should be taxed and who bears that burden. There is now open debate about targeted taxes on autonomous AI systems themselves.

Consider the case of using AI to book travel at a reduced cost. What is the impact on the Department of Transportation’s aviation security fees, passenger booking charges, and facility use tolls? Other possibilities to consider include payroll surcharges for jobs replaced by automation or differential tax rates for digital intellectual property.
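
To make the arithmetic behind that question concrete, here is a minimal sketch in Python of how an AI-negotiated fare reduction would interact with flat versus percentage-based travel charges. The fee names, rates, and fare amounts are illustrative assumptions for the sake of the example, not actual DOT fee schedules.

```python
# Hypothetical illustration: a flat fee is unaffected by a cheaper fare,
# while a percentage-based charge shrinks with it. All numbers are assumed.

def ticket_fees(base_fare: float,
                flat_security_fee: float = 5.60,     # assumed flat per-ticket fee
                pct_facility_charge: float = 0.075   # assumed 7.5% ad valorem charge
                ) -> dict:
    """Return a fee breakdown for a given base fare."""
    return {
        "base_fare": base_fare,
        "flat_security_fee": flat_security_fee,                        # does not scale
        "facility_charge": round(base_fare * pct_facility_charge, 2),  # scales with fare
    }

# Same trip booked at a human-quoted fare and at an AI-negotiated 20% discount.
human_quote = ticket_fees(400.00)
ai_quote = ticket_fees(320.00)

for label, quote in [("human-booked", human_quote), ("AI-booked", ai_quote)]:
    total_fees = quote["flat_security_fee"] + quote["facility_charge"]
    print(f"{label}: fare ${quote['base_fare']:.2f}, fees ${total_fees:.2f}")
```

Running the sketch shows the flat fee unchanged while the percentage-based charge drops with the fare, which is precisely the revenue question regulators would face if AI agents routinely drive prices down.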

In all cases, the question is the same: does the cost fall on producers, consumers, workers, or communities, and does it incentivize responsible innovation or stifle progress?

Building a Future of Creativity, Justice, and Freedom

The opportunity before us is to create a future that puts human ingenuity, justice, and liberty at the very center of how artificial intelligence is built and deployed. It is a call to design systems not just for efficiency or profit, but for the flourishing of society, the empowerment of children, and the continued leadership of America in innovation.

Randi Weingarten, President of the American Federation of Teachers, opened the Ai4 conference with words that capture the spirit that must guide our efforts: “build for a future of creativity, of freedom, of justice, and a society that works for all. Build as if you were building for your own children and their futures. Because if you build for that and all of America, we can make it the most just, most fair, most innovative, most creative, with the most entrepreneurs in the world.”

As developers, educators, and policymakers, the charge is clear: reconcile innovation, safety, and regulation in a way that never loses sight of the dignity and rights of every individual. As with ChatGPT queries and legislative efforts alike, we must remember that the “first draft is not the last,” humbly embracing future iterations and ongoing collaboration for responsible technical advancement.

At a time when regulation threatens both freedom and progress, Weingarten’s reminder resounds for this generation and the next: “For every freedom-loving libertarian in the room: fight the surveillance state, keep being the land of the free.”

References:

1. https://www.ai-hive.net/post/comparative-analysis-of-us-and-china-ai-infrastructure-and-development-a-2025-perspective

2. https://www.chinausfocus.com/finance-economy/us-and-chinese-ai-strategies-competing-global-approaches
