A New York legislator wants to pick up the pieces of the dead California AI bill

The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

Assembly member Alex Bores hopes his bill, currently an unpublished draft that MIT Technology Review has seen and that remains subject to change, will address many of the concerns that blocked SB 1047 from passing into law.

SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that, if committed by a human, would constitute a crime requiring intent, recklessness, or gross negligence.

The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

A different flavor of bill

The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

The RAISE Act avoids the fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

Focusing on the wrong issues?

Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

“But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 10²⁶ FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 10²⁵ FLOPs. 
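
For a sense of scale, the gap between GPT-4's estimated training compute and the bill's threshold is roughly a factor of 10. Below is a minimal sketch, in Python, of how that coverage test works on paper. The two thresholds follow the draft as described above; the six-FLOPs-per-parameter-per-token training estimate is a standard rule of thumb, not anything in the bill, and the example model is hypothetical.

```python
# Illustrative sketch only: thresholds are as described in the draft bill;
# the 6 * params * tokens FLOP estimate is a common rule of thumb (not part
# of the legislation), and the example model below is hypothetical.

RAISE_FLOP_THRESHOLD = 1e26          # compute line: more than 10^26 training FLOPs
RAISE_COST_THRESHOLD = 100_000_000   # cost line: more than $100 million to train

def training_flops(parameters: float, tokens: float) -> float:
    """Rough transformer training cost: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

def is_covered_model(parameters: float, tokens: float, cost_usd: float) -> bool:
    """Covered only if the run exceeds BOTH the compute and cost thresholds."""
    return (training_flops(parameters, tokens) > RAISE_FLOP_THRESHOLD
            and cost_usd > RAISE_COST_THRESHOLD)

# Hypothetical run: 1 trillion parameters on 20 trillion tokens, $300M budget.
flops = training_flops(1e12, 2e13)   # ~1.2e26 FLOPs, just over the line
print(f"{flops:.1e} FLOPs -> covered: {is_covered_model(1e12, 2e13, 3e8)}")
```

By that heuristic, only a training run roughly an order of magnitude beyond GPT-4's would trip both thresholds, consistent with Bores' point that no existing model would be covered.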

This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

Early days

The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

When asked about the idea of mandated safety plans for AI companies, assembly member Edward Ra, a Republican who hasn't yet seen a draft of the new bill, said: "I don't have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing." 

Ra and Bores co-chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

“There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”
