The risks of AI-generated code are real — here’s how enterprises can manage them

Not that long ago, humans wrote almost all application code. But that’s no longer the case: The use of AI tools to write code has expanded dramatically. Some experts, such as Anthropic CEO Dario Amodei, expect that AI will write 90% of all code within the next 6 months.

Against that backdrop, what is the impact for enterprises? Code development practices have traditionally involved various levels of control, oversight and governance to help ensure quality, compliance and security. With AI-developed code, do organizations have the same assurances? Perhaps even more importantly, do they know which models generated their code?

Understanding where code comes from is not a new challenge for enterprises. That’s where source code analysis (SCA) tools fit in. Historically, SCA tools have not provided insight into AI, but that’s now changing. Multiple vendors, including Sonar, Endor Labs and Sonatype, are now providing different types of insights that can help enterprises with AI-developed code.

“Every customer we talk to now is interested in how they should be responsibly using AI code generators,” Sonar CEO Tariq Shaukat told VentureBeat.

Financial firm suffers one outage a week due to AI-developed code

AI tools are not infallible. Many organizations learned that lesson early on, when content generation tools produced inaccurate results known as hallucinations.

The same basic lesson applies to AI-developed code. As organizations move from experimental mode into production mode, they have increasingly come to the realization that AI-generated code can be very buggy. Shaukat noted that it can also lead to security and reliability issues. The impact is real, and it’s not trivial.

“I had a CTO, for example, of a financial services company about six months ago tell me that they were experiencing an outage a week because of AI generated code,” said Shaukat.

When Shaukat asked the customer whether it was doing code reviews, the answer was yes. That said, the developers didn’t feel anywhere near as accountable for the code, and were not applying as much time and rigor to it, as they had previously.

The reasons code ends up being buggy, especially for large enterprises, vary. One particularly common issue, though, is that enterprises often have large code bases with complex architectures that an AI tool might not know about. In Shaukat’s view, AI code generators don’t generally deal well with the complexity of larger and more sophisticated code bases.

“Our largest customer analyzes over 2 billion lines of code,” said Shaukat. “You start dealing with those code bases, and they’re much more complex, they have a lot more tech debt and they have a lot of dependencies.”

The challenges of AI-developed code

To Mitchell Johnson, chief product development officer at Sonatype, it is also very clear that AI-developed code is here to stay.

Software developers must follow what he calls the engineering Hippocratic Oath. That is, to do no harm to the codebase. This means rigorously reviewing, understanding and validating every line of AI-generated code before committing it — just as developers would do with manually written or open-source code. 

“AI is a powerful tool, but it does not replace human judgment when it comes to security, governance and quality,” Johnson told VentureBeat.

The biggest risks of AI-generated code, according to Johnson, are:

  • Security risks: AI is trained on massive open-source datasets, often including vulnerable or malicious code. If unchecked, it can introduce security flaws into the software supply chain.
  • Blind trust: Developers, especially less experienced ones, may assume AI-generated code is correct and secure without proper validation, leading to unchecked vulnerabilities.
  • Compliance and context gaps: AI lacks awareness of business logic, security policies and legal requirements, making compliance and performance trade-offs risky.
  • Governance challenges: AI-generated code can sprawl without oversight. Organizations need automated guardrails to track, audit and secure AI-created code at scale.

“Despite these risks, speed and security don’t have to be a trade-off,” said Johnson. “With the right tools, automation and data-driven governance, organizations can harness AI safely — accelerating innovation while ensuring security and compliance.”
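
To make Johnson’s point about automation concrete, one common guardrail is to check every dependency that AI-suggested code pulls in against a public vulnerability database before it is merged. The sketch below is illustrative rather than any vendor’s implementation: it queries the real OSV.dev API, but the package pin and the reject policy are hypothetical.

```python
"""Hypothetical pre-merge guardrail: vet dependencies that AI-suggested
code introduces against the public OSV vulnerability database.
The OSV endpoint is real; the package pin below is illustrative."""
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return OSV advisory records for one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Hypothetical AI-suggested pin: block the merge if advisories exist.
for pkg, ver in [("jinja2", "2.4.1")]:
    advisories = known_vulns(pkg, ver)
    if advisories:
        print(f"{pkg}=={ver}: {len(advisories)} known advisories, reject")
```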

Models matter: Identifying open source model risk for code development

There are a variety of models organizations are using to generate code. Anthropic’s Claude 3.7, for example, is a particularly powerful option. Google Code Assist, OpenAI’s o3 and GPT-4o models are also viable choices.

Then there’s open source. Vendors such as Meta and Qodo offer open-source models, and there is a seemingly endless array of options available on Hugging Face. Karl Mattson, Endor Labs CISO, warned that these models pose security challenges that many enterprises aren’t prepared for.

“The systematic risk is the use of open source LLMs,” Mattson told VentureBeat. “Developers using open-source models are creating a whole new suite of problems. They’re introducing into their code base using sort of unvetted or unevaluated, unproven models.”

Unlike commercial offerings from companies like Anthropic or OpenAI, which Mattson describes as having “substantially high quality security and governance programs,” open-source models from repositories like Hugging Face can vary dramatically in quality and security posture. Mattson emphasized that rather than trying to ban the use of open-source models for code generation, organizations should understand the potential risks and choose appropriately.

Endor Labs can help organizations detect when open-source AI models, particularly from Hugging Face, are being used in code repositories. The company’s technology also evaluates these models across 10 attributes of risk, including operational security, ownership, utilization and update frequency, to establish a risk baseline.
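
As a rough illustration of how per-attribute scores like those might roll up into a single baseline, consider the sketch below. The four weighted attributes are the ones named above; the weights, the 0-to-1 scale and the model name are invented for this example, and this is not Endor Labs’ actual scoring model.

```python
"""Illustrative only: aggregate per-attribute scores into a model risk
baseline, loosely mirroring the multi-attribute approach described above.
Weights, scale and model name are invented for the sketch."""
from dataclasses import dataclass

# 0.0 = lowest risk, 1.0 = highest risk for every attribute.
WEIGHTS = {
    "operational_security": 0.35,  # attribute named in the article
    "ownership": 0.25,             # attribute named in the article
    "utilization": 0.20,           # attribute named in the article
    "update_frequency": 0.20,      # attribute named in the article
}

@dataclass
class ModelAssessment:
    name: str
    scores: dict  # attribute -> score in [0, 1]

    def baseline(self) -> float:
        """Weighted average across the assessed attributes."""
        return sum(WEIGHTS[attr] * s for attr, s in self.scores.items())

hf_model = ModelAssessment(
    name="example-org/example-coder",  # hypothetical Hugging Face model
    scores={
        "operational_security": 0.6,
        "ownership": 0.4,
        "utilization": 0.2,
        "update_frequency": 0.7,
    },
)
print(f"{hf_model.name}: risk baseline {hf_model.baseline():.2f}")
```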

Specialized detection technologies emerge

To deal with these emerging challenges, SCA vendors have released a number of different capabilities.

For instance, Sonar has developed an AI code assurance capability that can identify code patterns unique to machine generation. The system can detect when code was likely AI-generated, even without direct integration with the coding assistant. Sonar then applies specialized scrutiny to those sections, looking for hallucinated dependencies and architectural issues that wouldn’t appear in human-written code.
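
Sonar’s detection logic is proprietary, but one narrow slice of the problem, the hallucinated dependency, is easy to picture: an import that resolves to no installed or standard-library module. A minimal sketch of that single check, which is not Sonar’s method, might look like this.

```python
"""Minimal sketch of one narrow check, not Sonar's method: flag imports
in a Python source file that resolve to no installed or standard-library
module, which is how a hallucinated dependency typically surfaces."""
import ast
import importlib.util
import sys

def unresolved_imports(path: str) -> set:
    """Top-level imported module names that cannot be found locally."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    roots = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            roots.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            roots.add(node.module.split(".")[0])
    # Anything that importlib cannot locate is a candidate hallucination.
    return {m for m in roots if importlib.util.find_spec(m) is None}

if __name__ == "__main__":
    for missing in sorted(unresolved_imports(sys.argv[1])):
        print(f"possible hallucinated dependency: {missing}")
```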

Endor Labs and Sonatype take a different technical approach, focusing on model provenance. Sonatype’s platform can be used to identify, track and govern AI models alongside their software components. Endor Labs can also identify when open-source AI models are being used in code repositories and assess the potential risk.

When implementing AI-generated code in enterprise environments, organizations need structured approaches to mitigate risks while maximizing benefits. 

There are several key best practices that enterprises should consider, including:

  • Implement rigorous verification processes: Shaukat recommends that organizations have a rigorous process around understanding where code generators are used in specific parts of the code base. This is necessary to ensure the right level of accountability and scrutiny of generated code (a minimal sketch of one such process follows this list).
  • Recognize AI’s limitations with complex codebases: While AI-generated code can easily handle simple scripts, it can struggle with complex code bases that have a lot of dependencies.
  • Understand the unique issues in AI-generated code: Shaukat noted that while AI avoids common syntax errors, it tends to create more serious architectural problems through hallucinations. Code hallucinations can include making up a variable name or a library that doesn’t actually exist.
  • Require developer accountability: Johnson emphasizes that AI-generated code is not inherently secure. Developers must review, understand and validate every line before committing it.
  • Streamline AI approval: Johnson also warns of the risk of shadow AI, or uncontrolled use of AI tools. Many organizations either ban AI outright (which employees ignore) or create approval processes so complex that employees bypass them. Instead, he suggests businesses create a clear, efficient framework to evaluate and greenlight AI tools, ensuring safe adoption without unnecessary roadblocks.
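
One lightweight way to combine the tracking, accountability and approval points above is to require every commit to declare whether, and with which tool, AI assisted the change, so reviewers know where to apply extra scrutiny. The commit-msg hook below sketches that policy; the trailer name and the rule itself are invented for illustration, not a practice any vendor quoted here prescribes.

```python
"""Hypothetical git commit-msg hook (.git/hooks/commit-msg): require an
'AI-Assisted:' trailer naming the code generator, or 'none' to attest
the change is hand-written. Trailer name and policy are invented."""
import re
import sys

TRAILER = re.compile(r"^AI-Assisted:\s*\S+", re.MULTILINE)

def main(msg_file: str) -> int:
    with open(msg_file, encoding="utf-8") as f:
        message = f.read()
    if TRAILER.search(message):
        return 0  # provenance declared; reviewers can route extra scrutiny
    sys.stderr.write(
        "commit rejected: add an 'AI-Assisted: <tool>' trailer,\n"
        "or 'AI-Assisted: none' to attest the change is hand-written.\n"
    )
    return 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```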

What this means for enterprises

The risk of shadow AI code development is real.

The volume of code that organizations can produce with AI assistance is dramatically increasing and could soon comprise the majority of all code.

The stakes are particularly high for complex enterprise applications where a single hallucinated dependency can cause catastrophic failures. For organizations looking to adopt AI coding tools while maintaining reliability, implementing specialized code analysis tools is rapidly shifting from optional to essential.

“If you’re allowing AI-generated code in production without specialized detection and validation, you’re essentially flying blind,” Mattson warned. “The types of failures we’re seeing aren’t just bugs — they’re architectural failures that can bring down entire systems.”
