Cisco launches dedicated wireless certification track

CCIE Wireless

The CCIE Wireless certification validates networking professionals’ ability to “maximize the potential of any enterprise wireless solution from designing and deploying to operating and optimizing,” Cisco says.

“Our Cisco CCIE Wireless certification also reflects the growth and evolution of wireless technologies. It includes Cisco’s cloud-based network management solution, Meraki, and platform-agnostic technologies such as Wi-Fi 6 and Wi-Fi 7,” Richter said in his blog.

While there are no formal prerequisites for this certification, Cisco recommends that candidates have five to seven years of experience designing, deploying, operating, and optimizing enterprise wireless technologies before taking the exam. Candidates must pass both the core exam and a hands-on lab exam.

With the two wireless-specific certifications in place, the CCNP Enterprise core exam will no longer cover wireless concepts, and professionals will not need to validate wireless skills to earn the CCNP Enterprise certification. Cisco notes that the wireless and enterprise certifications are complementary, and professionals can choose to validate their skills with both.

The first test date for the CCNP Wireless and CCIE Wireless certifications is scheduled for March 19, 2026. Cisco offers exam preparation materials through the Cisco Learning Network.
