Networking terms and definitions

Data center infrastructure management (DCIM)

DCIM software gives data center operators real-time visibility into, and control over, a facility's physical infrastructure. Its key functions include the following (a minimal monitoring sketch in Python follows the list):

  • Monitoring: DCIM tools provide real-time visibility into the data center environment, tracking metrics like power consumption, temperature, humidity, and equipment status.  
  • Management: DCIM enables administrators to control and manage various aspects of the data center, including power distribution, cooling systems, and IT assets. 
  • Planning: DCIM facilitates capacity planning, helping data center operators understand current resource utilization and forecast future needs. 
  • Optimization: DCIM helps identify areas for improvement in energy efficiency, resource allocation, and overall operational efficiency. 
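
DCIM platforms differ, but the monitoring core is the same loop: collect readings, compare them to thresholds, raise alerts. Below is a minimal sketch of that idea in Python; the metric names and limits are illustrative assumptions, not any particular DCIM product's API.

```python
# Illustrative DCIM-style monitoring loop: poll environmental readings and
# flag anything outside configured thresholds. Metric names and limits are
# assumptions for the example only.
from dataclasses import dataclass

@dataclass
class Reading:
    rack: str
    metric: str      # e.g. "power_kw", "inlet_temp_c", "humidity_pct"
    value: float

THRESHOLDS = {
    "power_kw": (0.0, 17.0),       # per-rack power budget
    "inlet_temp_c": (18.0, 27.0),  # recommended inlet temperature range
    "humidity_pct": (20.0, 80.0),
}

def check(readings):
    alerts = []
    for r in readings:
        low, high = THRESHOLDS[r.metric]
        if not low <= r.value <= high:
            alerts.append(f"{r.rack}: {r.metric}={r.value} outside {low}-{high}")
    return alerts

sample = [
    Reading("rack-a1", "inlet_temp_c", 29.5),   # too warm -> alert
    Reading("rack-a1", "power_kw", 12.3),       # within budget
]
for alert in check(sample):
    print(alert)
```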

Data center sustainability

Data center sustainability is the practice of designing, building and operating data centers in a way that minimizes their environmental impact by reducing energy consumption, water usage and waste generation, while also promoting sustainable practices such as renewable energy and efficient resource management.

Hyperconverged infrastructure (HCI)

Hyperconverged infrastructure combines compute, storage and networking in a single system and is used frequently in data centers. Enterprises can choose an appliance from a single vendor or install hardware-agnostic hyperconvergence software on white-box servers.

Edge computing

Edge computing is a distributed computing architecture that brings computation and storage closer to the sources of data. That is, instead of sending all data to a centralized cloud or data center, processing occurs at or near the edge of the network, where devices like sensors, IoT devices or local servers process, analyze and retain the data. In short, it's about processing data closer to where it's generated, which minimizes latency, reduces bandwidth usage and enables real-time responses.

Edge AI

Edge AI is the deployment and execution of artificial intelligence (AI) algorithms on edge devices or local servers, rather than relying solely on cloud-based, more centralized, AI processing. This involves running machine learning models and AI applications directly on devices at the edge of the network. Some key aspects of edge AI include the following:

  • Local processing: AI calculations happen on the device.
  • Reduced latency: Faster responses due to not sending all data to a data center or cloud.
  • Privacy: Sensitive data can be processed locally.
  • Offline capabilities: AI functions can work even without constant internet connectivity.

Think of edge computing as the infrastructure and edge AI as the intelligence at the edge of the network.
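
As a rough illustration of that split, the sketch below runs a stand-in model locally and sends only summarized events upstream. The model loader and uplink function are hypothetical placeholders, not a specific edge framework's API.

```python
# Conceptual edge-AI pattern: infer on the device, keep raw data local, and
# ship only compact results upstream. load_local_model and send_summary are
# hypothetical placeholders rather than a real framework.

def load_local_model(path):
    # In practice this might load a TensorFlow Lite or ONNX model; here we
    # fake a tiny classifier so the sketch is self-contained and runnable.
    def classify(frame):
        return "anomaly" if sum(frame) % 7 == 0 else "normal"
    return classify

def send_summary(event):
    print(f"uplink -> {event}")   # stand-in for an MQTT or HTTPS call

def run_edge_loop(frames, model_path="model.tflite"):
    model = load_local_model(model_path)
    for i, frame in enumerate(frames):
        label = model(frame)       # low latency: no round trip to the cloud
        if label == "anomaly":     # only interesting events leave the device
            send_summary({"frame": i, "label": label})

run_edge_loop([[1, 2, 4], [3, 3, 2], [5, 1, 1]])
```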

Firewall

Network firewalls were created as the primary perimeter defense for most organizations, but since their creation the technology has spawned many iterations: proxy, stateful, web application and next-generation firewalls.

Next-generation firewall (NGFW)

Next-generation firewalls defend network perimeters and include features for inspecting traffic at a fine-grained level, including intrusion prevention, deep-packet inspection and SSL inspection, all integrated into a single system.

InfiniBand

InfiniBand is a high-performance interconnect technology designed to provide low-latency, high-bandwidth communication between servers, storage devices and other high-performance computing (HPC) components. It's particularly well suited for applications that require rapid data transfer, such as scientific computing, financial modeling and video rendering. Although highly specialized, InfiniBand's performance and scalability make it a valuable tool for organizations that require the highest levels of network performance, and it is commonly used in HPC clusters, data centers, supercomputers and scientific research.

Ethernet

Ethernet is one of the original networking technologies, invented 50 years ago. Despite its age, the communications protocol can incorporate modern advancements without losing backwards compatibility, and Ethernet continues to reign as the de facto standard for computer networking. As artificial intelligence (AI) workloads increase, network industry giants are teaming up to ensure Ethernet networks can keep pace and satisfy AI's high-performance networking requirements. At its core, Ethernet is a protocol that allows computers (from servers to laptops) to talk to each other over wired networks that use devices like routers, switches and hubs to direct traffic. Ethernet works seamlessly with wireless protocols, too.

Internet

The internet is a global network of computers using internet protocol (IP) to communicate globally via switches and routers deployed in a cooperative network designed to direct traffic efficiently and to provide resiliency should some part of the internet fail.

Internet backbone

Tier 1 internet service providers (ISP) mesh their high-speed fiber-optic networks together to create the internet backbone, which moves traffic efficiently among geographic regions.

IP address

An IP address is a unique set of numbers, or combination of letters and numbers, assigned to each device on an IP network so that switches and routers can deliver packets to the correct destinations.

PaaS, NaaS, IaaS and IDaaS

Platform as a service (PaaS): In PaaS, a cloud provider delivers a platform for developers to build, run and manage applications. It includes the operating system, programming languages, database and other development tools. This allows developers to focus on building applications without worrying about the underlying infrastructure.

Network as a service (NaaS): NaaS is a cloud-based service that provides network infrastructure, such as routers, switches and firewalls, as a service. This allows organizations to access and manage their network resources through a cloud-based platform.

Infrastructure as a service (IaaS): IaaS provides the building blocks of cloud computing — servers, storage and networking. This gives users the most control over their cloud environment, but it also requires them to manage the operating system, applications, and other components.

Identity as a service (IDaaS): providers maintain cloud-based user profiles that authenticate users and enable access to resources or applications based on security policies, user groups, and individual privileges. The ability to integrate with various directory services (Active Directory, LDAP, etc.) and provide single sign-on across business-oriented SaaS applications is essential.

Internet of things (IoT)

The internet of things (IoT) is a network of connected smart devices providing rich operational data to enterprises. It is a catch-all term for the growing number of electronics that aren't traditional computing devices but are connected to the internet to gather data, receive instructions or both.

Industrial internet of things (IIoT)

The industrial internet of things (IIoT) connects machines and devices in industries. It is the application of instrumentation and connected sensors and other devices to machinery and vehicles in the transport, energy and manufacturing sectors.

Industry 4.0

Industry 4.0 blends technologies to create custom industrial solutions that make better use of resources. It connects the supply chain and the ERP system directly to the production line to form integrated, automated, and potentially autonomous manufacturing processes that make better use of capital, raw materials, and human resources.

IoT standards and protocols

There’s an often-impenetrable alphabet soup of protocols, standards and technologies around the Internet of Things, and this is a guide to essential IoT terms.

Narrowband IoT (NB-IoT)

NB-IoT is a communication standard designed for IoT devices to operate via carrier networks, either within an existing GSM bandwidth used by some cellular services, in an unused “guard band” between LTE channels, or independently.

IP

Internet protocol (IP) is the set of rules governing the format of data sent over IP networks. 

DHCP

DHCP stands for dynamic host configuration protocol, an IP-network protocol that lets a server automatically assign IP addresses to networked devices on the fly and share other information those devices need to communicate efficiently with other endpoints.
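
As a rough sketch of the outcome of that exchange (not the actual DISCOVER/OFFER/REQUEST/ACK wire protocol), the toy server below hands out leases from an address pool, remembers which client holds which address, and returns the "other information" alongside the lease:

```python
# Toy model of DHCP lease assignment using only the standard library.
from ipaddress import ip_network

class TinyDhcpServer:
    def __init__(self, subnet="192.168.1.0/24", lease_seconds=3600):
        hosts = ip_network(subnet).hosts()
        self.router = str(next(hosts))   # reserve the first host for the gateway
        self.pool = hosts                # remaining addresses are leasable
        self.lease_seconds = lease_seconds
        self.leases = {}                 # MAC address -> IPv4Address

    def request(self, mac):
        if mac not in self.leases:       # a renewal keeps the same address
            self.leases[mac] = next(self.pool)
        return {
            "ip": str(self.leases[mac]),
            "lease_seconds": self.lease_seconds,
            "router": self.router,
            "dns": ["192.168.1.53"],     # illustrative resolver address
        }

server = TinyDhcpServer()
print(server.request("aa:bb:cc:dd:ee:01"))   # leases 192.168.1.2
print(server.request("aa:bb:cc:dd:ee:01"))   # same client, same lease
```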

DNS

The Domain Name System (DNS) resolves the common names of websites to their underlying IP addresses, adding efficiency and even security in the process.
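
A quick way to see resolution in action is to ask the system resolver from Python's standard library; getaddrinfo returns the addresses that DNS (or the local hosts file) maps to a name:

```python
# Resolve a hostname to its IP addresses via the system resolver.
import socket

def resolve(name):
    results = socket.getaddrinfo(name, None)
    # Each result is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr for both IPv4 and IPv6.
    return sorted({info[4][0] for info in results})

print(resolve("example.com"))
```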

IPv6

IPv6 is the latest version of the Internet Protocol, which identifies devices across the internet so they can be located. It also handles packets more efficiently, improves performance and increases security.

IP address

An IP address is a number or combination of letters and numbers used to label devices connected to a network on which the Internet Protocol is used as the medium for communication. IP addresses give devices on IP networks their own identities so they can find each other.
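
Python's standard ipaddress module is a handy way to illustrate both IPv4 and IPv6 addresses and the subnet-membership question that switches and routers answer constantly. The addresses below are documentation-range examples.

```python
# Inspecting addresses and testing subnet membership with the stdlib.
from ipaddress import ip_address, ip_network

for a in (ip_address("192.0.2.10"), ip_address("2001:db8::1")):
    print(a, "is IPv" + str(a.version))

net = ip_network("192.0.2.0/24")
print(ip_address("192.0.2.10") in net)     # True: same subnet
print(ip_address("198.51.100.7") in net)   # False: needs to be routed
```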

Liquid cooling

CDU (coolant distribution unit)

Think of this as the “heart” of the liquid cooling system. It is a standalone unit (often sitting at the end of a row or inside a rack) that manages the heat exchange between the facility’s water loop and the specialized fluid running directly to the chips.

With the high flow rates required for Rubin NVL72 (up to 60 liters per minute), 2026-era CDUs are now being built with “redundant pumps” to ensure an AI training run doesn’t fail if a single motor dies.

    Approach temperature

    This is how closely the temperature of the coolant leaving a heat exchanger approaches the temperature of the medium it is rejecting heat to (for example, the supply water versus the ambient air at a dry cooler).

    As we move to 45°C (113°F) “warm-water” cooling, the “approach” becomes tighter. If your ambient air is 40°C and your water is 45°C, you have a very narrow 5-degree window to reject that heat without using energy-hungry chillers.
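
    The arithmetic is simple but worth making explicit. The snippet below just restates the 45°C-supply, 40°C-ambient example from the text as a function; the second call shows how much more headroom a cooler climate provides.

```python
# Heat-rejection headroom available to a dry cooler (supply water minus
# ambient air). Larger is easier; numbers are the example from the text.
def heat_rejection_window(water_supply_c: float, ambient_air_c: float) -> float:
    return water_supply_c - ambient_air_c

print(heat_rejection_window(45.0, 40.0))   # 5.0  -> chiller-free, but tight
print(heat_rejection_window(45.0, 25.0))   # 20.0 -> comfortable margin
```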

    Secondary fluid loop

    Large data centers have two loops. The Primary Loop goes to the roof (the dry coolers/radiators). The Secondary Loop is the ultra-pure, treated water or dielectric fluid that actually touches the IT equipment.

    By keeping them separate via a CDU, you ensure that if a pipe bursts on the roof, dirty “river water” doesn’t end up inside your $3 million AI rack.

    Dielectric fluid

    A specialized liquid that does not conduct electricity. This is what allows you to dunk a fully powered server into a tank (Immersion Cooling) without it short-circuiting.

    In 2026, the industry is split between Hydrocarbons (oils) and Fluorocarbons (engineered fluids). Watch out for PFAS regulations here—many “Phase 2” fluids are facing strict reporting requirements this year.

    Dry cooler (The chiller killer)

    Essentially a massive car radiator for a data center. It uses large fans to blow ambient air over coils of hot water.

    Previously, ambient air couldn't carry away enough heat from dense chips, so chilled water was required. But because Vera Rubin can run “hot” (45°C), dry coolers can now be used even in relatively warm climates, completely bypassing the need for expensive, water-evaporating mechanical chillers.

    Network management

    Network management is the process of administering and managing computer networks.

    Intent-based networking

    Intent-based networking (IBNS) is network management that gives network administrators the ability to define what they want the network to do in plain language and have a network-management platform automatically configure devices on the network to create the desired state and enforce policies.

    Microsegmentation

    Microsegmentation is a way to create secure zones in networks, in data centers, and cloud deployments by segregating sections so only designated users and applications can gain access to each segment.

    Software-defined networking (SDN)

    Software-defined networking (SDN) is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring. It operates by separating the network control plane from the data plane, enabling network-wide changes without manually reconfiguring each device.
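
    The toy sketch below illustrates that split: a controller object turns one network-wide policy into per-switch forwarding rules, and the switches only match and forward. The classes and rule format are invented for illustration, not a real controller API such as OpenFlow.

```python
# Minimal control-plane / data-plane separation sketch.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination prefix -> output port

    def install_rule(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip):
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return None                          # no rule: would punt to controller

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, policy):
        # The controller translates desired network-wide state into per-device
        # rules, so no switch is configured by hand.
        for dst_prefix, port in policy.items():
            for sw in self.switches:
                sw.install_rule(dst_prefix, port)

edge = Switch("edge-1")
Controller([edge]).apply_policy({"10.1.": 2, "10.2.": 3})
print(edge.forward("10.1.0.7"))   # 2
```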

    Network security

    Network security consists of the policies, processes, and practices adopted to prevent, detect, and monitor unauthorized access, misuse, modification, or denial of service on a computer network and network-accessible resources.

    Identity-based networking

    Identity-based networking ties a user’s identity to the networked services that user can receive.

    Microsegmentation

    Microsegmentation is a way to create secure zones in networks, in data centers, and cloud deployments by segregating sections so only designated users and applications can gain access to each segment.

    Network access control (NAC)

    Network Access Control is an approach to computer security that attempts to unify endpoint-security technology, user or system authentication, and network security enforcement.

    SASE

    Secure access service edge (SASE) is a network architecture that rolls software-defined wide area networking (SD-WAN) and security into a cloud service that promises simplified WAN deployment, improved efficiency and security, and appropriate bandwidth per application. SASE, a term coined by Gartner in 2019, offers a comprehensive solution for securing and optimizing network access in today’s hybrid work environment. Its core elements include the following:

    Secure web gateway (SWG): Filters and inspects web traffic, blocking malicious content and preventing unauthorized access to websites.  
    Cloud access security broker (CASB): Enforces security policies and controls for cloud applications, protecting data and preventing unauthorized access. 
    Zero trust network access (ZTNA): Grants access to applications based on user identity and device posture, rather than relying on network location. 
    Firewall-as-a-service (FWaaS): Provides a cloud-based firewall that protects networks from threats and unauthorized access. 
    Unified management: A centralized platform for managing and monitoring both network and security components.  
    Automation: Automated workflows and policies to simplify operations and improve efficiency. 
    Analytics: Advanced analytics to provide insights into network and security performance. 

    Multivendor SASE

    Multivendor SASE refers to a SASE platform whose components are provided by multiple vendors. This means you’d source the different components of the platform, such as the secure web gateway (SWG), cloud access security broker (CASB) and zero-trust network access (ZTNA), from different vendors, which allows you to choose best-of-breed solutions for each component. By using a multivendor SASE platform, you avoid being tied to a single vendor and reduce the risk of vendor lock-in. On the negative side, managing multiple vendors is more time-consuming than managing a single-vendor solution, and integration issues among vendors can impact the performance, efficiency and reliability of the SASE solution.

    Single-vendor SASE

    Single-vendor SASE refers to a solution that is provided by a single vendor. This means that all of the components of the SASE platform, such as the secure web gateway (SWG), cloud access security broker (CASB), and zero-trust network access (ZTNA) are delivered by a single vendor. Advantages of single-vendor SASE include simplified management, smoother integration and enhanced support. Disadvantages include vendor lock-in, more limited capabilities compared to multivendor platforms, and higher costs for large organizations.

    Network switch

    A network switch is a device that operates at the data link layer of the OSI model (Layer 2). It takes in packets sent by devices connected to its physical ports and sends them out again, but only through the ports that lead to the devices the packets are intended to reach. Switches can also operate at the network layer (Layer 3), where routing occurs.

    Open systems interconnection (OSI) reference model

    The Open Systems Interconnection (OSI) reference model is a framework for structuring messages transmitted between any two entities in a network.

    Power over Ethernet (PoE)

    PoE is the delivery of electrical power to networked devices over the same data cabling that connects them to the LAN. This simplifies the devices themselves by eliminating the need for an electric plug and power converter, and makes it unnecessary to have separate AC electric wiring and sockets installed near each device.

    Routers

    A router is a networking device that forwards data packets between computer networks. Routers operate at Layer 3 of the OSI model and perform traffic-directing functions between subnets within organizations and on the internet.

    Border-gateway protocol (BGP)

    Border Gateway Protocol is a standardized protocol designed to exchange routing and reachability information among the large, autonomous systems on the internet.

    UDP port

    UDP (User Datagram Protocol) is a communications protocol primarily used for establishing low-latency and loss-tolerant connections between applications on the internet. It speeds up transmissions by enabling the transfer of data before the receiving device agrees to the connection.
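
    The standard-library sketch below shows that connectionless behavior: the sender transmits a datagram immediately, with no handshake, and the receiver simply reads whatever arrives.

```python
# Minimal UDP exchange over loopback using the standard socket module.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"telemetry sample 42", ("127.0.0.1", port))   # no handshake

data, addr = receiver.recvfrom(2048)
print(data, "from", addr)

sender.close()
receiver.close()
```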

    Storage networking

    Storage networking is the process of interconnecting external storage resources over a network to all connected computers/nodes.

    Network attached storage (NAS)

    Network-attached storage (NAS) is a category of file-level storage that’s connected to a network and enables data access and file sharing across a heterogeneous client and server environment.

    Non-volatile memory express (NVMe)

    A communications protocol developed specifically for all-flash storage, NVMe enables faster performance and greater density compared to legacy protocols. It’s geared for enterprise workloads that require top performance, such as real-time data analytics, online trading platforms, and other latency-sensitive workloads.

    Solid-state drive (SSD)

    Solid-state drives (SSDs) are storage devices that use flash memory to store data. Unlike traditional hard disk drives (HDDs), SSDs have no moving parts, making them faster, more reliable and quieter.

    Storage-area network (SAN)

    A storage-area network (SAN) is a dedicated, high-speed network that provides access to block-level storage. SANs were adopted to improve application availability and performance by segregating storage traffic from the rest of the LAN. 

    Tensor processing unit (TPU)

    A tensor processing unit (TPU) is an integrated circuit developed by Google for accelerating machine learning workloads. Unlike general-purpose CPUs or graphics processing units (GPUs), TPUs are designed and optimized specifically to handle the massive matrix multiplication and vector operations that are fundamental to neural networks and other machine learning algorithms.

    While both TPUs and GPUs are used to accelerate AI, they have different design philosophies:

    TPUs are optimized for massive, high-throughput machine learning tasks. They excel at inference and training large models.
    GPUs are more versatile and programmable. While also excellent for parallel processing, they are not exclusively for machine learning and are widely used for computer graphics, scientific computing, and general-purpose parallel programming.

    Virtualization

    Virtualization is the creation of a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources. This includes virtual servers that can co-exist on the same hardware, but behave separately.

    Containerization

    Containerization (e.g., Docker, Kubernetes) refers to a form of virtualization at the operating-system level. That is, rather than virtualizing hardware, containers virtualize the operating system itself. All containers on a single host share the same underlying OS kernel. Each container bundles only the application code, its runtime, system tools, libraries and settings. This makes containers much smaller and faster to start than virtual machines (VMs). They provide isolation at the process and filesystem level, running in isolated “user spaces.”

    Hypervisor

    A hypervisor is software that separates a computer’s operating system and applications from the underlying physical hardware, allowing the hardware to be shared among multiple virtual machines.

    Network virtualization

    Network virtualization is the combination of network hardware and software resources with network functionality into a single, software-based administrative entity known as a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.

    Network function virtualization (NFV)

    Network functions virtualization (NFV) uses commodity server hardware to replace specialized network appliances for more flexible, efficient, and scalable services.

    Application-delivery controller (ADC)

    An application delivery controller (ADC) is a network component that manages and optimizes how client machines connect to web and enterprise application servers. In general, an ADC is a hardware device or a software program that can manage and direct the flow of data to applications.

    Virtual machine (VM)

    A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. In a VM instance, one or more guest machines can run on a physical host computer.

    VLAN
     A virtual LAN (VLAN) allows network administrators to logically segment a single physical LAN into multiple distinct broadcast domains. In simpler terms, a VLAN lets you group devices together as if they were on a separate network, even if those devices are connected to the same physical network switch or to different switches across a building or campus.

    Traditionally, a LAN segments traffic using physical network segments, where each segment is a separate broadcast domain. Any device on that segment can hear broadcast traffic from other devices on the same segment. VLANs break this physical constraint. When a VLAN is configured on a switch, ports on that switch are assigned to specific VLAN IDs. Traffic from devices connected to ports in one VLAN cannot directly communicate with devices in another VLAN, unless a Layer 3 device (like a router or a Layer 3 switch) is used to route traffic between them.

    This logical segmentation is achieved by adding a tag to the Ethernet frames as they traverse the network. This tag identifies which VLAN the frame belongs to, allowing switches to keep traffic within its assigned VLAN.
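
    For the curious, the sketch below packs and parses the 802.1Q tag described above: a 16-bit tag protocol identifier (0x8100) followed by a 16-bit field carrying 3 priority bits, a drop-eligible bit and the 12-bit VLAN ID.

```python
# Build and parse an IEEE 802.1Q VLAN tag (4 bytes inserted into the frame).
import struct

TPID = 0x8100   # tag protocol identifier for 802.1Q

def build_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag: bytes) -> dict:
    tpid, tci = struct.unpack("!HH", tag)
    return {"tpid": hex(tpid), "priority": tci >> 13,
            "dei": (tci >> 12) & 1, "vlan_id": tci & 0x0FFF}

print(parse_tag(build_tag(vlan_id=120, priority=5)))
# {'tpid': '0x8100', 'priority': 5, 'dei': 0, 'vlan_id': 120}
```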

    VPN (virtual private network)

    A virtual private network (VPN) can create secure remote-access and site-to-site connections inexpensively, is a stepping stone to software-defined WANs, and is proving useful in IoT.

    Split tunneling

    Split tunneling is a device configuration that ensures that only traffic destined for corporate resources goes through the organization’s internet VPN, with the rest of the traffic going outside the VPN, directly to other sites on the internet.
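
    In practice the split-tunnel decision boils down to a prefix check, as in the sketch below; the corporate prefixes and addresses are illustrative placeholders.

```python
# Decide whether a destination should use the VPN tunnel or go out directly.
from ipaddress import ip_address, ip_network

CORPORATE_PREFIXES = [ip_network("10.0.0.0/8"), ip_network("172.16.0.0/12")]

def route_for(dst: str) -> str:
    addr = ip_address(dst)
    if any(addr in net for net in CORPORATE_PREFIXES):
        return "vpn-tunnel"
    return "direct-internet"

print(route_for("10.20.30.40"))     # vpn-tunnel
print(route_for("198.51.100.7"))    # direct-internet
```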

    WAN

    A WAN, or wide-area network, is a network that uses various links — private lines, Multiprotocol Label Switching (MPLS), virtual private networks (VPNs), wireless (cellular), the internet — to connect organizations’ geographically distributed sites. In an enterprise, a WAN could connect branch offices and individual remote workers with headquarters or the data center.

    Data deduplication

    Data deduplication, or dedupe, is the identification and elimination of duplicate blocks within a dataset, reducing the amount of traffic that must go on WAN connections. Deduplication can find redundant blocks of data within files from different directories, different data types, even different servers in different locations.
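
    The core mechanism is straightforward, as the sketch below shows: hash each fixed-size block and keep (or transmit) only blocks that haven't been seen before.

```python
# Block-level deduplication sketch using SHA-256 fingerprints.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    seen, unique_blocks = set(), []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:          # only new blocks need to cross the WAN
            seen.add(digest)
            unique_blocks.append(block)
    return unique_blocks

payload = b"A" * 16384 + b"B" * 4096    # four identical blocks, then one new one
unique = dedupe(payload)
print(f"{len(payload) // BLOCK_SIZE} blocks in, {len(unique)} unique blocks out")
# -> 5 blocks in, 2 unique blocks out
```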

    MPLS

    Multi-protocol label switching (MPLS) is a packet protocol that ensures reliable connections for real-time applications, but it’s expensive, leading many enterprises to consider SD-WAN as a means to limit its use.

    SASE

    Secure access service edge (SASE) is a network architecture that rolls software-defined wide area networking (SD-WAN) and security into a cloud service that promises simplified WAN deployment, improved efficiency and security, and appropriate bandwidth per application. SASE, a term coined by Gartner in 2019, offers a comprehensive solution for securing and optimizing network access in today’s hybrid work environment. Its core elements include the following:

    Secure web gateway (SWG): Filters and inspects web traffic, blocking malicious content and preventing unauthorized access to websites.  
    Cloud access security broker (CASB): Enforces security policies and controls for cloud applications, protecting data and preventing unauthorized access. 
    Zero trust network access (ZTNA): Grants access to applications based on user identity and device posture, rather than relying on network location. 
    Firewall-as-a-service (FWaaS): Provides a cloud-based firewall that protects networks from threats and unauthorized access. 
    Unified management: A centralized platform for managing and monitoring both network and security components.  
    Automation: Automated workflows and policies to simplify operations and improve efficiency. 
    Analytics: Advanced analytics to provide insights into network and security performance. 

    SD-WAN

    Software-defined wide-area networking (SD-WAN) is software that manages and enforces the routing of WAN traffic to the appropriate wide-area connection based on policies that can take into consideration factors including cost, link performance, time of day and application needs. Like its bigger technology brother, software-defined networking, SD-WAN decouples the control plane from the data plane.
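
    The sketch below shows the flavor of that policy-based path selection: each application's policy filters and ranks the available links. The link names, metrics and policies are invented for the example.

```python
# Toy SD-WAN path selection based on per-application policy.
LINKS = {
    "mpls":      {"latency_ms": 18, "cost_per_gb": 8.0,  "up": True},
    "broadband": {"latency_ms": 35, "cost_per_gb": 0.5,  "up": True},
    "lte":       {"latency_ms": 60, "cost_per_gb": 12.0, "up": True},
}

POLICIES = {
    "voip":   {"max_latency_ms": 30,  "prefer_cheap": False},
    "backup": {"max_latency_ms": 500, "prefer_cheap": True},
}

def pick_link(app: str) -> str:
    policy = POLICIES[app]
    candidates = [(name, link) for name, link in LINKS.items()
                  if link["up"] and link["latency_ms"] <= policy["max_latency_ms"]]
    if policy["prefer_cheap"]:
        key = lambda c: c[1]["cost_per_gb"]
    else:
        key = lambda c: c[1]["latency_ms"]
    return min(candidates, key=key)[0]

print(pick_link("voip"))    # mpls: lowest latency within the 30 ms ceiling
print(pick_link("backup"))  # broadband: cheapest link that meets the policy
```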

    VPN

    Virtual private networks (VPNs) can create secure remote-access and site-to-site connections inexpensively, can be an option in SD-WANs, and are proving useful in IoT.

    Wi-Fi

    Wi-Fi refers to the wireless LAN technologies that utilize the IEEE 802.11 standards for communications. Wi-Fi products use radio waves to transmit data between devices with Wi-Fi software clients and access points that route the data to the connected wired network.

    802.11ad

    802.11ad is an amendment to the IEEE 802.11 wireless networking standard, developed to provide a multiple gigabit wireless system standard at 60 GHz frequency, and is a networking standard for WiGig networks.

    802.11ay

    802.11ay is a proposed enhancement to the current (2021) technical standards for Wi-Fi. It is the follow-up to IEEE 802.11ad, quadrupling the bandwidth and adding MIMO up to 8 streams. It will be the second WiGig standard.

    802.11ax (Wi-Fi 6)

    802.11ax, officially marketed by the Wi-Fi Alliance as Wi-Fi 6 and Wi-Fi 6E, is an IEEE standard for wireless local-area networks and the successor of 802.11ac. It is also known as High Efficiency Wi-Fi, for the overall improvements to Wi-Fi 6 clients under dense environments.

    Access point

    An access point is a networking device that allows wireless-capable devices to connect to a wired network. Access points typically create a wireless local area network (WLAN) using Wi-Fi standards.

    Wi-Fi 6E

    Wi-Fi 6E is an extension of Wi-Fi 6 unlicensed wireless technology operating in the 6GHz band, and it provides lower latency and faster data rates than Wi-Fi 6. The spectrum also has a shorter range and supports more channels than bands that were already dedicated to Wi-Fi, making it suitable for deployment in high-density areas like stadiums.

    Beamforming

    Beamforming is a technique that focuses a wireless signal towards a specific receiving device, rather than having the signal spread in all directions from a broadcast antenna, as it normally would. The resulting more direct connection is faster and more reliable than it would be without beamforming.

    Controllerless Wi-Fi

    It’s no longer necessary for enterprises to install dedicated Wi-Fi controllers in their data centers because that function can be distributed among access points or moved to the cloud, but it’s not for everybody.

    MU-MIMO

    MU-MIMO stands for multi-user, multiple input, multiple output, and is wireless technology supported by routers and endpoint devices. MU-MIMO is the next evolution from single-user MIMO (SU-MIMO), which is generally referred to simply as MIMO. MIMO technology was created to help increase the number of simultaneous users a single access point can support, which was initially achieved by increasing the number of antennas on a wireless router.

    OFDMA

    Orthogonal frequency-division multiple-access (OFDMA) provides Wi-Fi 6 with high throughput and more network efficiency by letting multiple clients connect to a single access point simultaneously.

    Wi-Fi 6 (802.11ax)

    802.11ax, officially marketed by the Wi-Fi Alliance as Wi-Fi 6 and Wi-Fi 6E, is an IEEE standard for wireless local-area networks and the successor of 802.11ac. It is also known as High Efficiency Wi-Fi, for the overall improvements to Wi-Fi 6 clients under dense environments.

    Wi-Fi 7

    Wi-Fi 7 is currently the leading edge of wireless internet standards, providing more bandwidth, lower latency and more resiliency than prior standards. A year ago, there was some speculation that 2024 would be the breakout year for Wi-Fi 7. While some Wi-Fi 7 gear began to emerge in 2024, it looks like 2025 will be the year for Wi-Fi 7 rollouts. 

    Wi-Fi standards and speeds

    Ever-improving Wi-Fi standards make for denser, faster Wi-Fi networks.

    WPA3

    The WPA3 Wi-Fi security standard tackles WPA2 shortcomings to better secure personal, enterprise, and IoT wireless networks.

    Zero trust

    Zero trust is a security model based on the principle of “never trust, always verify.” It assumes that no user, device, or application, whether inside or outside the network, should be trusted by default. Access is granted only after authentication and authorization, based on context and least privilege.

    Zero-water cooling

    Zero-water cooling refers to various cooling technologies designed to eliminate or substantially reduce the amount of fresh water used for cooling purposes in data centers and power plants.

    The goal of zero-water cooling is to achieve a near-zero water usage effectiveness (WUE), a metric that measures water consumed for cooling against energy consumed by IT equipment.

    The technology is critical because a typical hyperscale data center can evaporate more than a million liters of water a day. Zero-water cooling addresses this by significantly reducing or eliminating the dependency on local water supplies, making it a critical sustainability goal for industries in water-stressed regions.
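
    WUE is commonly calculated as the water consumed on site (in liters) divided by the energy delivered to the IT equipment (in kilowatt-hours). The sketch below works through two illustrative cases with made-up numbers, one evaporative-cooled site and one zero-water design.

```python
# Water usage effectiveness: liters of water per kWh of IT energy.
def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    return annual_water_liters / annual_it_energy_kwh

# Evaporative cooling at roughly a million liters a day (illustrative figures):
print(round(wue(365e6, 200e6), 2))   # ~1.8 L/kWh
# A zero-water design using dry coolers:
print(round(wue(2e6, 200e6), 3))     # 0.01 L/kWh
```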

    Shape
    Shape
    Stay Ahead

    Explore More Insights

    Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

    Shape

    AI’s need for speed, optical connectivity in focus at OFC 2026

    “While the scale-up domain today is largely serviced by passive copper, data rates and rack densities are necessitating a shift to alternatives,” Naji wrote. “While many of the optical providers like Marvell (following its acquisition of Celestial AI), Broadcom, and Nvidia believe that co-packaged optic is the right solution, others

    Read More »

    Arm shifts course, moves into silicon business

    With a Thermal Design Power (TDP) of 300 watts, the AGI CPU draw significantly less power than X86 based CPUs from Intel and AMD. It supports high-density 1U server chassis that allow air-cooled deployments with up to 8,160 cores per rack, and liquid-cooled systems delivering 45,000+ cores per rack. Meta

    Read More »

    DOE and GSA Announce Collaborative Effort for a New Headquarters for the U.S. Department of Energy

    WASHINGTON—The U.S. Department of Energy (DOE), in partnership with the U.S. General Services Administration (GSA), announced today that DOE’s headquarters will relocate from the James V. Forrestal Building to the Lyndon B. Johnson (LBJ) building. LBJ currently serves as the headquarters for the U.S. Department of Education (ED). The relocation to the LBJ building will save taxpayers over $350 million in deferred maintenance and modernization costs, advancing President Trump’s commitment to eliminating waste and promoting efficiency within government agencies.“Relocating to the LBJ building will deliver significant taxpayer savings and will ensure the Energy Department continues to deliver on its mission,” said Energy Secretary Chris Wright. “We look forward to working closely with the General Services Administration and the Education Department throughout this process.” The LBJ building has been modernized to a Class A building with minimal deferred maintenance. All DOE Forrestal staff will be reassigned to LBJ, DOE Germantown Campus, Portals, or 950 L’Enfant.  “GSA is partnering with the Department of Education and the Department of Energy to match their missions of tomorrow with ideal environments that powers their talented workforce, cuts waste, and lowers costs,” said GSA Administrator Edward C. Forst. “This is the government working smarter for the American people. I want to thank Secretary Wright and Secretary McMahon for their positive energy and collaboration in executing President Trump’s directive to strengthen the government’s real estate portfolio.” This effort aligns with the Trump Administration’s broader strategy to streamline the federal real estate footprint, reduce wasteful spending, and support a high-performing government workforce with facilities that reflect modern expectations for efficiency and accountability. For more information, please visit the U.S. General Services Administration (GSA), U.S. Department of Energy (DOE), and Accelerated Disposition Website.  ### 

    Read More »

    Energy Department Announces $50 Million Investment to Advance Affordable, Reliable, and Secure Energy for Tribes

    These new investments will support Tribal-led energy project planning and development, strengthening energy reliability and increasing electricity access across Tribal communities  WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Indian Energy (IE) today announced a $50 million notice of funding opportunity (NOFO) aimed at fostering affordable, reliable, and secure energy solutions in Indian Country. This investment will support Tribal-led community-scale energy project planning and development and large-scale energy project planning.   In accordance with President Trump’s Executive Order, Unleashing American Energy, this NOFO highlights the fundamental role of energy in strengthening Tribal economies.   “This investment reflects the Trump Administration’s commitment to ensuring Tribal communities have access to affordable, reliable, and secure energy,” said U.S. Secretary of Energy Chris Wright. “By strengthening local energy infrastructure, we are supporting long-term economic growth, energy independence, and resilience across Indian Country. “This $50 million competitive funding opportunity for Tribal entities is directly aligned with the priorities of the U.S. Department of Energy,” said DOE’s Office of Indian Energy Director Eric Mahroum. “This funding will unleash Tribal energy development— supporting energy projects that aim to cut energy costs, expand electricity access, and advance economic opportunities. It’s exciting and like nothing we have offered before.”   Through the Unleashing Tribal Energy Development NOFO, the Office of Indian Energy is soliciting applications from Indian Tribes, which include Alaska Native regional corporations and Village corporations, Tribal and intertribal organizations, Tribal Energy Development Organizations, and Tribal Colleges and Universities—or any consortium of these eligible groups–to focus on:   Construction and installation of Tribal community-scale energy projects to meet the needs of the community   Predevelopment activities required to identify community-scale energy opportunities and bring projects from concept to implementation ready   Planning, assessment, and feasibility activities to de-risk and advance development for large-scale Tribal energy projects that provide opportunities for revenue generation and economic development   DOE works comprehensively from inception through commercialization, helping Tribes develop solutions

    Read More »

    Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

    Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent

    Read More »

    Energy Department Begins Delivering SPR Barrels at Record Speeds

    WASHINGTON — The U.S. Department of Energy (DOE) today announced the award of contracts for the initial phase of the Strategic Petroleum Reserve (SPR) Emergency Exchange as directed by President Trump. The first oil shipments began today—just nine days after President Trump and the Department of Energy announced the United States would lead a coordinated release of emergency oil reserves among International Energy Agency (IEA) member nations to address short-term supply disruptions. Under these initial awards, DOE will move forward with an exchange of 45.2 million barrels of crude oil and receive 55 million barrels in return, all at no cost to the taxpayer. This represents the first tranche of the United States’ 172-million-barrel release. Companies will receive 10 million barrels from the Bayou Choctaw SPR site, 15.7 million barrels from Bryan Mound, and 19.5 million barrels from West Hackberry. “Thanks to President Trump, the Energy Department began this first exchange at record speeds to address short-term supply disruptions while also strengthening the Strategic Petroleum Reserve by returning additional barrels at no cost to taxpayers,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “This exchange not only maintains reliability in the current market but will generate hundreds of millions of dollars in value in the form of additional barrels for the American people when the barrels are returned.” This initial action will ultimately add close to 10 million barrels to the SPR’s inventory when the barrels are returned. Taxpayers will benefit from both the short-term support for global supply and long-term growth of the SPR’s inventory. This helps protects U.S. and global energy security. The Trump Administration continues to pursue additional opportunities to strengthen the reserve and restore its long-term readiness as a cornerstone of American energy security. For more information on the Strategic Petroleum Reserve and DOE’s

    Read More »

    Then & Now: Oil prices, US shale, offshore, and AI—Deborah Byers on what changed since 2017

    In this Then & Now episode of the Oil & Gas Journal ReEnterprised podcast, Managing Editor and Content Strategist Mikaila Adams reconnects with Deborah Byers, nonresident fellow at Rice University’s Baker Institute Center for Energy Studies and former EY Americas industry leader, to revisit a set of questions first posed in 2017. In 2017, the industry was emerging from a downturn and recalibrating strategy; today, it faces heightened geopolitical risk, market volatility, and a rapidly evolving technology landscape. The conversation examines how those earlier perspectives have aged—covering oil price bands and the speed of recovery from geopolitical shocks, the role of US shale relative to OPEC in balancing global supply, and the shift from scarcity to economic abundance driven by technology and capital discipline. Adams and Byers also compare the economics and risk profiles of shale and offshore development, including the growing role of Brazil, Guyana, and the Gulf of Mexico, and discuss how infrastructure and regulatory constraints shape market outcomes. The episode further explores where digital transformation—particularly artificial intelligence—is delivering tangible returns across upstream operations, from predictive maintenance and workforce planning to capital project execution. The discussion concludes with insights on consolidation and scale in the Permian basin, the strategic rationale behind recent megamergers, and the industry’s ongoing challenge to attract and retain next‑generation talent through flexibility, technical opportunity, and purpose‑driven work.

    Read More »

    Eni plans tieback of new gas discoveries offshore Libya

    Eni North Africa, a unit of Eni SPA, together with Libya’s National Oil Corp., plans to develop two new gas discoveries offshore Libya as tiebacks to existing infrastructure. The gas discoveries were made offshore Libya, about 85 km off the coast in about 650 ft of water. Bahr Essalam South 2 (BESS 2) and Bahr Essalam South 3 (BESS 3), adjacent geological structures, were successfully drilled through the exploration well C1-16/4 and the appraisal well B2-16/4 about 16 km south of Bahr Essalam gas field, which lies about 110 km from the Tripoli coast. Gas-bearing intervals were encountered in both wells within the Metlaoui formation, the main productive reservoir of the area. The acquired data indicate the presence of a high-quality reservoir, with productive capacity confirmed by the well test already carried out on the first well. Preliminary volumetric estimates indicate that the BESS 2 and BESS 3 structures jointly contain more than 1 tcf of gas in place. Their proximity to Bahr Essalam field will enable rapid development through tie-back, the operator said. The gas produced will be supplied to the Libyan domestic market and for export to Italy. Bahr Essalam produces through the Sabratha platform to the Mellitah onshore treatment plant.

    Read More »

    Networking terms and definitions

    Monitoring: DCIM tools provide real-time visibility into the data center environment, tracking metrics like power consumption, temperature, humidity, and equipment status.   Management: DCIM enables administrators to control and manage various aspects of the data center, including power distribution, cooling systems, and IT assets.  Planning: DCIM facilitates capacity planning, helping data center operators understand current resource utilization and forecast future needs.  Optimization: DCIM helps identify areas for improvement in energy efficiency, resource allocation, and overall operational efficiency.  Data center sustainability Data center sustainability is the practice of designing, building and operating data centers in a way that minimizes their environmental by reducing energy consumption, water usage and waste generation, while also promoting sustainable practices such as renewable energy and efficient resource management. Hyperconverged infrastructure (HCI) Hyperconverged infrastructure combines compute, storage and networking in a single system and is used frequently in data centers. Enterprises can choose an appliance from a single vendor or install hardware-agnostic hyperconvergence software on white-box servers. Edge computing Edge computing is a distributed computing architecture that brings computation and storage closer to the sources of data. That is, instead of sending all data to a centralized cloud or data center, processing occurs at or near the edge of the network, where devices like sensors, IoT devices, or local servers are located to process, analyze and retain the data.  In short, it’s about processing data closer to where it’s generated, which is designed to minimize latency, reduce bandwidth usage,and enable real-time responses. Edge AI Edge AI is the deployment and execution of artificial intelligence (AI) algorithms on edge devices or local servers, rather than relying solely on cloud-based, more centralized, AI processing. This involves running machine learning models and AI applications directly on devices at the edge of the network. Some key aspects of edge AI include the

    Read More »

    Data center poaching adds to staffing crisis

    “You can’t just not have a pipeline and keep drawing from the same talent pool. It’s going to wane. It’s going to dwindle, and then eventually you’re going to be at a point where you are needing to upskill a bunch of people, rapidly all at once, and you don’t have enough senior experts to really pass on that information,” Weinschenk said. Shortages are shifting up the stack In 2023, Uptime data showed most staffing pain at junior and mid-level roles, particularly in facilities. Senior gaps were visible but less severe. By 2024, electrical expertise had become a pressure point, reflecting a broader trade shortage just as infrastructures densified and voltages increased. When asked which roles in the data center have the highest rates of staff turnover, respondents said: Operations junior/mid-level: 57% Operations management: 27% Electrical: 21% Cabling/IT: 20% Senior management/strategy: 12% Design: 7% None: 9% By 2025, a pattern emerged: Operations management roles overtook junior positions as shortage areas, Uptime reported, marking the arrival of the silver tsunami as highly experienced managers and engineers retire without enough trained successors to replace them. As more sites are built—often in regions with limited local expertise—operators are discovering they cannot simply hire experience indefinitely, Uptime said. The pool of ready-made experts is shrinking just as demand rises, according to its data. Poaching masks a deeper talent pipeline failure Uptime survey data revealed how heavily the sector leans on poaching. Roughly a quarter of staff departures are employees hired away by competitors; only a small amount of workers leave the industry entirely. Instead of investing in training and upskilling, many operators are rotating the same set of skilled people around the industry, hoping higher pay will keep them in place. Uptime said that this creates several long-term risks:

    Read More »

    Panasonic says datacenter batteries are selling out and AI is to blame

    AI servers are rewriting the power rulebook The root cause, Panasonic noted in the statement, is the electrical behavior of AI workloads. Unlike conventional server applications, AI inference and training draw large amounts of electricity in short bursts to sustain GPU processing, causing peak power levels to spike rapidly and voltages to fluctuate. “Peak power levels for such servers can rise rapidly, and voltages can often become unstable,” the statement said. “Securing stable, highly reliable power supplies is an absolute necessity for AI datacenters.” Vertiv warned in its 2025 Data Center Trends predictions that AI racks must handle loads that “can fluctuate from a 10% idle to a 150% overload in a flash,” requiring UPS systems and batteries with significantly higher power densities than current infrastructure provides. Panasonic said the solution gaining traction among hyperscalers is to place a battery backup unit on each server rack rather than rely on centralized UPS infrastructure upstream, absorbing voltage instability at the source. The company said its systems also carry a peak shaving function that stores off-peak electricity and deploys it during demand spikes, reducing peak grid draw at a time when AI-driven consumption faces growing regulatory and utility scrutiny. Several independent research bodies have reached similar conclusions on the severity of the power challenge ahead. Uptime Institute, in its Five Data Center Predictions for 2026, said “developers will not outrun the power shortage,” with research analyst Max Smolaks warning the crisis “is likely to last many years.” The IEA projected global datacenter electricity consumption could exceed 1,000 TWh by 2026, more than double 2022 levels, while Gartner has warned that energy shortages could restrict 40% of AI datacenters by 2027. Gogia said the shift runs deeper than a hardware swap. “This is not backup in the traditional sense. This is active stabilisation,”

    Read More »

    Why AI rack densities make liquid cooling non-negotiable

    Average rack power density has more than doubled in two years, from 8 kW to 17 kW, and is projected to reach 30 kW by 2027, according to anOctober 2024 McKinsey report, with AI training racks already well ahead of that average. Those limits show up in GPU clock speed. H100 GPUs under inadequate air cooling can throttle to a fraction of their rated clock speed within seconds of a sustained training run. In distributed jobs across thousands of GPUs, one throttled chip can stall the entire run. TheDOE estimates cooling accounts for up to 40% of data center energy use. JLL research establishes three density thresholds: Up to ~20 kW per rack: air cooling is adequate Up to ~100 kW: rear-door heat exchangers extend viability Above ~175 kW: immersion cooling is required Direct-to-chip cooling fills the middle band, handling densities between ~100 and ~175 kW where rear-door exchangers fall short and immersion is not yet warranted. Hot water changes the economics Mechanical chillers are one of the biggest energy draws in any liquid-cooled data center, and until recently there were an unavoidable cost of liquid cooling. Nvidia’s Vera Rubin processor is changing that. At CES in January 2026, Jensen Huang announced that Vera Rubin supports liquid cooling at 45 degrees Celsius, high enough for data centers to reject heat through dry coolers using ambient air rather than mechanical chillers.Nvidia’s CES press release confirmed Rubin is in full production, with customer availability in the second half of 2026. According toNvida’s product specifications, the Vera Rubin NVL72 uses warm-water, single-phase direct liquid cooling at a 45°C supply temperature, allowing data centers to reject heat through dry coolers using ambient air rather than energy-intensive chiller systems.

    Read More »

    Executive Roundtable: AI Infrastructure Enters Its Execution Era

    Miranda Gardiner, iMasons Climate Accord:  Since 2023, the digital infrastructure industry has moved definitively from planning to execution in the AI infrastructure cycle. Industry analysts forecast continued exponential growth, with active capacity at least doubling between now and 2030 and total capacity potentially tripling, quintupling, or more. In practical terms, we’ll see more digital infrastructure capacity come online in the next five year than has been built in the past 30 years, representing a historic industrial transformation requiring trillions of dollars in capital expenditure and a workforce measured in the millions. Design and organizational flexibility, integrated execution of sustainable solutions, and community-centered workforce development will separate those that thrive from those that struggle. Effective organizations will pivot quickly under these constantly shifting conditions and the leaders will be those that build fast but build right, as strategic flexibility balances long-term performance, efficiency, and regulatory compliance. We already know the resource intensity required to bring AI resources online and are working diligently to ensure this short-term, delivering streamlined and optimized solutions for everything from site selection to cooling and power management while lower lifecycle emissions. Additionally, in some regions, grid interconnection timelines and power availability are already the pacing item for data center development. Organizations that align their sustainability targets and energy procurement strategies will have a clearer path to execution. An operational model capable of delivering multiple large-scale facilities simultaneously across regions is another key piece to successful outcomes. Standardized, repeatable frameworks that reduce engineering time and accelerate permitting. We hear often about collaboration and strong partnerships, and these will be critical with utilities, regulators, and equipment manufacturers to anticipate bottlenecks before they impact schedules. Execution discipline will increasingly determine competitive advantage as the industry scales. The world and, especially, our host communities, are watching closely. Projects that move forward

    Read More »

    Jensen Huang Maps the AI Factory Era at NVIDIA GTC 2026

    SAN JOSE, Calif. — If there was a single message that emerged from Jensen Huang’s keynote at Nvidia’s GTC conference this week, it was this: the artificial intelligence revolution is entering its infrastructure phase. For the past several years, the technology industry has been preoccupied with training ever larger models. But in Huang’s telling, that era is already giving way to something far bigger: the industrial-scale deployment of AI systems that run continuously, generating intelligence on demand. “The inference inflection point has arrived,” Huang told the audience gathered at the SAP Center. That shift carries enormous implications for the data center industry. Instead of episodic bursts of compute used to train models, the next generation of AI systems will require persistent, high-throughput infrastructure designed to serve billions, and eventually trillions, of inference requests every day. And the scale of the buildout Huang envisions is staggering. Throughout the keynote, the Nvidia CEO repeatedly referenced what he believes will become a trillion-dollar global market for AI infrastructure in the coming years, spanning accelerated computing systems, networking fabrics, storage architectures, power systems, and the facilities required to house them. At that scale, Huang argued, data centers are no longer simply IT facilities. They are truly becoming AI factories: industrial systems designed to convert electricity into tokens. “Tokens are the new commodity,” Huang said. “AI factories are the infrastructure that produces them.” Across more than two hours on stage, Huang sketched the architecture of that new computing platform, introducing new computing systems, networking technologies, software frameworks, and infrastructure blueprints designed to support what Nvidia believes will be the largest computing buildout in history. Four main themes defined the presentation: • The arrival of the inference inflection point.• The emergence of OpenClaw as a foundational operating layer for AI agents.• New hybrid inference architectures involving

    Read More »

    Microsoft will invest $80B in AI data centers in fiscal 2025

    And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spending of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

    Read More »

    John Deere unveils more autonomous farm machines to address skilled labor shortage

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular presence as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

    Read More »

    2025 playbook for enterprise AI success, from agents to evals

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

    Read More »

    OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, and OpenAI, as well as the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

    Read More »