
Why Data Scientists Should Care about Containers — and Stand Out with This Knowledge


“I train models, analyze data and create dashboards — why should I care about Containers?”

Many people who are new to the world of data science ask themselves this question. But imagine you have trained a model that runs perfectly on your laptop, yet error messages keep popping up when others access it in the cloud — for example, because they are using different library versions.

This is where containers come into play: They allow us to make machine learning models, data pipelines and development environments stable, portable and scalable — regardless of where they are executed.

Let’s take a closer look.

Table of Contents
1 — Containers vs. Virtual Machines: Why containers are more flexible than VMs
2 — Containers & Data Science: Do I really need Containers? And 4 reasons why the answer is yes.
3 — First Practice, then Theory: Container creation even without much prior knowledge
4 — Your 101 Cheatsheet: The most important Docker commands & concepts at a glance
Final Thoughts: Key takeaways as a data scientist
Where Can You Continue Learning?

1 — Containers vs. Virtual Machines: Why containers are more flexible than VMs

Containers are lightweight, isolated environments. They contain applications with all their dependencies. They also share the kernel of the host operating system, making them fast, portable and resource-efficient.

I have written extensively about virtual machines (VMs) and virtualization in ‘Virtualization & Containers for Data Science Newbies’. The most important thing to know is that VMs simulate complete computers: each runs its own operating system with its own kernel on top of a hypervisor. This means that they require more resources, but also offer greater isolation.

Both containers and VMs are virtualization technologies.

Both make it possible to run applications in an isolated environment.

But in the two descriptions, you can also see the 3 most important differences:

  • Architecture: While each VM has its own operating system (OS) and runs on a hypervisor, containers share the kernel of the host operating system. However, containers still run in isolation from each other. A hypervisor is the software or firmware layer that manages VMs and abstracts the operating system of the VMs from the physical hardware. This makes it possible to run multiple VMs on a single physical server.
  • Resource consumption: As each VM contains a complete OS, it requires a lot of memory and CPU. Containers, on the other hand, are more lightweight because they share the host OS.
  • Portability: You have to customize a VM for different environments because it requires its own operating system with specific drivers and configurations that depend on the underlying hardware. A container, on the other hand, can be created once and runs anywhere a container runtime is available (Linux, Windows, cloud, on-premises). A container runtime is the software that creates, starts and manages containers — the best-known example is Docker.

You can experiment faster with Docker — whether you’re testing a new ML model or setting up a data pipeline. You can package everything in a container and run it immediately. And you avoid “it works on my machine” problems: your container runs the same everywhere, so you can simply share it.

2 — Containers & Data Science: Do I really need Containers? And 4 reasons why the answer is yes.

As a data scientist, your main task is to analyze, process and model data to gain valuable insights and predictions, which in turn are important for management.

Of course, you don’t need to have the same in-depth knowledge of containers, Docker or Kubernetes as a DevOps Engineer or a Site Reliability Engineer (SRE). Nevertheless, it is worth having container knowledge at a basic level — because these are 4 examples of where you will come into contact with it sooner or later:

Model deployment

You are training a model. You not only want to use it locally but also make it available to others. To do this, you can pack it into a container and make it available via a REST API.

Let’s look at a concrete example: Your trained model runs in a Docker container with FastAPI or Flask. The server receives the requests, processes the data and returns ML predictions in real-time.
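To make this tangible, here is a minimal sketch of what such a service could look like (not taken from the article itself; the file name model.pkl, the Features schema and the /predict route are hypothetical, and a scikit-learn-style model pickled during training is assumed):

# app.py — a minimal prediction service (sketch; all names are hypothetical)
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained model once at startup (assumes model.pkl was copied into the container)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # scikit-learn models expect a 2D array: one row per sample
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

Started with uvicorn app:app inside the container, the server receives requests, runs the model and returns the predictions as JSON.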

Reproducibility and easier collaboration

ML models and pipelines require specific libraries. For example, if you want to use a deep learning model like a Transformer, you need TensorFlow or PyTorch. If you want to train and evaluate classic machine learning models, you need Scikit-Learn, NumPy and Pandas. A Docker container now ensures that your code runs with exactly the same dependencies on every computer, server or in the cloud. You can also deploy a Jupyter Notebook environment as a container so that other people can access it and use exactly the same packages and settings.
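A common way to achieve this, sketched here with hypothetical version numbers, is to pin exact library versions in a requirements.txt and install them inside the image, so that every build resolves identical dependencies:

# requirements.txt — pinned versions (example)
scikit-learn==1.4.2
numpy==1.26.4
pandas==2.2.2

and in the Dockerfile:

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt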

Cloud integration

Containers include all packages, dependencies and configurations that an application requires. They therefore run uniformly on local computers, servers or cloud environments. This means you don’t have to reconfigure the environment.

For example, you write a data pipeline script. This works locally for you. As soon as you deploy it as a container, you can be sure that it will run in exactly the same way on AWS, Azure, GCP or the IBM Cloud.
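In practice, the handover to the cloud usually runs through a container registry. A sketch with a hypothetical image and registry name:

docker build -t my-pipeline .
docker tag my-pipeline registry.example.com/team/my-pipeline:1.0
docker push registry.example.com/team/my-pipeline:1.0

On the cloud machine, the exact same environment is then pulled and started:

docker pull registry.example.com/team/my-pipeline:1.0
docker run registry.example.com/team/my-pipeline:1.0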

Scaling with Kubernetes

Kubernetes helps you to orchestrate containers. But more on that below. If you now get a lot of requests for your ML model, you can scale it automatically with Kubernetes. This means that more instances of the container are started.
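To give a taste of what that looks like: assuming a Kubernetes Deployment named my-model already exists (a hypothetical name), a single command sets up automatic scaling:

kubectl autoscale deployment my-model --cpu-percent=80 --min=2 --max=10

Kubernetes then starts additional container instances when average CPU load exceeds the threshold and removes them again when traffic drops.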

3 — First Practice, then Theory: Container creation even without much prior knowledge

Let’s take a look at an example that anyone can run through with minimal time — even if you haven’t heard much about Docker and containers. It took me 30 minutes.

We’ll set up a Jupyter Notebook inside a Docker container, creating a portable, reproducible Data Science environment. Once it’s up and running, we can easily share it with others and ensure that everyone works with the exact same setup.

0 — Install Docker Desktop and create a project directory

To be able to use containers, we need Docker Desktop. To do this, we download Docker Desktop from the official website.

Now we create a new folder for the project. You can do this directly in your file manager; I do it via the terminal — on Windows, press Windows + R and open CMD.

We use the following command:

mkdir jupyter-docker

1. Create a Dockerfile

Now we open VS Code or another editor and create a new file with the name ‘Dockerfile’. We save this file without an extension in the same directory. It doesn’t need an extension because Docker looks for a file with exactly this name by default.

We add the following code to this file:

# Use the official Jupyter notebook image with SciPy
FROM jupyter/scipy-notebook:latest  

# Set the working directory inside the container
WORKDIR /home/jovyan/work  

# Copy all local files into the container
COPY . .

# Start Jupyter Notebook without token
CMD ["start-notebook.sh", "--NotebookApp.token=''"]

We have thus defined a container environment for Jupyter Notebook that is based on the official Jupyter SciPy Notebook image.

First, we define with FROM on which base image the container is built. jupyter/scipy-notebook:latest is a preconfigured Jupyter notebook image and contains libraries such as NumPy, SciPy, Matplotlib and Pandas. Alternatively, we could also use a different image here.

With WORKDIR we set the working directory within the container. /home/jovyan/work is the default path used by Jupyter. User jovyan is the default user in Jupyter Docker images. Another directory could also be selected — but this directory is best practice for Jupyter containers.

With COPY . . we copy all files from the local directory — in this case the Dockerfile, which is located in the jupyter-docker directory — to the working directory /home/jovyan/work in the container.

With CMD ["start-notebook.sh", "--NotebookApp.token=''"] we specify the default start command for the container: it runs the start script for Jupyter Notebook and starts the notebook without a token — this allows us to access it directly via the browser.

2. Create the Docker image

Next, we will build the Docker image. Make sure you have the previously installed Docker Desktop open. We now go back to the terminal and use the following commands:

cd jupyter-docker
docker build -t my-jupyter .

With cd jupyter-docker we navigate to the folder we created earlier. With docker build we create a Docker image from the Dockerfile. With -t my-jupyter we give the image a name. The dot at the end tells Docker to use the current directory as the build context — this is where Docker looks for the Dockerfile and the files to copy. Note the space between the image name and the dot.

The Docker image is the template for the container. This image contains everything the application needs: the operating system base (e.g. Ubuntu), the language runtime and tools (Python, Jupyter), dependencies such as Pandas and NumPy, the application code and the startup commands. When we “build” a Docker image, Docker reads the Dockerfile and executes the steps that we have defined there. The container can then be started from this template (the Docker image).

We can now watch the Docker image being built in the terminal.


We use the following command to check whether the image exists:

docker images

If my-jupyter appears in the output, the image was created successfully, and the command also shows its details such as the repository name, tag, image ID and size.
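Each instruction in the Dockerfile became a layer of this image. You can list those layers with:

docker history my-jupyter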

3. Start Jupyter container

Next, we want to start the container and use this command to do so:

docker run -p 8888:8888 my-jupyter

We start a container with docker run. After the command, we enter the name of the image we want to run — in our case my-jupyter. With -p 8888:8888 we map the local port (8888) to the port inside the container (8888), the port Jupyter runs on, so we can reach the notebook from the browser.
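If you prefer to get your terminal back, the same container can also be started in the background with the -d flag; the container name used here is just an example:

docker run -d -p 8888:8888 --name my-jupyter-container my-jupyter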

Alternatively, you can also perform this step in Docker Desktop by selecting the image and clicking Run.

4. Open Jupyter Notebook & create a test notebook

Now we open the URL http://localhost:8888 in the browser. You should now see the Jupyter Notebook interface.

Here we will now create a Python 3 notebook and insert the following Python code into it.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
y = np.sin(x)

plt.plot(x, y)
plt.title("Sine Wave")
plt.show()

Running the code will display the sine curve.

5. Terminate the container

At the end, we stop the container either with ‘CTRL + C’ in the terminal or in Docker Desktop.

With docker ps we can check in the terminal whether containers are still running, and with docker ps -a we can also display the container that has just been stopped.
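A container that was started in the background can also be stopped from the terminal; the name or ID comes from the docker ps output (here the example name from above):

docker stop my-jupyter-container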

6. Share your Docker image

If you now want to upload your Docker image to a registry, you can do this with the following commands. They will upload your image to Docker Hub (you need a Docker Hub account for this). Alternatively, you can push it to a private registry such as AWS Elastic Container Registry, Google Container Registry, Azure Container Registry or IBM Cloud Container Registry.

docker login

docker tag my-jupyter your-dockerhub-name/my-jupyter:latest

docker push your-dockerhub-name/my-jupyter:latest

If you then open Docker Hub and go to your repositories in your profile, the image should be visible.

This was a very simple example to get started with Docker. If you want to dive a little deeper, you can deploy a trained ML model with FastAPI via a container.
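As a sketch of that next step, assuming the FastAPI service from section 2 saved as app.py and a requirements.txt listing fastapi, uvicorn and scikit-learn (all hypothetical names), the Dockerfile could look like this:

# Use a slim Python base image
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the trained model into the image
COPY . .

# Serve the model on port 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]

Built with docker build -t my-model-api . and started with docker run -p 8000:8000 my-model-api, the model answers prediction requests on http://localhost:8000 on any machine with a container runtime.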

4 — Your 101 Cheatsheet: The most important Docker commands & concepts at a glance

You can actually think of a container like a shipping container. Regardless of whether you load it onto a ship (local computer), a truck (cloud server) or a train (data center) — the content always remains the same.

The most important Docker terms

  • Container: Lightweight, isolated environment for applications that contains all dependencies.
  • Docker: The most popular container platform that allows you to create and manage containers.
  • Docker Image: A read-only template that contains code, dependencies and system libraries.
  • Dockerfile: Text file with commands to create a Docker image.
  • Kubernetes: Orchestration tool to manage many containers automatically.

The basic concepts behind containers

  • Isolation: Each container contains its own processes, libraries and dependencies.
  • Portability: Containers run wherever a container runtime is installed.
  • Reproducibility: You can create a container once and it runs exactly the same everywhere.

The most basic Docker commands

docker --version # Check if Docker is installed
docker ps # Show running containers
docker ps -a # Show all containers (including stopped ones)
docker images # List of all available images
docker info # Show system information about the Docker installation

docker run hello-world # Start a test container
docker run -d -p 8080:80 nginx # Start Nginx in the background (-d) with port forwarding
docker run -it ubuntu bash # Start interactive Ubuntu container with bash

docker pull ubuntu # Load an image from Docker Hub
docker build -t my-app . # Build an image from a Dockerfile
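A few more commands you will reach for almost daily (added as common examples; replace the placeholders with your own container IDs and image names):

docker stop <container-id> # Stop a running container
docker rm <container-id> # Remove a stopped container
docker logs <container-id> # Show a container's output
docker exec -it <container-id> bash # Open a shell inside a running container
docker tag my-app user/my-app:latest # Tag an image for a registry
docker push user/my-app:latest # Upload the tagged image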

Final Thoughts: Key takeaways as a data scientist

👉 With Containers you can solve the “It works on my machine” problem. Containers ensure that ML models, data pipelines, and environments run identically everywhere, independent of OS or dependencies.

👉 Containers are more lightweight and flexible than virtual machines. While VMs come with their own operating system and consume more resources, containers share the host operating system and start faster.

👉 There are three key steps when working with containers: Create a Dockerfile to define the environment, use docker build to create an image, and run it with docker run — optionally pushing it to a registry with docker push.

And then there’s Kubernetes, a term that comes up a lot in this context: an orchestration tool that automates container management, ensuring scalability, load balancing and fault recovery. This is particularly useful for microservices and cloud applications.

Before Docker, VMs were the go-to solution (see more in ‘Virtualization & Containers for Data Science Newbies’). VMs offer strong isolation, but require more resources and start slower.

Docker was developed in 2013 by Solomon Hykes to solve this problem. Instead of virtualizing entire operating systems, containers run independently of the environment — whether on your laptop, a server or in the cloud. They contain all the necessary dependencies so that they work consistently everywhere.

I simplify tech for curious minds 🚀 If you enjoy my tech insights on Python, data science, data engineering, machine learning and AI, consider subscribing to my Substack.

Where Can You Continue Learning?
