
Data Science: From School to Work, Part II

In my previous article, I highlighted the importance of effective project management in Python development. Now, let’s shift our focus to the code itself and explore how to write clean, maintainable code — an essential practice in professional and collaborative environments.

  • Readability & Maintainability: Well-structured code is easier to read, understand, and modify. Other developers — or even your future self — can quickly grasp the logic without struggling to decipher messy code.
  • Debugging & Troubleshooting: Organized code with clear variable names and structured functions makes it easier to identify and fix bugs efficiently.
  • Scalability & Reusability: Modular, well-organized code can be reused across different projects, allowing for seamless scaling without disrupting existing functionality.

So, as you work on your next Python project, remember:

Half of good code is Clean Code.


Introduction

Python is one of the most popular and versatile programming languages, appreciated for its simplicity, comprehensibility and large community. Whether for web development, data analysis, artificial intelligence or task automation, Python offers powerful and flexible tools that are suitable for a wide range of areas.

However, the efficiency and maintainability of a Python project depend heavily on the practices used by its developers. Poor code structure, a lack of conventions or missing documentation can quickly turn a promising project into a maintenance-intensive puzzle. It is precisely this point that makes the difference between student code and professional code.

This article presents the most important best practices for writing high-quality Python code. By following these recommendations, developers can create scripts and applications that are not only functional, but also readable, performant and easily maintainable by third parties.
Adopting these best practices right from the start of a project not only ensures better collaboration within teams, but also prepares your code to evolve with future needs. Whether you’re a beginner or an experienced developer, this guide is designed to support you in all your Python developments.


Code structuring

Good code structuring in Python is essential. There are two main project layouts: the flat layout and the src layout.

The flat layout places the source code directly in the project root without an additional folder. This approach simplifies the structure and is well suited for small scripts, quick prototypes, and projects that do not require complex packaging. However, it may lead to unintended import issues when running tests or scripts.

📂 my_project/
├── 📂 my_project/          # Package directly in the root
│   ├── 🐍 __init__.py
│   ├── 🐍 main.py          # Main entry point (if needed)
│   ├── 🐍 module1.py       # Example module
│   └── 🐍 utils.py
├── 📂 tests/               # Unit tests
│   ├── 🐍 test_module1.py
│   ├── 🐍 test_utils.py
│   └── ...
├── 📄 .gitignore           # Git ignored files
├── 📄 pyproject.toml       # Project configuration (Poetry, setuptools)
├── 📄 uv.lock              # UV lock file
├── 📄 README.md            # Main project documentation
├── 📄 LICENSE              # Project license
├── 📄 Makefile             # Automates common tasks
├── 📄 Dockerfile           # To create a Docker image
└── 📂 .github/             # GitHub Actions workflows (CI/CD)
    ├── 📂 actions/
    └── 📂 workflows/

On the other hand, the src layout (src is short for source) organizes the source code inside a dedicated src/ directory, preventing accidental imports from the working directory and ensuring a clear separation between source files and other project components such as tests or configuration files. This layout is ideal for large projects, libraries, and production-ready applications, as it enforces proper package installation and avoids import conflicts.
📂 my-project/
├── 📂 src/                        # Main source code
│   └── 📂 my_project/             # Main package
│       ├── 🐍 __init__.py         # Makes the folder a package
│       ├── 🐍 main.py             # Main entry point (if needed)
│       ├── 🐍 module1.py          # Example module
│       ├── 📂 utils/              # Utility functions
│       │   ├── 🐍 __init__.py
│       │   ├── 🐍 data_utils.py   # Data functions
│       │   ├── 🐍 io_utils.py     # Input/output functions
│       │   └── ...
│       └── ...
├── 📂 tests/                      # Unit tests
│   ├── 🐍 test_module1.py
│   ├── 🐍 test_module2.py
│   ├── 🐍 conftest.py             # Pytest configuration
│   └── ...
├── 📂 docs/                       # Documentation
│   ├── 📄 index.md
│   ├── 📄 architecture.md
│   ├── 📄 installation.md
│   └── ...
├── 📂 notebooks/                  # Jupyter Notebooks for exploration
│   ├── 📄 exploration.ipynb
│   └── ...
├── 📂 scripts/                    # Standalone scripts (ETL, data processing)
│   ├── 🐍 run_pipeline.py
│   ├── 🐍 clean_data.py
│   └── ...
├── 📂 data/                       # Raw or processed data (if applicable)
│   ├── 📂 raw/
│   └── 📂 processed/
├── 📄 .gitignore                  # Git ignored files
├── 📄 pyproject.toml              # Project configuration (Poetry, setuptools)
├── 📄 uv.lock                     # UV lock file
├── 📄 README.md                   # Main project documentation
├── 🐍 setup.py                    # Installation script (if applicable)
├── 📄 LICENSE                     # Project license
├── 📄 Makefile                    # Automates common tasks
├── 📄 Dockerfile                  # To create a Docker image
└── 📂 .github/                    # GitHub Actions workflows (CI/CD)
    ├── 📂 actions/
    └── 📂 workflows/

Choosing between these layouts depends on the project’s complexity and long-term goals. For production-quality code, the src/ layout is often recommended, whereas the flat layout works well for simple or short-lived projects.

You can imagine different templates that are better adapted to your use case. The important thing is to maintain the modularity of your project. Do not hesitate to create subdirectories, to group together scripts with similar functionalities, and to separate those with different uses. A good code structure ensures readability, maintainability, scalability and reusability, and helps to identify and correct errors efficiently.
Cookiecutter is an open-source tool for generating preconfigured project structures from templates. It is particularly useful for ensuring the coherence and organization of projects, especially in Python, by applying good practices from the outset. Both the flat layout and the src layout can be initialized using the uv tool.


The SOLID principles

SOLID programming is an essential approach to software development based on five basic principles that improve code quality, maintainability and scalability. These principles provide a clear framework for developing robust, flexible systems. By following the SOLID principles, you reduce the risk of complex dependencies, make testing easier, and ensure that applications can evolve more easily in the face of change. Whether you are working on a small project or a large-scale application, mastering SOLID is an important step towards adopting object-oriented programming best practices.

S — Single Responsibility Principle (SRP)

The single responsibility principle means that a class/function should manage only one thing, so that it has only one reason to change. This makes the code more maintainable and easier to read. A class/function with multiple responsibilities is difficult to understand and often a source of errors.
Example:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler


# Violates SRP
class MLPipeline:
    def __init__(self, df: pd.DataFrame, target_column: str):
        self.df = df
        self.target_column = target_column
        self.scaler = StandardScaler()
        self.model = RandomForestClassifier()

    def preprocess_data(self):
        self.df.fillna(self.df.mean(), inplace=True)  # Handle missing values
        X = self.df.drop(columns=[self.target_column])
        y = self.df[self.target_column]
        X_scaled = self.scaler.fit_transform(X)  # Feature scaling
        return X_scaled, y

    def train_model(self):
        X, y = self.preprocess_data()  # Data preprocessing inside model training
        self.model.fit(X, y)
        print("Model training complete.")

Here, the MLPipeline class has two responsibilities: preprocessing the data and training the model.

# Follows SRP
class DataPreprocessor:
    def __init__(self):
        self.scaler = StandardScaler()

    def preprocess(self, df: pd.DataFrame, target_column: str):
        df = df.copy()
        df.fillna(df.mean(), inplace=True)  # Handle missing values
        X = df.drop(columns=[target_column])
        y = df[target_column]
        X_scaled = self.scaler.fit_transform(X)  # Feature scaling
        return X_scaled, y


class ModelTrainer:
    def __init__(self, model):
        self.model = model

    def train(self, X, y):
        self.model.fit(X, y)
        print("Model training complete.")

O — Open/Closed Principle (OCP)

The open/closed principle means that a class/function must be open to extension, but closed to modification. This makes it possible to add functionality without the risk of breaking existing code.

It is not easy to develop with this principle in mind, but a good indicator for the main developer is to see more and more additions (+) and fewer and fewer removals (-) in the merge requests during project development.
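To make the OCP concrete, here is a minimal sketch (the class and function names are illustrative, not from the article, and MinMaxScaler here is a hand-rolled stand-in, not scikit-learn’s): scalers share an abstract base class, and a new scaling strategy is added by writing a new subclass, never by editing the existing code.

```python
from abc import ABC, abstractmethod


class Scaler(ABC):
    """Abstraction that concrete scalers extend."""

    @abstractmethod
    def transform(self, values: list[float]) -> list[float]:
        ...


class MinMaxScaler(Scaler):
    def transform(self, values: list[float]) -> list[float]:
        # Rescale values to the [0, 1] range.
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]


class ZScoreScaler(Scaler):
    # A new strategy is an extension: no existing class was modified.
    def transform(self, values: list[float]) -> list[float]:
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        return [(v - mean) / std for v in values]


def preprocess(values: list[float], scaler: Scaler) -> list[float]:
    # This function is closed to modification: it works with any Scaler.
    return scaler.transform(values)
```

Any future scaler plugs into preprocess() unchanged, which is exactly the “more additions, fewer removals” pattern described above.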
L — Liskov Substitution Principle (LSP)

The Liskov substitution principle states that a subclass can replace its parent class without changing the behavior of the program, ensuring that the subclass meets the expectations defined by the base class. It limits the risk of unexpected errors.

Example:

# Violates LSP
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
# Changing the width of a square violates the idea of a square.

To respect the LSP, it is better to avoid this hierarchy and use independent classes:

class Shape:
    def area(self):
        raise NotImplementedError


class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

I — Interface Segregation Principle (ISP)

The interface segregation principle states that several small classes should be built instead of one large class with methods that cannot be used in certain cases. This reduces unnecessary dependencies.

Example:

# Violates ISP
class Animal:
    def fly(self):
        raise NotImplementedError

    def swim(self):
        raise NotImplementedError

It is better to split the class Animal into several classes:

# Follows ISP
class CanFly:
    def fly(self):
        raise NotImplementedError


class CanSwim:
    def swim(self):
        raise NotImplementedError


class Bird(CanFly):
    def fly(self):
        print("Flying")


class Fish(CanSwim):
    def swim(self):
        print("Swimming")

D — Dependency Inversion Principle (DIP)

The dependency inversion principle means that a class must depend on an abstraction and not on a concrete class. This reduces the coupling between classes and makes the code more modular.
Example:

# Violates DIP
class Database:
    def connect(self):
        print("Connecting to database")


class UserService:
    def __init__(self):
        self.db = Database()

    def get_users(self):
        self.db.connect()
        print("Getting users")

Here, the db attribute of UserService depends on the concrete class Database. To respect the DIP, db has to depend on an abstraction.

# Follows DIP
class DatabaseInterface:
    def connect(self):
        raise NotImplementedError


class MySQLDatabase(DatabaseInterface):
    def connect(self):
        print("Connecting to MySQL database")


class UserService:
    def __init__(self, db: DatabaseInterface):
        self.db = db

    def get_users(self):
        self.db.connect()
        print("Getting users")


# We can easily change the database used.
db = MySQLDatabase()
service = UserService(db)
service.get_users()


PEP standards

PEPs (Python Enhancement Proposals) are technical and informative documents that describe new features, language improvements or guidelines for the Python community. Among them, PEP 8, which defines style conventions for Python code, plays a fundamental role in promoting readability and consistency in projects.

Adopting the PEP standards, especially PEP 8, not only ensures that the code is understandable to other developers, but also that it conforms to the standards set by the community. This facilitates collaboration, code reviews and long-term maintenance.

In this article, I present the most important aspects of the PEP standards, including:

  • Style conventions (PEP 8): indentation, variable naming and import organization.
  • Best practices for documenting code (PEP 257).
  • Recommendations for writing typed, maintainable code (PEP 484 and PEP 563).

Understanding and applying these standards is essential to take full advantage of the Python ecosystem and contribute to professional-quality projects.

PEP 8

PEP 8 describes coding conventions that standardize Python code, and there is a lot of documentation about it.
I will not show every recommendation in this post, only those that I judge essential when reviewing code.

Naming conventions

Variable, function and module names should be in lower case, using underscores to separate words. This typographical convention is called snake_case.

my_variable
my_new_function()
my_module

Constants are written in capital letters and defined at the beginning of the script (after the imports):

LIGHT_SPEED
MY_CONSTANT

Finally, class and exception names use the CamelCase format (a capital letter at the beginning of each word). Exception names should end with Error.

MyGreatClass
MyGreatError

Remember to give your variables names that make sense! Don’t use variable names like v1, v2, func1, i, toto… Single-character variable names are permitted for loops and indexes:

my_list = [1, 3, 5, 7, 9, 11]
for i in range(len(my_list)):
    print(my_list[i])

A more “pythonic” way of writing, to be preferred to the previous example, gets rid of the index:

my_list = [1, 3, 5, 7, 9, 11]
for element in my_list:
    print(element)

Spaces management

It is recommended to surround operators (+, -, *, /, //, %, ==, !=, >, not, in, and, or, …) with a space before AND after:

# recommended code:
my_variable = 3 + 7
my_text = "mouse"
my_text == my_variable

# not recommended code:
my_variable=3+7
my_text="mouse"
my_text==my_variable

Do not add more than one space around an operator.
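Returning to the loop examples in the naming section: when the index itself is genuinely needed, the usual pythonic idiom is enumerate rather than range(len(...)). A minimal sketch:

```python
my_list = [1, 3, 5, 7, 9, 11]

# enumerate yields (index, element) pairs, so no manual indexing is needed.
for index, element in enumerate(my_list):
    print(index, element)
```

This keeps meaningful names for both the position and the value, while avoiding off-by-one mistakes with manual indexing.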
On the other hand, there are no spaces inside square brackets, braces or parentheses:

# recommended code:
my_list[1]
my_dict["key"]
my_function(argument)

# not recommended code:
my_list[ 1 ]
my_dict[ "key" ]
my_function( argument )

A space is recommended after the characters “:” and “,”, but not before:

# recommended code:
my_list = [1, 2, 3]
my_dict = {"key1": "value1", "key2": "value2"}
my_function(argument1, argument2)

# not recommended code:
my_list = [1 , 2 , 3]
my_dict = {"key1":"value1", "key2":"value2"}
my_function(argument1 , argument2)

However, when slicing lists, we don’t put spaces around the “:”:

my_list = [1, 3, 5, 7, 9, 11]

# recommended code:
my_list[1:3]
my_list[1:4:2]
my_list[::2]

# not recommended code:
my_list[1 : 3]
my_list[1: 4:2 ]
my_list[ : :2]

Line length

For the sake of readability, we recommend writing lines of code no longer than 80 characters. However, in certain circumstances this rule can be broken; in particular, if you are working on a Dash project, it may be complicated to respect this recommendation.

The \ character can be used to break lines that are too long. For example:

my_variable = 3
if my_variable > 1 and my_variable > 0 \
        and my_variable % 2 == 1:
    print(my_variable)

Docstrings (PEP 257)

A docstring is a string that documents a module, class, function or method. It is accessible through the __doc__ attribute:

>>> my_function.__doc__
'This is a docstring.'

We always write a docstring between triple double quotes (""").

Docstring on a line

Used for simple functions or methods, it must fit on a single line, with no blank line at the beginning or end. The closing quotes are on the same line as the opening quotes, and there are no blank lines before or after the docstring.

def add(a, b):
    """Return the sum of a and b."""
    return a + b

A single-line docstring MUST NOT restate the function/method signature. Do not do:

def my_function(a, b):
    """my_function(a, b) -> list"""

Docstring on several lines

The first line should be a summary of the object being documented. An empty line follows, followed by more detailed explanations or clarifications of the arguments.

def divide(a, b):
    """Divide a by b.

    Returns the result of the division.
    Raises a ValueError if b equals 0.
    """
    if b == 0:
        raise ValueError("Only Chuck Norris can divide by 0")
    return a / b

Complete docstring

A complete docstring is made up of several parts (in this case, based on the numpydoc standard).

  • Short description: summarizes the main functionality.
  • Parameters: describes the arguments with their type, name and role.
  • Returns: specifies the type and role of the returned value.
  • Raises: documents exceptions raised by the function.
  • Notes (optional): provides additional explanations.
  • Examples (optional): contains usage examples with expected results or exceptions.

def calculate_mean(numbers: list[float]) -> float:
    """
    Calculate the mean of a list of numbers.

    Parameters
    ----------
    numbers : list of float
        A list of numerical values for which the mean is to be calculated.

    Returns
    -------
    float
        The mean of the input numbers.

    Raises
    ------
    ValueError
        If the input list is empty.

    Notes
    -----
    The mean is calculated as the sum of all elements divided by the
    number of elements.

    Examples
    --------
    Calculate the mean of a list of numbers:

    >>> calculate_mean([1.0, 2.0, 3.0, 4.0])
    2.5
    """
    if not numbers:
        raise ValueError("The input list is empty.")
    return sum(numbers) / len(numbers)

Tool to help you

VS Code’s autoDocstring extension lets you automatically create a docstring template.

PEP 484

In some programming languages, typing is mandatory when declaring a variable. In Python, typing is optional, but strongly recommended. PEP 484 introduces a type hinting system for Python, annotating the types of variables, function arguments and return values. This PEP provides a basis for improving code readability, facilitating static analysis and reducing errors.

What is typing?

Typing consists of explicitly declaring the type (float, string, etc.) of a variable. The typing module provides standard tools for defining generic types, such as Sequence, List, Union, Any, etc.
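Beyond plain containers, the typing module also covers values that may be absent or may take several types. A minimal sketch (the function names and data are illustrative, not from the article):

```python
from typing import Optional, Union


def find_user(users: dict[int, str], user_id: int) -> Optional[str]:
    # Optional[str] means: returns a str, or None when the id is unknown.
    return users.get(user_id)


def to_number(value: Union[int, str]) -> int:
    # Union[int, str] means the argument may be an int or a numeric string.
    return int(value)
```

Optional[X] is simply shorthand for Union[X, None]; annotating it makes the "no result" case explicit for both readers and static analyzers.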
To type function signatures, we use “:” for function arguments and “->” for the return type.

Here is a list of untyped functions:

def show_message(message):
    print(f"Message : {message}")

def addition(a, b):
    return a + b

def is_even(n):
    return n % 2 == 0

def list_square(numbers):
    return [x**2 for x in numbers]

def reverse_dictionary(d):
    return {v: k for k, v in d.items()}

def add_element(ensemble, element):
    ensemble.add(element)
    return ensemble

Now here’s how they should look:

from typing import Dict, List, Set

def show_message(message: str) -> None:
    print(f"Message : {message}")

def addition(a: int, b: int) -> int:
    return a + b

def is_even(n: int) -> bool:
    return n % 2 == 0

def list_square(numbers: List[int]) -> List[int]:
    return [x**2 for x in numbers]

def reverse_dictionary(d: Dict[str, int]) -> Dict[int, str]:
    return {v: k for k, v in d.items()}

def add_element(ensemble: Set[int], element: int) -> Set[int]:
    ensemble.add(element)
    return ensemble

Tool to help you

The MyPy extension automatically checks whether the use of a variable corresponds to its declared type. For example, for the following function:

def my_function(x: float) -> float:
    return x.mean()

The editor will point out that a float has no "mean" attribute.

(Image from author)

The benefit is twofold: you’ll know whether the declared type is the right one and whether the use of this variable corresponds to its type. In the above example, x must be of a type that has a mean() method (e.g. np.array).

Conclusion

In this article, we have looked at the most important principles for creating clean Python production code. A solid architecture, adherence to the SOLID principles, and compliance with PEP recommendations (at least the four discussed here) are essential for ensuring code quality. The desire for beautiful code is not (just) coquetry.
It standardizes development practices and makes teamwork and maintenance much easier. There’s nothing more frustrating than spending hours (or even days) reverse-engineering a program, deciphering poorly written code before you’re finally able to fix the bugs. By applying these best practices, you ensure that your code remains clear, scalable, and easy for any developer to work with in the future.

References

1. src layout vs flat layout
2. SOLID principles
3. Python Enhancement Proposals index

In my previous article, I highlighted the importance of effective project management in Python development. Now, let’s shift our focus to the code itself and explore how to write clean, maintainable code — an essential practice in professional and collaborative environments. 

  • Readability & Maintainability: Well-structured code is easier to read, understand, and modify. Other developers — or even your future self — can quickly grasp the logic without struggling to decipher messy code.
  • Debugging & Troubleshooting: Organized code with clear variable names and structured functions makes it easier to identify and fix bugs efficiently.
  • Scalability & Reusability: Modular, well-organized code can be reused across different projects, allowing for seamless scaling without disrupting existing functionality.

So, as you work on your next Python project, remember: 

Half of good code is Clean Code.


Introduction

Python is one of the most popular and versatile Programming languages, appreciated for its simplicity, comprehensibility and large community. Whether web development, data analysis, artificial intelligence or automation of tasks — Python offers powerful and flexible tools that are suitable for a wide range of areas.

However, the efficiency and maintainability of a Python project depends heavily on the practices used by the developers. Poor structuring of the code, a lack of conventions or even a lack of documentation can quickly turn a promising project into a maintenance and development-intensive puzzle. It is precisely this point that makes the difference between student code and professional code.

This article is intended to present the most important best practices for writing high-quality Python code. By following these recommendations, developers can create scripts and applications that are not only functional, but also readable, performant and easily maintainable by third parties.

Adopting these best practices right from the start of a project not only ensures better collaboration within teams, but also prepares your code to evolve with future needs. Whether you’re a beginner or an experienced developer, this guide is designed to support you in all your Python developments.


The code structuration

Good code structuring in Python is essential. There are two main project layouts: flat layout and src layout.

The flat layout places the source code directly in the project root without an additional folder. This approach simplifies the structure and is well-suited for small scripts, quick prototypes, and projects that do not require complex packaging. However, it may lead to unintended import issues when running tests or scripts.

📂 my_project/
├── 📂 my_project/                  # Directly in the root
│   ├── 🐍 __init__.py
│   ├── 🐍 main.py                   # Main entry point (if needed)
│   ├── 🐍 module1.py             # Example module
│   └── 🐍 utils.py
├── 📂 tests/                            # Unit tests
│   ├── 🐍 test_module1.py
│   ├── 🐍 test_utils.py
│   └── ...
├── 📄 .gitignore                      # Git ignored files
├── 📄 pyproject.toml              # Project configuration (Poetry, setuptools)
├── 📄 uv.lock                         # UV file
├── 📄 README.md               # Main project documentation
├── 📄 LICENSE                     # Project license
├── 📄 Makefile                       # Automates common tasks
├── 📄 DockerFile                   # Automates common tasks
├── 📂 .github/                        # GitHub Actions workflows (CI/CD)
│   ├── 📂 actions/               
│   └── 📂 workflows/

On the other hand, the src layout (src is the contraction of source) organizes the source code inside a dedicated src/ directory, preventing accidental imports from the working directory and ensuring a clear separation between source files and other project components like tests or configuration files. This layout is ideal for large projects, libraries, and production-ready applications as it enforces proper package installation and avoids import conflicts.

📂 my-project/
├── 📂 src/                              # Main source code
│   ├── 📂 my_project/            # Main package
│   │   ├── 🐍 __init__.py        # Makes the folder a package
│   │   ├── 🐍 main.py             # Main entry point (if needed)
│   │   ├── 🐍 module1.py       # Example module
│   │   └── ...
│   │   ├── 📂 utils/                  # Utility functions
│   │   │   ├── 🐍 __init__.py     
│   │   │   ├── 🐍 data_utils.py  # data functions
│   │   │   ├── 🐍 io_utils.py      # Input/output functions
│   │   │   └── ...
├── 📂 tests/                             # Unit tests
│   ├── 🐍 test_module1.py     
│   ├── 🐍 test_module2.py     
│   ├── 🐍 conftest.py              # Pytest configurations
│   └── ...
├── 📂 docs/                            # Documentation
│   ├── 📄 index.md                
│   ├── 📄 architecture.md         
│   ├── 📄 installation.md         
│   └── ...                     
├── 📂 notebooks/                   # Jupyter Notebooks for exploration
│   ├── 📄 exploration.ipynb       
│   └── ...                     
├── 📂 scripts/                         # Standalone scripts (ETL, data processing)
│   ├── 🐍 run_pipeline.py         
│   ├── 🐍 clean_data.py           
│   └── ...                     
├── 📂 data/                            # Raw or processed data (if applicable)
│   ├── 📂 raw/                    
│   ├── 📂 processed/
│   └── ....                                 
├── 📄 .gitignore                      # Git ignored files
├── 📄 pyproject.toml              # Project configuration (Poetry, setuptools)
├── 📄 uv.lock                         # UV file
├── 📄 README.md               # Main project documentation
├── 🐍 setup.py                       # Installation script (if applicable)
├── 📄 LICENSE                     # Project license
├── 📄 Makefile                       # Automates common tasks
├── 📄 DockerFile                   # To create Docker image
├── 📂 .github/                        # GitHub Actions workflows (CI/CD)
│   ├── 📂 actions/               
│   └── 📂 workflows/

Choosing between these layouts depends on the project’s complexity and long-term goals. For production-quality code, the src/ layout is often recommended, whereas the flat layout works well for simple or short-lived projects.

You can imagine different templates that are better adapted to your use case. It is important that you maintain the modularity of your project. Do not hesitate to create subdirectories and to group together scripts with similar functionalities and separate those with different uses. A good code structure ensures readability, maintainability, scalability and reusability and helps to identify and correct errors efficiently.

Cookiecutter is an open-source tool for generating preconfigured project structures from templates. It is particularly useful for ensuring the coherence and organization of projects, especially in Python, by applying good practices from the outset. The flat layout and src layout can be initiate using a UV tool.


The SOLID principles

SOLID programming is an essential approach to software development based on five basic principles for improving code quality, maintainability and scalability. These principles provide a clear framework for developing robust, flexible systems. By following the Solid Principles, you reduce the risk of complex dependencies, make testing easier and ensure that applications can evolve more easily in the face of change. Whether you are working on a single project or a large-scale application, mastering SOLID is an important step towards adopting object-oriented programming best practices.

S — Single Responsibility Principle (SRP)

The principle of single responsibility means that a class/function can only manage one thing. This means that it only has one reason to change. This makes the code more maintainable and easier to read. A class/function with multiple responsibilities is difficult to understand and often a source of errors.

Example:

# Violates SRP
class MLPipeline:
    def __init__(self, df: pd.DataFrame, target_column: str):
        self.df = df
        self.target_column = target_column
        self.scaler = StandardScaler()
        self.model = RandomForestClassifier()
        def preprocess_data(self):
        self.df.fillna(self.df.mean(), inplace=True)  # Handle missing values
        X = self.df.drop(columns=[self.target_column])
        y = self.df[self.target_column]
        X_scaled = self.scaler.fit_transform(X)  # Feature scaling
        return X_scaled, y
        def train_model(self):
        X, y = self.preprocess_data()  # Data preprocessing inside model training
        self.model.fit(X, y)
        print("Model training complete.")

Here, the Report class has two responsibilities: Generate content and save the file.

# Follows SRP
class DataPreprocessor:
    def __init__(self):
        self.scaler = StandardScaler()
        def preprocess(self, df: pd.DataFrame, target_column: str):
        df = df.copy()
        df.fillna(df.mean(), inplace=True)  # Handle missing values
        X = df.drop(columns=[target_column])
        y = df[target_column]
        X_scaled = self.scaler.fit_transform(X)  # Feature scaling
        return X_scaled, y


class ModelTrainer:
    def __init__(self, model):
        self.model = model
        def train(self, X, y):
        self.model.fit(X, y)
        print("Model training complete.")

O — Open/Closed Principle (OCP)

The open/close principle means that a class/function must be open to extension, but closed to modification. This makes it possible to add functionality without the risk of breaking existing code.

It is not easy to develop with this principle in mind, but a good indicator for the main developer is to see more and more additions (+) and fewer and fewer removals (-) in the merge requests during project development.

L — Liskov Substitution Principle (LSP)

The Liskov substitution principle states that a subordinate class can replace its parent class without changing the behavior of the program, ensuring that the subordinate class meets the expectations defined by the base class. It limits the risk of unexpected errors.

Example :

# Violates LSP
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
# Changing the width of a square violates the idea of a square.

To respect the LSP, it is better to avoid this hierarchy and use independent classes:

class Shape:
    def area(self):
        raise NotImplementedError


class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

I — Interface Segregation Principle (ISP)

The principle of interface separation states that several small classes should be built instead of one with methods that cannot be used in certain cases. This reduces unnecessary dependencies.

Example:

# Violates ISP
class Animal:
    def fly(self):
        raise NotImplementedError

    def swim(self):
        raise NotImplementedError

It is better to split the class Animal into several classes:

# Follows ISP
class CanFly:
    def fly(self):
        raise NotImplementedError


class CanSwim:
    def swim(self):
        raise NotImplementedError


class Bird(CanFly):
    def fly(self):
        print("Flying")


class Fish(CanSwim):
    def swim(self):
        print("Swimming")

D — Dependency Inversion Principle (DIP)

The Dependency Inversion Principle means that a class must depend on an abstraction rather than on a concrete class. This reduces coupling between classes and makes the code more modular.

Example:

# Violates DIP
class Database:
    def connect(self):
        print("Connecting to database")


class UserService:
    def __init__(self):
        self.db = Database()

    def get_users(self):
        self.db.connect()
        print("Getting users")

Here, the db attribute of UserService depends directly on the concrete class Database. To respect the DIP, db has to depend on an abstraction.

# Follows DIP
class DatabaseInterface:
    def connect(self):
        raise NotImplementedError


class MySQLDatabase(DatabaseInterface):
    def connect(self):
        print("Connecting to MySQL database")


class UserService:
    def __init__(self, db: DatabaseInterface):
        self.db = db

    def get_users(self):
        self.db.connect()
        print("Getting users")


# We can easily change the used database.
db = MySQLDatabase()
service = UserService(db)
service.get_users()

PEP standards

PEPs (Python Enhancement Proposals) are technical and informative documents that describe new features, language improvements or guidelines for the Python community. Among them, PEP 8, which defines style conventions for Python code, plays a fundamental role in promoting readability and consistency in projects.

Adopting the PEP standards, especially PEP 8, not only ensures that the code is understandable to other developers, but also that it conforms to the standards set by the community. This facilitates collaboration, code reviews and long-term maintenance.

In this article, I present the most important aspects of the PEP standards, including:

  • Style Conventions (PEP 8): Indentations, variable names and import organization.
  • Best practices for documenting code (PEP 257).
  • Recommendations for writing typed, maintainable code (PEP 484 and PEP 563).

Understanding and applying these standards is essential to take full advantage of the Python ecosystem and contribute to professional quality projects.


PEP 8

PEP 8 defines coding conventions that standardize Python code, and a lot of documentation about it already exists. I will not cover every recommendation in this post, only those I consider essential when reviewing code.

Naming conventions

Variable, function and module names should be written in lower case, using underscores to separate words. This typographical convention is called snake_case.

my_variable
my_new_function()
my_module

Constants are written in capital letters and defined at the beginning of the script (after the imports):

LIGHT_SPEED
MY_CONSTANT

Finally, class names and exception names use the CamelCase format (a capital letter at the beginning of each word). Exception names should end with Error.

MyGreatClass
MyGreatError

Remember to give your variables names that make sense! Don’t use variable names like v1, v2, func1, i, toto…

Single-character variable names are permitted for loops and indexes:

my_list = [1, 3, 5, 7, 9, 11]
for i in range(len(my_list)):
    print(my_list[i])

A more “pythonic” way of writing, to be preferred to the previous example, gets rid of the i index:

my_list = [1, 3, 5, 7, 9, 11]
for element in my_list:
    print(element)

Spaces management

It is recommended to surround operators (+, -, *, /, //, %, ==, !=, >, not, in, and, or, …) with a space before AND after:

# recommended code:
my_variable = 3 + 7
my_text = "mouse"
my_text == my_variable

# not recommended code:
my_variable=3+7
my_text="mouse"
my_text== my_variable

Do not add more than one space around an operator. On the other hand, there are no spaces inside square brackets, braces or parentheses:

# recommended code:
my_list[1]
my_dict["key"]
my_function(argument)

# not recommended code:
my_list[ 1 ]
my_dict[ "key" ]
my_function( argument )

A space is recommended after the characters “:” and “,”, but not before:

# recommended code:
my_list = [1, 2, 3]
my_dict = {"key1": "value1", "key2": "value2"}
my_function(argument1, argument2)

# not recommended code:
my_list = [1 , 2 , 3]
my_dict = {"key1":"value1", "key2":"value2"}
my_function(argument1 , argument2)

However, when slicing lists, we don’t put spaces around the “:”:

my_list = [1, 3, 5, 7, 9, 11]

# recommended code:
my_list[1:3]
my_list[1:4:2]
my_list[::2]

# not recommended code:
my_list[1 : 3]
my_list[1: 4:2 ]
my_list[ : :2]

Line length

For the sake of readability, it is recommended to keep lines of code no longer than 79 characters (the PEP 8 limit). In certain circumstances this rule can be broken; for example, if you are working on a Dash project, it may be difficult to respect this recommendation.

The backslash character (\) can be used to break lines that are too long.

For example:

my_variable = 3
if my_variable > 1 and my_variable < 10 \
        and my_variable % 2 == 1 and my_variable % 3 == 0:
    print(f"My variable is equal to {my_variable}")

Within parentheses, you can break the line without using the backslash character. This can be useful for specifying the arguments of a function or method when defining or using it:

def my_function(argument_1, argument_2,
                argument_3, argument_4):
    return argument_1 + argument_2

It is also possible to create multi-line lists or dictionaries by breaking the line after a comma:

my_list = [1, 2, 3,
           4, 5, 6,
           7, 8, 9]
my_dict = {"key1": 13,
           "key2": 42,
           "key3": -10}

Blank lines

In a script, blank lines are useful for visually separating different parts of the code. It is recommended to leave two blank lines before the definition of a function or class, and to leave a single blank line before the definition of a method (in a class). You can also leave a blank line in the body of a function to separate the logical sections of the function, but this should be used sparingly.
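For example, a hypothetical module laid out according to these rules:

```python
import math

# Two blank lines before each top-level definition.


def circle_area(radius):
    return math.pi * radius ** 2


class Circle:
    def __init__(self, radius):
        self.radius = radius

    # A single blank line before each method.
    def area(self):
        return circle_area(self.radius)
```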

Comments

Comments always begin with the # symbol followed by a space. They give clear explanations of the purpose of the code and must be kept synchronized with it: if the code is modified, the comments must be updated too (if applicable). They are placed at the same indentation level as the code they describe. Comments are complete sentences, with a capital letter at the beginning (unless the first word is a variable, which is written without a capital letter) and a period at the end. I strongly recommend writing comments in English, and it is important to be consistent between the language used for comments and the language used to name variables. Finally, comments that follow the code on the same line should be avoided wherever possible, and should be separated from the code by at least two spaces.
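For illustration, a hypothetical snippet following these conventions:

```python
# Compute the mean of the measurements.
values = [2, 4, 6]
mean = sum(values) / len(values)  # len(values) is never 0 here.
```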

Tool to help you

Ruff is a linter (code analysis tool) and formatter for Python code, written in Rust. It combines the advantages of the flake8 linter with black and isort formatting, while being faster.

Ruff has an extension on the VS Code editor.

To check your code you can type:

ruff check my_module.py

It is also possible to fix many of the reported errors automatically with ruff check --fix, and to format the file with the following command:

ruff format my_module.py

PEP 20

PEP 20: The Zen of Python is a set of 19 principles written in poetic form. They are more a coding philosophy than actual guidelines.

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one– and preferably only one –obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea — let’s do more of those!
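You can display these principles directly in the interpreter: importing the built-in this module prints the Zen of Python to stdout (captured here with a redirect so it can be inspected as a string).

```python
import contextlib
import io

# Importing the `this` module prints the Zen of Python on first import.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    import this

zen = buffer.getvalue()
```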

PEP 257

The aim of PEP 257 is to standardize the use of docstrings.

What is a docstring?

A docstring is a string that appears as the first statement after the definition of a function, class or method. Its value becomes the __doc__ special attribute of that object.

def my_function():
    """This is a docstring."""
    pass

And we have:

>>> my_function.__doc__
'This is a docstring.'

We always write docstrings between triple double quotes: """.

Docstring on a line

Used for simple functions or methods, the docstring must fit on a single line, with no blank line before or after it. The closing quotes are on the same line as the opening quotes.

def add(a, b):
    """Return the sum of a and b."""
    return a + b

A single-line docstring must not restate the function or method signature. Do not do:

def my_function(a, b):
    """my_function(a, b) -> list"""

Docstring on several lines

The first line should be a summary of the object being documented, followed by an empty line and then more detailed explanations or clarifications of the arguments.

def divide(a, b):
    """Divide a by b.

    Returns the result of the division. Raises a ValueError if b equals 0.
    """
    if b == 0:
        raise ValueError("Only Chuck Norris can divide by 0")
    return a / b

Complete Docstring

A complete docstring is made up of several parts (in this case, based on the numpydoc standard).

  1. Short description: Summarizes the main functionality.
  2. Parameters: Describes the arguments with their type, name and role.
  3. Returns: Specifies the type and role of the returned value.
  4. Raises: Documents exceptions raised by the function.
  5. Notes (optional): Provides additional explanations.
  6. Examples (optional): Contains illustrated usage examples with expected results or exceptions.

def calculate_mean(numbers: list[float]) -> float:
    """
    Calculate the mean of a list of numbers.

    Parameters
    ----------
    numbers : list of float
        A list of numerical values for which the mean is to be calculated.

    Returns
    -------
    float
        The mean of the input numbers.

    Raises
    ------
    ValueError
        If the input list is empty.

    Notes
    -----
    The mean is calculated as the sum of all elements divided by the number of elements.

    Examples
    --------
    Calculate the mean of a list of numbers:
    >>> calculate_mean([1.0, 2.0, 3.0, 4.0])
    2.5
    """
    if not numbers:
        raise ValueError("The input list must not be empty.")
    return sum(numbers) / len(numbers)

Tool to help you

VS Code’s autoDocstring extension lets you automatically create a docstring template.

PEP 484

In some programming languages, typing is mandatory when declaring a variable. In Python, typing is optional, but strongly recommended. PEP 484 introduces a typing system for Python, annotating the types of variables, function arguments and return values. This PEP provides a basis for improving code readability, facilitating static analysis and reducing errors.

What is typing?

Typing consists of explicitly declaring the type (float, str, etc.) of a variable. The typing module provides standard tools for defining generic types, such as Sequence, List, Union, Any, etc.

To type a function, we use “:” for its arguments and “->” for its return value.

Here is a set of untyped functions:

def show_message(message):
    print(f"Message : {message}")

def addition(a, b):
    return a + b

def is_even(n):
    return n % 2 == 0

def list_square(numbers):
    return [x**2 for x in numbers]

def reverse_dictionary(d):
    return {v: k for k, v in d.items()}

def add_element(ensemble, element):
    ensemble.add(element)
    return ensemble

Now here’s how they should look:

from typing import List, Tuple, Dict, Set, Any

def show_message(message: str) -> None:
    print(f"Message : {message}")

def addition(a: int, b: int) -> int:
    return a + b

def is_even(n: int) -> bool:
    return n % 2 == 0

def list_square(numbers: List[int]) -> List[int]:
    return [x**2 for x in numbers]

def reverse_dictionary(d: Dict[str, int]) -> Dict[int, str]:
    return {v: k for k, v in d.items()}

def add_element(ensemble: Set[int], element: int) -> Set[int]:
    ensemble.add(element)
    return ensemble

Tool to help you

The mypy type checker (available as a VS Code extension) automatically checks whether the use of a variable corresponds to its declared type. For example, for the following function:

def my_function(x: float) -> float:
    return x.mean()

The editor will point out that a float has no “mean” attribute.

Image from author

The benefit is twofold: you’ll know whether the declared type is the right one and whether the use of this variable corresponds to its type.

In the above example, x must be of a type that has a mean() method (e.g. np.ndarray).
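For instance, a version annotated with np.ndarray passes the check (a sketch assuming NumPy is installed; the function name is kept from the example above):

```python
import numpy as np


def my_function(x: np.ndarray) -> float:
    # np.ndarray provides a mean() method, so the type checker is satisfied.
    return float(x.mean())


result = my_function(np.array([1.0, 2.0, 3.0]))
```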


Conclusion

In this article, we have looked at the most important principles for creating clean Python production code. A solid architecture, adherence to SOLID principles, and compliance with PEP recommendations (at least the four discussed here) are essential for ensuring code quality. The desire for beautiful code is not (just) coquetry. It standardizes development practices and makes teamwork and maintenance much easier. There’s nothing more frustrating than spending hours (or even days) reverse-engineering a program, deciphering poorly written code before you’re finally able to fix the bugs. By applying these best practices, you ensure that your code remains clear, scalable, and easy for any developer to work with in the future.


References

1. src layout vs flat layout

2. SOLID principles

3. Python Enhancement Proposals index


Read More »

Microsoft tells communities it will ‘pay its way’ as AI data center resource usage sparks backlash

It will work with utilities and public commissions to set the rates it pays high enough to cover data center electricity costs (including build-outs, additions, and active use). “Our goal is straightforward: To ensure that the electricity cost of serving our data centers is not passed on to residential customers,” Smith emphasized. For example, the company is supporting a new rate structure in Wisconsin that would charge a class of “very large customers,” including data centers, the true cost of the electricity required to serve them.

It will collaborate “early, closely, and transparently” with local utilities to add electricity and supporting infrastructure to existing grids when needed. For instance, Microsoft has contracted with the Midcontinent Independent System Operator (MISO) to add 7.9GW of new electricity generation to the grid, “more than double our current consumption,” Smith noted.

It will pursue ways to make data centers more efficient. For example, it is already experimenting with AI to improve planning, extract more electricity from existing infrastructure, improve system resilience, and speed development of new infrastructure and technologies (like nuclear energy).

It will advocate for state and national public policies that ensure electricity access that is affordable, reliable, and sustainable in neighboring communities. Microsoft previously established priorities for electricity policy advocacy, Smith noted, but “progress has been uneven. This needs to change.”

Microsoft is similarly committed when it comes to data center water use, promising four actions: Reducing the overall amount of water its data centers use, initially improving it by 40% by 2030. The company is exploring innovations in cooling, including closed-loop systems that recirculate cooling liquids. It will collaborate with local utilities to map out water, wastewater, and pressure needs, and will “fully fund” infrastructure required for growth.
For instance, in Quincy, Washington, Microsoft helped construct a water reuse utility that recirculates

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »