
No More Tableau Downtime: Metadata API for Proactive Data Health


In today’s world, the reliability of data solutions is everything. When we build dashboards and reports, everyone expects the numbers reflected there to be correct and up-to-date. Based on these numbers, insights are drawn and actions are taken. If, for any unforeseen reason, the dashboards are broken or the numbers are incorrect, it becomes a fire-fight to fix everything. If the issues are not fixed in time, it damages the trust placed in the data team and their solutions.

But why would dashboards break or show wrong numbers? If the dashboard was built correctly the first time, then 99% of the time the issue comes from the data that feeds the dashboards: the data warehouse. Some possible scenarios are:

  • A few ETL pipelines failed, so the new data has not yet arrived in the warehouse
  • A table was replaced with a new one
  • Some columns in a table were dropped or renamed
  • Schemas in the data warehouse have changed
  • And many more.

There is still a chance that the issue is on the Tableau side, but in my experience, most of the time it is due to some change in the data warehouse. Even though we know the root cause, it’s not always straightforward to start working on a fix. There is no central place where you can check which Tableau data sources rely on specific tables. If you have the Tableau Data Management add-on, it could help, but from what I know, it’s hard to find dependencies of custom SQL queries used in data sources.

In any case, the add-on is expensive and most companies don’t have it. The real pain begins when you have to go through all the data sources manually to start fixing the issue. On top of that, you have a string of users impatiently waiting for a quick fix. The fix itself might not be difficult; it is just time-consuming.

What if we could anticipate these issues and identify impacted data sources before anyone notices a problem? Wouldn’t that just be great? Well, there is a way now with the Tableau Metadata API. The Metadata API uses GraphQL, a query language for APIs that returns only the data that you’re interested in. For more info on what’s possible with GraphQL, do check out GraphQL.org.

In this blog post, I’ll show you how to connect to the Tableau Metadata API using Python’s Tableau Server Client (TSC) library to proactively identify data sources using specific tables, so that you can act fast before any issues arise. Once you know which Tableau data sources are affected by a specific table, you can make some updates yourself or alert the owners of those data sources about the upcoming changes so they can be prepared for it.

Connecting to the Tableau Metadata API

Let’s connect to the Tableau Server using TSC. First, import all the libraries we need for the exercise:

### Import all required libraries
import tableauserverclient as t
import pandas as pd
import json
import ast
import re

In order to connect to the Metadata API, you first have to create a personal access token in your Tableau account settings. Then update the token name and token secret below with the ones you just created, along with your Tableau site id. If the connection is established successfully, “Connected” will be printed in the output window.

### Connect to Tableau server using personal access token
tableau_auth = t.PersonalAccessTokenAuth("", "",    # token name, token secret
                                         site_id="")
server = t.Server("https://dub01.online.tableau.com/", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    print("Connected")

Let’s now get a list of all the data sources published on your site. There are many attributes you can fetch, but for the current use case let’s keep it simple and only get the id, name, and owner contact information for every data source. This will be our master list, to which we will add all other information.

############### Get all the list of data sources on your Site

all_datasources_query = """ {
  publishedDatasources {
    name
    id
    owner {
      name
      email
    }
  }
}"""
with server.auth.sign_in(tableau_auth):
    result = server.metadata.query(
        all_datasources_query
    )

Since I want this blog to be focused on how to proactively identify which data sources are affected by a specific table, I’ll not be going into the nuances of the Metadata API. To better understand how the query works, you can refer to Tableau’s own very detailed Metadata API documentation.

One thing to note is that the Metadata API returns data in a JSON format. Depending on what you are querying, you’ll end up with multiple nested JSON lists, and it can get very tricky to convert this into a pandas dataframe. For the above metadata query, you will end up with a result that looks like the below (this is mock data, just to give you an idea of what the output looks like):

{
  "data": {
    "publishedDatasources": [
      {
        "name": "Sales Performance DataSource",
        "id": "f3b1a2c4-1234-5678-9abc-1234567890ab",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Orders DataSource",
        "id": "a4d2b3c5-2345-6789-abcd-2345678901bc",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      },
      {
        "name": "Product Returns and Profitability",
        "id": "c5e3d4f6-3456-789a-bcde-3456789012cd",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Segmentation Analysis",
        "id": "d6f4e5a7-4567-89ab-cdef-4567890123de",
        "owner": {
          "name": "Charlie Lee",
          "email": "[email protected]"
        }
      },
      {
        "name": "Regional Sales Trends (Custom SQL)",
        "id": "e7a5f6b8-5678-9abc-def0-5678901234ef",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      }
    ]
  }
}

We need to convert this JSON response into a dataframe so that it’s easy to work with. Notice that we need to extract the name and email of the owner from inside the owner object.

### We need to convert the response into dataframe for easy data manipulation

col_names = result['data']['publishedDatasources'][0].keys()
master_df = pd.DataFrame(columns=col_names)

for i in result['data']['publishedDatasources']:
    tmp_dt = {k:v for k,v in i.items()}
    master_df = pd.concat([master_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the owner name and email from the owner object
master_df['owner_name'] = master_df['owner'].apply(lambda x: x.get('name') if isinstance(x, dict) else None)
master_df['owner_email'] = master_df['owner'].apply(lambda x: x.get('email') if isinstance(x, dict) else None)

master_df.reset_index(inplace=True)
master_df.drop(['index','owner'], axis=1, inplace=True)
print('There are ', master_df.shape[0] , ' datasources in your site')

This is how the structure of master_df would look:

Sample output of code
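As an aside, if you prefer not to loop and concat, pandas’ json_normalize can flatten the same nested response in one call. Below is a sketch on a two-row slice of the mock output from above; the nested owner object becomes dotted columns that we simply rename.

```python
import pandas as pd

# Two rows of the mock API response from above
result = {
    "data": {
        "publishedDatasources": [
            {"name": "Sales Performance DataSource",
             "id": "f3b1a2c4-1234-5678-9abc-1234567890ab",
             "owner": {"name": "Alice Johnson", "email": "[email protected]"}},
            {"name": "Customer Orders DataSource",
             "id": "a4d2b3c5-2345-6789-abcd-2345678901bc",
             "owner": {"name": "Bob Smith", "email": "[email protected]"}},
        ]
    }
}

# json_normalize flattens the nested owner dict into "owner.name" / "owner.email"
master_df = pd.json_normalize(result["data"]["publishedDatasources"])
master_df = master_df.rename(columns={"owner.name": "owner_name",
                                      "owner.email": "owner_email"})
print(master_df.columns.tolist())
```

Either route ends up with the same four columns; the loop version in the post just makes the intermediate steps more explicit.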

Once we have the main list ready, we can go ahead and start getting the names of the tables embedded in the data sources. If you are an avid Tableau user, you know that there are two ways of selecting tables in a Tableau data source: one is to directly choose the tables and establish a relation between them, and the other is to use a custom SQL query with one or more tables to achieve a new resultant table. Therefore, we need to address both cases.

Processing of Custom SQL query tables

Below is the query to get the list of all custom SQLs used on the site along with their data sources. Notice that I have filtered the list to get only the first 500 custom SQL queries. In case there are more in your org, you will have to use an offset to get the next set of custom SQL queries. There is also the option of using the cursor method for pagination when you want to fetch a large list of results (refer here). For the sake of simplicity, I just use the offset method, as I know there are fewer than 500 custom SQL queries used on the site.

# Get the data sources and the table names from all the custom sql queries used on your Site

custom_table_query = """  {
  customSQLTablesConnection(first: 500){
    nodes {
        id
        name
        downstreamDatasources {
        name
        }
        query
    }
  }
}
"""

with server.auth.sign_in(tableau_auth):
    custom_table_query_result = server.metadata.query(
        custom_table_query
    )
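If your site does have more than 500 custom SQL queries, a single first: 500 call will silently truncate the list. A cursor-based loop is the more robust option; below is a minimal sketch. The pageInfo { hasNextPage endCursor } fields follow the Metadata API’s GraphQL connection model, but treat the exact shape as an assumption to verify against Tableau’s pagination docs for your server version.

```python
def fetch_all_custom_sql(server, page_size=100):
    """Page through customSQLTablesConnection using cursors instead of offsets."""
    nodes, cursor = [], None
    while True:
        # After the first page, pass the cursor returned by the previous page
        after = ', after: "%s"' % cursor if cursor else ""
        query = """ {
          customSQLTablesConnection(first: %d%s){
            nodes {
                id
                name
                downstreamDatasources { name }
                query
            }
            pageInfo { hasNextPage endCursor }
          }
        } """ % (page_size, after)
        conn = server.metadata.query(query)["data"]["customSQLTablesConnection"]
        nodes.extend(conn["nodes"])
        if not conn["pageInfo"]["hasNextPage"]:
            return nodes
        cursor = conn["pageInfo"]["endCursor"]
```

You would call this inside the same server.auth.sign_in(tableau_auth) block as the query above.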

Based on our mock data, this is how our output would look like:

{
  "data": {
    "customSQLTablesConnection": {
      "nodes": [
        {
          "id": "csql-1234",
          "name": "RegionalSales_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Regional Sales Trends (Custom SQL)"
            }
          ],
          "query": "SELECT r.region_name, SUM(s.sales_amount) AS total_sales FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id GROUP BY r.region_name"
        },
        {
          "id": "csql-5678",
          "name": "ProfitabilityAnalysis_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Product Returns and Profitability"
            }
          ],
          "query": "SELECT p.product_category, SUM(s.profit) AS total_profit FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Products p ON s.product_id = p.product_id GROUP BY p.product_category"
        },
        {
          "id": "csql-9101",
          "name": "CustomerSegmentation_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Segmentation Analysis"
            }
          ],
          "query": "SELECT c.customer_id, c.location, COUNT(o.order_id) AS total_orders FROM ecommerce.sales_data.Customers c JOIN ecommerce.sales_data.Orders o ON c.customer_id = o.customer_id GROUP BY c.customer_id, c.location"
        },
        {
          "id": "csql-3141",
          "name": "CustomerOrders_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT o.order_id, o.customer_id, o.order_date, o.sales_amount FROM ecommerce.sales_data.Orders o WHERE o.order_status = 'Completed'"
        },
        {
          "id": "csql-3142",
          "name": "CustomerProfiles_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT c.customer_id, c.customer_name, c.segment, c.location FROM ecommerce.sales_data.Customers c WHERE c.active_flag = 1"
        },
        {
          "id": "csql-3143",
          "name": "CustomerReturns_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT r.return_id, r.order_id, r.return_reason FROM ecommerce.sales_data.Returns r"
        }
      ]
    }
  }
}

Just like before, when we were creating the master list of data sources, here too we have nested JSON for the downstream data sources, from which we need to extract only the “name” part. In the “query” column, the entire custom SQL is dumped. With a regex pattern, we can easily search for the names of the tables used in the query.

We know that table names always come after a FROM or a JOIN clause, and they generally follow the format <database>.<schema>.<table>. The database part is optional and most of the time not used. There were some queries I found which used that full format, and I ended up getting only the database and schema names, not the complete table name. Once we have extracted the names of the data sources and the names of the tables, we need to merge the rows per data source, as there can be multiple custom SQL queries used in a single data source.
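To see the extraction idea in isolation before wiring it into the dataframe, here it is run against one of the mock queries from above (the pattern is the same one used in the next code block; adjust it if your tables are quoted or named differently):

```python
import re

def extract_tables(sql):
    # Match database.schema.table or schema.table after FROM/JOIN, skipping aliases
    pattern = r'(?:FROM|JOIN)\s+((?:\[\w+\]|\w+)\.(?:\[\w+\]|\w+)(?:\.(?:\[\w+\]|\w+))?)\b'
    return sorted(set(re.findall(pattern, sql, re.IGNORECASE)))

sql = ("SELECT r.region_name, SUM(s.sales_amount) AS total_sales "
       "FROM ecommerce.sales_data.Sales s "
       "JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id "
       "GROUP BY r.region_name")

print(extract_tables(sql))
# ['ecommerce.sales_data.Regions', 'ecommerce.sales_data.Sales']
```

Note that the aliases s and r are not captured, because the match stops at the word boundary after the dotted name.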

### Convert the custom sql response into dataframe
col_names = custom_table_query_result['data']['customSQLTablesConnection']['nodes'][0].keys()
cs_df = pd.DataFrame(columns=col_names)

for i in custom_table_query_result['data']['customSQLTablesConnection']['nodes']:
    tmp_dt = {k:v for k,v in i.items()}

    cs_df = pd.concat([cs_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the data source name where the custom sql query was used
cs_df['data_source'] = cs_df.downstreamDatasources.apply(lambda x: x[0]['name'] if x and 'name' in x[0] else None)
cs_df.reset_index(inplace=True)
cs_df.drop(['index','downstreamDatasources'], axis=1,inplace=True)

### We need to extract the table names from the sql query. We know the table name comes after a FROM or JOIN clause
# Note that the name of the table can be of the format <database>.<schema>.<table>
# Depending on the format of how the table is called, you will have to modify the regex expression

def extract_tables(sql):
    # Regex to match database.schema.table or schema.table, avoiding aliases
    pattern = r'(?:FROM|JOIN)\s+((?:\[\w+\]|\w+)\.(?:\[\w+\]|\w+)(?:\.(?:\[\w+\]|\w+))?)\b'
    matches = re.findall(pattern, sql, re.IGNORECASE)
    return list(set(matches))  # Unique table names

cs_df['customSQLTables'] = cs_df['query'].apply(extract_tables)
cs_df = cs_df[['data_source','customSQLTables']]

# We need to merge datasources as there can be multiple custom sqls used in the same data source
cs_df = cs_df.groupby('data_source', as_index=False).agg({
    'customSQLTables': lambda x: list(set(item for sublist in x for item in sublist))  # Flatten & make unique
})

print('There are', cs_df.shape[0], 'data sources with custom SQL used in them')
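As a quick sanity check on the regex, we can run extract_tables on a made-up query (the query text and table names below are purely illustrative, not from any real data source):

```python
import re

def extract_tables(sql):
    # Regex to match database.schema.table or schema.table (with optional [brackets]), avoiding aliases
    pattern = r'(?:FROM|JOIN)\s+((?:\[\w+\]|\w+)\.(?:\[\w+\]|\w+)(?:\.(?:\[\w+\]|\w+))?)\b'
    matches = re.findall(pattern, sql, re.IGNORECASE)
    return list(set(matches))  # Unique table names

# A hypothetical custom SQL query for illustration
sample_query = """
    SELECT o.order_id, c.name
    FROM analytics.public.orders o
    JOIN public.customers c ON o.customer_id = c.id
"""
print(sorted(extract_tables(sample_query)))
# Picks up both the three-part and the two-part table references
```

Note that table aliases (o and c above) are not captured, because the pattern only matches the dotted name immediately after FROM or JOIN.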

After we perform all the above operations, this is how the structure of cs_df would look:

Sample output of code

Processing of Regular Tables in Data Sources

Now we need to get the list of all the regular tables used in a data source which are not part of custom SQL. There are two ways to go about it: either use the publishedDatasources object and check for upstreamTables, or use DatabaseTable and check for upstreamDatasources. I'll go with the first method because I want the results at a data source level (basically, I want some code ready to reuse when I want to check a specific data source in further detail). Here again, for the sake of simplicity, instead of going for pagination, I'm looping through each data source to ensure I have everything. We get the upstreamTables nested inside the fields object, so that has to be cleaned out.

############### Get the data sources with the regular table names used in your site

### It's best to extract the table information for every data source separately and then merge the results.
# Since we only get the table information nested under fields, if there are hundreds of fields
# used in a single data source, we would hit the response limits and not be able to retrieve all the data.

data_source_list = master_df.name.tolist()

col_names = ['name', 'id', 'extractLastUpdateTime', 'fields']
ds_df = pd.DataFrame(columns=col_names)

with server.auth.sign_in(tableau_auth):
    for ds_name in data_source_list:
        query = """ {
            publishedDatasources (filter: { name: \"""" + ds_name + """\" }) {
            name
            id
            extractLastUpdateTime
            fields {
                name
                upstreamTables {
                    name
                }
            }
            }
        } """
        ds_name_result = server.metadata.query(query)
        for i in ds_name_result['data']['publishedDatasources']:
            tmp_dt = {k: v for k, v in i.items() if k != 'fields'}
            tmp_dt['fields'] = json.dumps(i['fields'])
            ds_df = pd.concat([ds_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

ds_df.reset_index(inplace=True)

This is how the structure of ds_df would look:

Sample output of code

We now need to flatten the fields object and extract the field names as well as the table names. Since the table names repeat multiple times, we have to deduplicate and keep only the unique ones.

# Function to extract the values of fields and upstream tables in json lists
def extract_values(json_list, key):
    values = []
    for item in json_list:
        values.append(item[key])
    return values

ds_df["fields"] = ds_df["fields"].apply(ast.literal_eval)
ds_df['field_names'] = ds_df.apply(lambda x: extract_values(x['fields'],'name'), axis=1)
ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_values(x['fields'],'upstreamTables'), axis=1)

# Function to extract the unique table names 
def extract_upstreamTable_values(table_list):
    values = set()
    for inner_list in table_list:
        for item in inner_list:
            if 'name' in item:
                values.add(item['name'])
    return list(values)

ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_upstreamTable_values(x['upstreamTables']), axis=1)
ds_df.drop(["index","fields"], axis=1, inplace=True)

Once we do the above operations, the final structure of ds_df would look something like this:

Sample output of code

We have all the pieces and now we just have to merge them together:

###### Join all the data together
master_data = pd.merge(master_df, ds_df, how="left", on=["name","id"])
master_data = pd.merge(master_data, cs_df, how="left", left_on="name", right_on="data_source")

# Save the results to analyse further
master_data.to_excel("Tableau Data Sources with Tables.xlsx", index=False)

This is our final master_data:

Sample Output of code

Table-level Impact Analysis

Let’s say there were some schema changes on the “Sales” table and you want to know which data sources will be impacted. Then you can simply write a small function that checks whether a table is present in either of the two columns, upstreamTables or customSQLTables, like below.

def filter_rows_with_table(df, col1, col2, target_table):
    """
    Filters rows in df where target_table is part of any value in either col1 or col2 (supports partial match).
    Returns full rows (all columns retained).
    """
    return df[
        df.apply(
            lambda row: 
                (isinstance(row[col1], list) and any(target_table in item for item in row[col1])) or
                (isinstance(row[col2], list) and any(target_table in item for item in row[col2])),
            axis=1
        )
    ]
# As an example 
filter_rows_with_table(master_data, 'upstreamTables', 'customSQLTables', 'Sales')

Below is the output. You can see that 3 data sources will be impacted by this change. You can also alert the data source owners, Alice and Bob, in advance so they can start working on a fix before something breaks on the Tableau dashboards.

Sample output of code

You can check out the complete version of the code in my GitHub repository here.

This is just one of the potential use cases of the Tableau Metadata API. You can also extract the field names used in custom SQL queries and add them to the dataset to get a field-level impact analysis. One can also monitor stale data sources with the extractLastUpdateTime field, to see whether they have issues or should be archived if they are no longer used. We can also use the dashboards object to fetch information at the dashboard level.
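As one example, the stale data source check could look something like the sketch below. This is only an illustration: the 90-day threshold, the helper name, and the demo data are all assumptions, not part of the original code, and it assumes a dataframe with the extractLastUpdateTime column returned by the Metadata API.

```python
import pandas as pd

def find_stale_data_sources(df, days=90):
    # Hypothetical helper: flag data sources whose extract has not refreshed
    # within the given number of days. Unparseable timestamps become NaT and
    # are excluded from the result.
    df = df.copy()
    df['extractLastUpdateTime'] = pd.to_datetime(
        df['extractLastUpdateTime'], utc=True, errors='coerce'
    )
    cutoff = pd.Timestamp.now(tz='UTC') - pd.Timedelta(days=days)
    return df[df['extractLastUpdateTime'] < cutoff]

# Made-up demo data: one fresh extract and one that is a year old
now = pd.Timestamp.now(tz='UTC')
demo = pd.DataFrame({
    'name': ['Fresh DS', 'Stale DS'],
    'extractLastUpdateTime': [
        (now - pd.Timedelta(days=5)).isoformat(),
        (now - pd.Timedelta(days=365)).isoformat(),
    ],
})
print(find_stale_data_sources(demo)['name'].tolist())
```

Run periodically against the full list of published data sources, a check like this could feed the same alerting workflow as the table-level impact analysis above.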

Final Thoughts

If you have come this far, kudos. This is just one use case of automating Tableau data management. It’s time to reflect on your own work and think about which other tasks you could automate to make your life easier. I hope this mini-project served as an enjoyable learning experience in understanding the power of the Tableau Metadata API. If you liked reading this, you might also like another one of my blog posts about Tableau, on some of the challenges I faced when dealing with big .

Also do check out my previous blog where I explored building an interactive, database-powered app with Python, Streamlit, and SQLite.


Before you go…

Follow me so you don’t miss any new posts I write in the future; you will find more of my articles on my . You can also connect with me on LinkedIn or Twitter!

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

AI dominates Gartner’s top strategic technology trends for 2026

“AI supercomputing platforms integrate CPUs, GPUs, AI ASICs, neuromorphic and alternative computing paradigms, enabling organizations to orchestrate complex workloads while unlocking new levels of performance, efficiency and innovation. These systems combine powerful processors, massive memory, specialized hardware, and orchestration software to tackle data-intensive workloads in areas like machine learning, simulation,

Read More »

IBM signs up Groq for speedy AI inferencing option

The technology involved in the partnership will let customers use watsonx capabilities in a familiar way and allow them to use their preferred tools while accelerating inference with GroqCloud, IBM stated. “This integration will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the

Read More »

Wi-Fi 8 is coming and it’s going to make AI a lot faster

Traditional Wi-Fi optimizes for 90/10 download-to-upload ratios. AI applications push toward 50/50 symmetry. Voice assistants, edge AI processing and sensor data all require consistent uplink capacity. “AI traffic looks different,” Szymanski explained. “It’s increasingly symmetric, with heavy uplink demands from these edge devices. These devices are pushing all this data

Read More »

Var Energi Sees Oil Stabilizing Next Year

Var Energi ASA, Norway’s third-biggest oil and gas company, sees oil’s supply and demand outlook stabilizing next year, with prices not dropping significantly below $60 a barrel. “We may see a short period of oversupply, but I think when you look into next year, you see that supply-demand balance coming back into line,” Chief Executive Officer Nick Walker told reporters on Tuesday. “Oil is going to be required for a long time, and the industry has not been investing enough.” Industry watchers, including the Paris-based International Energy Agency, have been predicting a flood of supplies for more than a year. Additional barrels from the Organization of the Petroleum Exporting Countries and its allies, as well as nations outside the group, are seen overwhelming cooling demand growth. Futures are heading for a third monthly loss and top traders are braced for a further slide. Lower oil prices will reduce investments and eventually slow output, Walker said, adding that “there seems to be a floor of about $60, it doesn’t go below that regardless of the volumes coming in.” Var Energi has seen several fields come online this year, including Johan Castberg in the Barents Sea and the startup of Balder X in June. Output from both fields will contribute to production rising to about 430,000 barrels of oil equivalent a day in the fourth quarter. To maintain barrels through the end of the decade, the company will sanction a total of ten new projects by year’s end, Walker said, with four already approved at break-evens of below $35 a barrel. The oil and gas company’s earnings before interest and tax climbed to $1.07 billion in the third quarter, beating analyst estimates. Var Energi plans to pay out $1.2 billion in dividends this year and in 2026. WHAT DO YOU THINK? Generated by readers, the comments included

Read More »

Utility identity fraud is a bigger problem than people think

David Maimon is head of fraud insights at SentiLink. Utility companies have become prime targets for identity-related fraud, yet the true scope of the threat remains largely hidden. In 2024, the FTC’s Consumer Sentinel Network logged about 28,000 reports of stolen identities used to open new utility accounts and another 2,000 tied to existing ones. Those numbers exceed pre-COVID levels but fall short of a pandemic-era spike. At first glance, that looks like stabilization. It’s not. The reality is worse: Utilities are not legally required to report fraud incidents, leaving much of the damage invisible. Instead, companies push customers to file complaints — something many never do, whether because the utility absorbs the loss, the consumer avoids the hassle, or, in the case of synthetic identities, no real victim exists to file at all. My research shows that the use of both stolen and synthetic identities to defraud utilities is persistent — and growing. Below I explain why utilities are prime targets, the tactics fraudsters use and how these crimes ripple across the broader fraud ecosystem. Why utilities? Utility providers are attractive targets for a mix of technical, regulatory and structural reasons. Low-friction onboarding: Compared to banks, many utilities still use limited identity verification. In the absence of strong “know your customer” rules, criminals can easily open accounts using stolen or synthetic identities. Delayed detection: Billing cycles stretch 30 to 60 days, giving scammers weeks of service before red flags appear. Fraudsters often exploit this delay and abandon accounts before utilities catch on. Service protections: Shutoff moratoriums — implemented during extreme weather or for medical “critical care” designations — protect vulnerable populations, but also shield fraudsters from enforcement. Once flagged as critical care, accounts can be nearly impossible to shut off even when fraud is suspected. Low legal risk: Utilities are not

Read More »

Repsol to Link 805-MW Wind Farms to Zaragoza Combined Cycle Plant

Repsol SA said it will hybridize its 818-megawatt (MW) combined cycle power plant in Escatron, Zaragoza, through 15 wind projects with a combined capacity of 805 MW. Repsol acquired the wind farms from Forestalia, which remains their developer until commissioning, Repsol said in a press release. Repsol plans to connect the hybridized plant to a proposed data center owned by a third party. The collaboration represents “the largest energy hybridization project in Spain and one of the largest worldwide in the northern Spanish region of Aragon”, the Spanish integrated energy company said. “This hybridization project will add more than 1,600 MW from two different energy sources, but sharing the same connection point to the grid, which will permit the complementarity of both sources to be exploited, stabilizing supply, maximizing the use of the connection point to the grid and optimizing infrastructure and maintenance”, Repsol said. “The wind farms that form part of the project are in the processing and development phase and have received a favorable Environmental Impact Statement. They will share the evacuation infrastructure to the power station, which optimizes resources and streamlines operational integration. “This project has clear synergies with the development, by a third party, of a data center in the vicinity of the power plant. For this data center, Repsol has already secured grid connection approval from the Spanish system operator, Red Electrica de España, for the connection of 402 MW of renewable energy self-consumption. This will be complemented by more than 800 MW of support energy provided through hybridization, giving this future data center one of the highest power capacities in the country”. In a separate renewable energy agreement, Repsol and Norwegian Cruise Line Holdings Ltd (NCLH) said Monday Repsol will supply renewable fuels to NCLH vessels at the Port of Barcelona for eight years.

Read More »

Phillips 66 and Kinder Plan 1st Ever California-Bound Fuel Pipe

Phillips 66 and Kinder Morgan Inc. plan to build a new pipeline system and reverse the flow on some existing conduits to haul gasoline and other fuels to California, Arizona and Nevada. As California’s in-state refining capacity dwindles, the regional market is becoming increasingly reliant on imported fuels, especially gasoline. The pipeline project hatched by Phillips 66 and Kinder will carry fuels from as far away as the Midwest to augment supplies sent by refiners in Washington State and Asia.  The project, slated for completion around 2029, would be the first pipeline system to deliver motor fuels into California, a state long considered an island disconnected from the major refining hubs of the Gulf Coast and Midwest. The investment comes amid a wave of California refinery closures or conversions — including the imminent shutdown of Phillips 66’s Los Angeles-area plant — that threaten to squeeze motorists with shrinking fuel supplies and higher pump prices. Consumers in some neighboring states also are expected to feel the impacts.  Phillips 66 and Kinder announced Monday what is known as an open season on their proposed Western Gateway Pipeline, during which bids are solicited from shippers that would reserve space on the conduit for moving fuels. The bidding window will close on Dec. 19, Phillips 66 wrote in the statement. The Western Gateway project would involve building a new line between Borger, Texas, where Phillips 66 operates a refinery, and Phoenix. Meanwhile, the flow on Kinder’s existing California-to-Arizona pipe known as SFPP will be reversed so more fuel can flow to the Golden State.  Phillips 66 also plans to reverse the flow on the Gold Pipeline that carries supplies from Texas to the St. Louis region so Midwest fuel can move west. The refiner operates fuel-making plants along the Gold Pipeline route including in

Read More »

NatGas Posts Biggest Single Day Gain Since 2022

In an EBW Analytics Group report sent to Rigzone by the EBW team on Tuesday, Eli Rubin, an energy analyst at the company, highlighted that natural gas posted the largest single day gain since 2022 on Monday. The report pointed out that the November natural gas contract closed at $3.397 per million British thermal units (MMBtu) yesterday. This marked a 38.9 cent, or 12.9 percent, increase from Friday’s close, the report outlined. “Yesterday’s 38.9¢/MMBtu price spike was the largest single-session gain for a NYMEX natural gas front-month contract since the rampant price volatility of 2022,” Rubin said in the report. “The initial price surge was due to de-risking of severe early-November price weakness, with cooler weather lessening chances of an early-November supply glut,” he added. “After clearing key technical resistance at $3.22-3.24/MMBtu, however, continued short-covering – catalyzed by cooler weather – led to further gains,” he continued. In the report, Rubin said supportive weather added 23 billion cubic feet of demand in 24 hours for the last week of October and first week of November. He pointed out, however, that “forecasts are ‘less bearish’ rather than outright supportive”. “Early-cycle production nominations dropped sharply this morning but will likely be revised higher,” Rubin noted in the report. “Weekly average LNG demand continues to flirt with all-time highs. Prices could retest the double-top from earlier this month at $3.55-3.58/MMBtu – but shorts pushed to the sidelines during yesterday’s move higher could reenter the market later this week,” he added. In an EBW report sent to Rigzone by the EBW team on Monday, Rubin, noted that a “bullish weather shift spark[ed]… [a] natural gas revival”. “After testing as low as $2.893 per MMBtu intraday Friday – and with weekend Henry Hub spot prices falling to $2.66 [per MMBtu] – a cooler weekend weather

Read More »

New Mexico, Wyoming Propose New Gas Routes for US, Asian Markets

The governors of New Mexico and Wyoming on Monday unveiled proposals to deliver Rocky Mountain natural gas to meet growing domestic and Asian demands driven by data centers and electrification. “The roadmap outlines two main routes – the Pacific Northwest Pathway and the Southwest Pathway – that leverage existing infrastructure and rights-of-way to efficiently move Rocky Mountain natural gas to expanding U.S. markets, while also offering the quickest and most affordable connections to Asian markets without relying on the Panama Canal”, said a statement posted on the website of the New Mexico Governor’s Office. The proposals, contained in the “Rocky Mountain Gas Roadmap & Implementation Playbook”, was released at a roundtable with Japanese Ambassador Shigeo Yamada. The “playbook” was an undertaking of the Western States and Tribal Nations Energy (WSTN) Initiative. “This is a very informative and detailed introduction to the potential of Rocky Mountain natural gas, and it is very much appreciated by the Japanese”, Yamada said. “I hope today’s meeting will lead to further cooperation and economic partnership between Japan and the State of New Mexico”. New Mexico Governor Lujan Grisham said, “New Mexico is ready to lead the way in unlocking the Rocky Mountain’s potential to create jobs and opportunity, strengthen international collaboration and write the next chapter in global energy”. “Putting the Rockies’ abundant energy to work benefits red and blue states by fueling our economies and the transition to clean energy”, Grisham, a Democrat, added. Wyoming Republican Governor Mark Gordon said, “We can provide natural gas resources very near-term – almost immediately – and this is just good business. It’s about moving into a future that meets climate concerns while making sure energy is there as demand continues to grow enormously”. WSTN Initiative chair Jason Sandel said, “This playbook is the culmination of years of work and is designed as a

Read More »

Why cloud and AI projects take longer and how to fix the holdups

No. 2 problem: Unrealistic expectations lead to problematic requirements Early planning and business case validation show that the requirements set for the project can’t be met, which then requires a period of redefinition before real work can start. This situation – reported by 69% of enterprises – leads to an obvious question: Is it the requirements or the project that’s the problem? Enterprises who cite this issue say it’s the former, and that it’s how the requirements are set that’s usually the cause. In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO calls the projects offered by user organizations as “invitations to AI fishing

Read More »

Riverbed tackles AI data bottleneck with new Oracle-based service

“Customers are looking for faster, more secure ways to move massive datasets so they can bring AI initiatives to life,” said Sachin Menon, Oracle’s vice president of cloud engineering, in a statement. “With Riverbed Data Express Service deployed on OCI, organizations will be able to accelerate time to value, reduce costs, and help ensure that their data remains protected.” Riverbed’s Aras explains that its Data Express Service uses post-quantum cryptography (PQC) to move petabyte-scale datasets through secure VPN tunnels to ensure that customer data remains protected during the transfer process. The technology is based on Riverbed’s SteelHead acceleration platform running RiOS 10 software. “Our cloud-optimized technology design delivers much higher data retrieval, data movement across the network, and data write rates, through highly performant data mover instances, instance parallelization and matched network fabric configurations. The design is tailored for each cloud, to ensure maximal performance can be achieved using cloud-specific product adjustments,” Aras says. “The time for preventing harvest-now, decrypt-later is now,” Aras says, referring to the security threat where encrypted data is intercepted and stored for decryption once quantum computers become powerful enough. The Riverbed service addresses use cases spanning AI model training, inference operations, and emerging agentic AI applications. Data Express is initially deployed on Oracle Cloud Infrastructure, but Riverbed said the service will orchestrate data movement across AWS, Azure, and Google Cloud Platform, as well as on-premises data centers. General availability is planned for Q4 2025.

Read More »

Roundup: Digital Realty Marks Major Milestones in AI, Quantum Computing, Data Center Development

Key features of the DRIL include: • High-Density AI and HPC Testing. The DRIL supports AI and high-performance computing (HPC) workloads with high-density colocation, accommodating workloads up to 150 kW per cabinet. • AI Infrastructure Optimization. The ePlus AI Experience Center lets businesses explore AI-specific power, cooling, and GPU resource requirements in an environment optimized for AI infrastructure. • Hybrid Cloud Validation. With direct cloud connectivity, users can refine hybrid strategies and onboard through cross connects. • AI Workload Orchestration. Customers can orchestrate AI workloads across Digital Realty’s Private AI Exchange (AIPx) for seamless integration and performance. • Latency Testing Across Locations. Enterprises can test latency scenarios for seamless performance across multiple locations and cloud destinations. The firm’s Northern Virginia campus is the primary DRIL location, but companies can also test latency scenarios between there and other remote locations. DRIL rollout to other global locations is already in progress, and London is scheduled to go live in early 2026. Digital Realty, Redeployable Launch Pathway for Veteran Technical Careers As new data centers are created, they need talented workers. To that end, Digital Realty has partnered with Redeployable, an AI-powered career platform for veterans, to expand access to technical careers in the United Kingdom and United States. The collaboration launched a Site Engineer Pathway, now live on the Redeployable platform. It helps veterans explore, prepare for, and transition into roles at Digital Realty. Nearly half of veterans leave their first civilian role within a year, often due to unclear expectations, poor skill translation, and limited support, according to Redeployable. The Site Engineer Pathway uses real-world relevance and replaces vague job descriptions with an experience-based view of technical careers. 
Veterans can engage in scenario-based “job drops” simulating real facility and system challenges so they can assess their fit for the role before applying. They

Read More »

BlackRock’s $40B data center deal opens a new infrastructure battle for CIOs

Everest Group partner Yugal Joshi said, “CIOs are under significant pressure to clearly define their data center strategy beyond traditional one-off leases. Given most of the capacity is built and delivered by fewer players, CIOs need to prepare for a higher-price market with limited negotiation power.” The numbers bear this out. Global data center costs rose to $217.30 per kilowatt per month in the first quarter of 2025, with major markets seeing increases of 17-18% year-over-year, according to CBRE. Those prices are at levels last seen in 2011-2012, and analysts expect them to remain elevated. Gogia said, “The combination of AI demand, energy scarcity, and environmental regulation has permanently rewritten the economics of running workloads. Prices that once looked extraordinary have now become baseline.” Hyperscalers get first dibs The consolidation problem is compounded by the way capacity is being allocated. North America’s data center vacancy rate fell to 1.6% in the first half of 2025, with Northern Virginia posting just 0.76%, according to CBRE Research. More troubling for enterprises: 74.3% of capacity currently under construction is already preleased, primarily to cloud and AI providers. “The global compute market is no longer governed by open supply and demand,” Gogia said. “It is increasingly shaped by pre-emptive control. Hyperscalers and AI majors are reserving capacity years in advance, often before the first trench for power is dug. This has quietly created a two-tier world: one in which large players guarantee their future and everyone else competes for what remains.” That dynamic forces enterprises into longer planning cycles. “CIOs must forecast their infrastructure requirements with the same precision they apply to financial budgets and talent pipelines,” Gogia said. “The planning horizon must stretch to three or even five years.”

Read More »

Nvidia, Infineon partner for AI data center power overhaul

The solution is to convert power right at the GPU on the server board and to upgrade the backbone to 800 volts. That should squeeze more reliability and efficiency out of the system while dealing with the heat, Infineon stated.   Nvidia announced the 800 Volt direct current (VDC) power architecture at Computex 2025 as a much-needed replacement for the 54 Volt backbone currently in use, which is overwhelmed by the demand of AI processors and increasingly prone to failure. “This makes sense with the power needs of AI and how it is growing,” said Alvin Nguyen, senior analyst with Forrester Research. “This helps mitigate power losses seen from lower voltage and AC systems, reduces the need for materials like copper for wiring/bus bars, better reliability, and better serviceability.” Infineon says a shift to a centralized 800 VDC architecture allows for reduced power losses, higher efficiency and reliability. However, the new architecture requires new power conversion solutions and safety mechanisms to prevent potential hazards and costly server downtimes such as service and maintenance.

Read More »

Meta details cutting-edge networking technologies for AI infrastructure

ESUN initiative As part of its standardization efforts, Meta said it would be a key player in the new Ethernet for Scale-Up Networking (ESUN) initiative that brings together AMD, Arista, ARM, Broadcom, Cisco, HPE Networking, Marvell, Microsoft, NVIDIA, OpenAI and Oracle to advance the networking technology to handle the growing scale-up domain for AI systems. ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking—excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will focus on the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP wrote in a blog. ESUN will actively engage with other organizations such as Ultra-Ethernet Consortium (UEC) and long-standing IEEE 802.3 Ethernet to align open standards, incorporate best practices, and accelerate innovation, the OCP stated. Data center networking milestones The launch of ESUN is just one of the AI networking developments Meta shared at the event. Meta engineers also announced three data center networking innovations aimed at making its infrastructure more flexible, scalable, and efficient: The evolution of Meta’s Disaggregated Scheduled Fabric (DSF) to support scale-out interconnect for large AI clusters that span entire data center buildings. A new Non-Scheduled Fabric (NSF) architecture based entirely on shallow-buffer, disaggregated Ethernet switches that will support our largest AI clusters like Prometheus. The addition of Minipack3N, based on Nvidia’s Ethernet Spectrum-4 ASIC, to Meta’s portfolio of 51Tbps OCP switches that use OCP’s Switch Abstraction Interface and Meta’s Facebook Open Switching System (FBOSS) software stack. 
DSF is Meta’s open networking fabric that completely separates switch hardware, NICs, endpoints, and other networking components from the underlying network and uses OCP-SAI and FBOSS to achieve that, according to Meta. It supports Ethernet-based RoCE RDMA over Converged Ethernet (RoCE/RDMA)) to endpoints, accelerators and NICs from multiple vendors, such as Nvidia,

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren't enough skilled farm laborers to do the work that its customers need. It's been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere's autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can't find labor to fill open positions, he said. "They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they're indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. "Let me put it this way," said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. "Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better." Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they're also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we'll cover below), companies can use three or more models to
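The "LLM as a judge" pattern mentioned above can be sketched in a few lines. This is a minimal illustration, not production code: the three "models" and the judge's scoring rule are stand-in Python functions I made up for the example; in practice each would be an API call to a different hosted LLM, and the judge would itself be an LLM prompted to rank the candidates.

```python
# Sketch of a multi-model ensemble with an LLM-as-judge step.
# All model calls below are hypothetical stubs standing in for real LLM requests.

def model_a(prompt: str) -> str:
    return "Paris is the capital of France."

def model_b(prompt: str) -> str:
    return "The capital of France is Paris."

def model_c(prompt: str) -> str:
    return "Lyon is the capital of France."

def judge(prompt: str, answers: list[str]) -> str:
    # Stand-in judge: a toy keyword heuristic. A real judge would be another
    # LLM call asked to score each candidate answer against the prompt.
    def score(ans: str) -> int:
        return ans.lower().count("paris")
    return max(answers, key=score)

def answer_with_ensemble(prompt: str) -> str:
    # Fan the prompt out to several cheap models, then let the judge pick.
    candidates = [m(prompt) for m in (model_a, model_b, model_c)]
    return judge(prompt, candidates)

print(answer_with_ensemble("What is the capital of France?"))
```

The design point the article is making is cost-driven: as per-token prices fall, querying three cheaper models plus a judge can beat a single expensive model on reliability.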

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models through these two techniques and more. The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It's encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn't find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »