
No More Tableau Downtime: Metadata API for Proactive Data Health


In today’s world, the reliability of data solutions is everything. When we build dashboards and reports, we expect the numbers reflected there to be correct and up to date. Insights are drawn and actions are taken based on those numbers. If, for any unforeseen reason, the dashboards break or the numbers are wrong, fixing everything turns into a fire-fight. And if the issues are not fixed in time, it damages the trust placed in the data team and their solutions.

But why would dashboards be broken or have wrong numbers? If the dashboard was built correctly the first time, then 99% of the time the issue comes from the data that feeds the dashboards — from the data warehouse. Some possible scenarios are:

  • A few ETL pipelines failed, so the new data is not in yet
  • A table has been replaced with a new one 
  • Some columns in the table have been dropped or renamed
  • Schemas in the data warehouse have changed
  • And many more.

There is still a chance that the issue is on the Tableau side, but in my experience, most of the time it is due to some change in the data warehouse. Even though we know the root cause, it’s not always straightforward to start working on a fix. There is no central place where you can check which Tableau data sources rely on specific tables. If you have the Tableau Data Management add-on, it could help, but from what I know, it’s hard to find dependencies of custom SQL queries used in data sources.

Besides, the add-on is quite expensive and most companies don’t have it. The real pain begins when you have to go through all the data sources manually to start fixing them. On top of that, you have a string of users breathing down your neck, impatiently waiting for a quick fix. The fix itself might not be difficult; it is just time-consuming.

What if we could anticipate these issues and identify impacted data sources before anyone notices a problem? Wouldn’t that just be great? Well, there is a way now with the Tableau Metadata API. The Metadata API uses GraphQL, a query language for APIs that returns only the data that you’re interested in. For more info on what’s possible with GraphQL, do check out GraphQL.org.

In this blog post, I’ll show you how to connect to the Tableau Metadata API using Python’s Tableau Server Client (TSC) library to proactively identify data sources using specific tables, so that you can act fast before any issues arise. Once you know which Tableau data sources are affected by a specific table, you can make some updates yourself or alert the owners of those data sources about the upcoming changes so they can be prepared for it.

Connecting to the Tableau Metadata API

Let’s connect to the Tableau Server using TSC. We need to import all the libraries required for the exercise!

### Import all required libraries
import tableauserverclient as t
import pandas as pd
import json
import ast
import re

In order to connect to the Metadata API, you will first have to create a personal access token in your Tableau account settings. Then update the token name and token secret placeholders with the token you just created, and the site ID placeholder with your Tableau site. If the connection is established successfully, then “Connected” will be printed in the output window.

### Connect to Tableau server using personal access token
# Replace the placeholders below with your token name, token secret and site ID
tableau_auth = t.PersonalAccessTokenAuth("<token-name>", "<token-secret>",
                                         site_id="<site-id>")
server = t.Server("https://dub01.online.tableau.com/", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    print("Connected")

Let’s now get a list of all data sources published on your site. There are many attributes you can fetch, but for the current use case, let’s keep it simple and only get the id, name, and owner contact information for every data source. This will be our master list, to which we will add all other information.

############### Get all the list of data sources on your Site

all_datasources_query = """ {
  publishedDatasources {
    name
    id
    owner {
      name
      email
    }
  }
}"""
with server.auth.sign_in(tableau_auth):
    result = server.metadata.query(
        all_datasources_query
    )

Since I want this blog to be focused on how to proactively identify which data sources are affected by a specific table, I won’t be going into the nuances of the Metadata API. To better understand how the query works, you can refer to Tableau’s own very detailed Metadata API documentation.

One thing to note is that the Metadata API returns data in JSON format. Depending on what you are querying, you’ll end up with multiple nested JSON lists, and it can get very tricky to convert them into a pandas dataframe. For the above metadata query, the result would look like the one below (this is mock data, just to give you an idea of what the output looks like):

{
  "data": {
    "publishedDatasources": [
      {
        "name": "Sales Performance DataSource",
        "id": "f3b1a2c4-1234-5678-9abc-1234567890ab",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Orders DataSource",
        "id": "a4d2b3c5-2345-6789-abcd-2345678901bc",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      },
      {
        "name": "Product Returns and Profitability",
        "id": "c5e3d4f6-3456-789a-bcde-3456789012cd",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Segmentation Analysis",
        "id": "d6f4e5a7-4567-89ab-cdef-4567890123de",
        "owner": {
          "name": "Charlie Lee",
          "email": "[email protected]"
        }
      },
      {
        "name": "Regional Sales Trends (Custom SQL)",
        "id": "e7a5f6b8-5678-9abc-def0-5678901234ef",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      }
    ]
  }
}

We need to convert this JSON response into a dataframe so that it’s easy to work with. Notice that we need to extract the name and email of the owner from inside the owner object. 

### We need to convert the response into dataframe for easy data manipulation

col_names = result['data']['publishedDatasources'][0].keys()
master_df = pd.DataFrame(columns=col_names)

for i in result['data']['publishedDatasources']:
    tmp_dt = {k:v for k,v in i.items()}
    master_df = pd.concat([master_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the owner name and email from the owner object
master_df['owner_name'] = master_df['owner'].apply(lambda x: x.get('name') if isinstance(x, dict) else None)
master_df['owner_email'] = master_df['owner'].apply(lambda x: x.get('email') if isinstance(x, dict) else None)

master_df.reset_index(inplace=True)
master_df.drop(['index','owner'], axis=1, inplace=True)
print('There are ', master_df.shape[0] , ' datasources in your site')
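As a side note, pandas can also flatten the nested owner object in a single call with json_normalize. This is a sketch of an equivalent, more compact approach; it assumes the same result dictionary as above and should yield the same name, id, owner_name, and owner_email columns:

### Alternative: flatten the nested owner object in one call with json_normalize
master_df = pd.json_normalize(result['data']['publishedDatasources'], sep='_')
print('There are ', master_df.shape[0], ' datasources in your site')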

This is how the structure of master_df would look:

Sample output of code

Once we have the main list ready, we can go ahead and start getting the names of the tables embedded in the data sources. If you are an avid Tableau user, you know that there are two ways of selecting tables in a Tableau data source: one is to directly choose the tables and establish a relationship between them, and the other is to use a custom SQL query on one or more tables to build a new resultant table. Therefore, we need to address both cases.

Processing of Custom SQL query tables

Below is the query to get the list of all custom SQL queries used on the site, along with their data sources. Notice that I have limited the result to the first 500 custom SQL queries. In case there are more in your org, you will have to use an offset, or the cursor-based pagination method, to fetch the rest (refer to Tableau’s Metadata API pagination documentation). For the sake of simplicity, I skip pagination here, as I know there are fewer than 500 custom SQL queries used on the site; a cursor-based sketch follows right after the query, in case your site has more.

# Get the data sources and the table names from all the custom sql queries used on your Site

custom_table_query = """  {
  customSQLTablesConnection(first: 500){
    nodes {
        id
        name
        downstreamDatasources {
        name
        }
        query
    }
  }
}
"""

with server.auth.sign_in(tableau_auth):
    custom_table_query_result = server.metadata.query(
        custom_table_query
    )
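In case your site does have more than 500 custom SQL queries, a cursor-based loop along the following lines can fetch them page by page. This is only a sketch based on Tableau’s pagination documentation; the pageInfo fields (hasNextPage, endCursor) are assumptions worth verifying against your server version:

### Sketch: cursor-based pagination for sites with more than 500 custom SQL queries
all_nodes = []
cursor = None

with server.auth.sign_in(tableau_auth):
    while True:
        # Add the "after" cursor only from the second page onwards
        after_clause = ', after: "' + cursor + '"' if cursor else ''
        paged_query = """ {
          customSQLTablesConnection(first: 500""" + after_clause + """){
            nodes {
                id
                name
                downstreamDatasources {
                name
                }
                query
            }
            pageInfo {
                hasNextPage
                endCursor
            }
          }
        } """
        page_result = server.metadata.query(paged_query)
        connection = page_result['data']['customSQLTablesConnection']
        all_nodes.extend(connection['nodes'])
        if not connection['pageInfo']['hasNextPage']:
            break
        cursor = connection['pageInfo']['endCursor']

The nodes collected in all_nodes can then be processed exactly like custom_table_query_result below.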

Based on our mock data, this is how the output of the first query would look:

{
  "data": {
    "customSQLTablesConnection": {
      "nodes": [
        {
          "id": "csql-1234",
          "name": "RegionalSales_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Regional Sales Trends (Custom SQL)"
            }
          ],
          "query": "SELECT r.region_name, SUM(s.sales_amount) AS total_sales FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id GROUP BY r.region_name"
        },
        {
          "id": "csql-5678",
          "name": "ProfitabilityAnalysis_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Product Returns and Profitability"
            }
          ],
          "query": "SELECT p.product_category, SUM(s.profit) AS total_profit FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Products p ON s.product_id = p.product_id GROUP BY p.product_category"
        },
        {
          "id": "csql-9101",
          "name": "CustomerSegmentation_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Segmentation Analysis"
            }
          ],
          "query": "SELECT c.customer_id, c.location, COUNT(o.order_id) AS total_orders FROM ecommerce.sales_data.Customers c JOIN ecommerce.sales_data.Orders o ON c.customer_id = o.customer_id GROUP BY c.customer_id, c.location"
        },
        {
          "id": "csql-3141",
          "name": "CustomerOrders_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT o.order_id, o.customer_id, o.order_date, o.sales_amount FROM ecommerce.sales_data.Orders o WHERE o.order_status = 'Completed'"
        },
        {
          "id": "csql-3142",
          "name": "CustomerProfiles_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT c.customer_id, c.customer_name, c.segment, c.location FROM ecommerce.sales_data.Customers c WHERE c.active_flag = 1"
        },
        {
          "id": "csql-3143",
          "name": "CustomerReturns_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT r.return_id, r.order_id, r.return_reason FROM ecommerce.sales_data.Returns r"
        }
      ]
    }
  }
}

Just like before, when we were creating the master list of data sources, here too we have nested JSON for the downstream data sources, from which we only need to extract the “name” part. The “query” column holds the entire custom SQL, so with a regex pattern we can easily search for the names of the tables used in each query.

We know that the table names always come after a FROM or a JOIN clause, and they generally follow the format database.schema.table, where the database part is optional and most of the time not used. There were some queries I found which did include the database, and I initially ended up getting only the database and schema names, not the complete table name. Once we have extracted the names of the data sources and the names of the tables, we need to merge the rows per data source, as there can be multiple custom SQL queries used in a single data source.

### Convert the custom sql response into dataframe
col_names = custom_table_query_result['data']['customSQLTablesConnection']['nodes'][0].keys()
cs_df = pd.DataFrame(columns=col_names)

for i in custom_table_query_result['data']['customSQLTablesConnection']['nodes']:
    tmp_dt = {k:v for k,v in i.items()}

    cs_df = pd.concat([cs_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the data source name where the custom sql query was used
cs_df['data_source'] = cs_df.downstreamDatasources.apply(lambda x: x[0]['name'] if x and 'name' in x[0] else None)
cs_df.reset_index(inplace=True)
cs_df.drop(['index','downstreamDatasources'], axis=1,inplace=True)

### We need to extract the table names from the sql query. We know the table name comes after the FROM or JOIN clause
# Note that the table name can be of the format database.schema.table or schema.table
# Depending on how the tables are referenced in your queries, you may have to modify the regex expression

def extract_tables(sql):
    # Regex to match database.schema.table or schema.table (square brackets allowed), avoiding aliases
    pattern = r'(?:FROM|JOIN)\s+((?:\[\w+\]|\w+)\.(?:\[\w+\]|\w+)(?:\.(?:\[\w+\]|\w+))?)\b'
    matches = re.findall(pattern, sql, re.IGNORECASE)
    return list(set(matches))  # Unique table names

cs_df['customSQLTables'] = cs_df['query'].apply(extract_tables)
cs_df = cs_df[['data_source','customSQLTables']]

# We need to merge datasources as there can be multiple custom sqls used in the same data source
cs_df = cs_df.groupby('data_source', as_index=False).agg({
    'customSQLTables': lambda x: list(set(item for sublist in x for item in sublist))  # Flatten & make unique
})

print('There are ', cs_df.shape[0], 'datasources with custom sqls used in it')
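Before trusting the regex across the whole dataframe, it is worth sanity-checking extract_tables() on one of the mock queries; with the pattern above, the output should contain the two fully qualified table names (the ordering may vary because of the set):

# Quick sanity check of extract_tables() on one of the mock custom SQL queries
sample_sql = ("SELECT r.region_name, SUM(s.sales_amount) AS total_sales "
              "FROM ecommerce.sales_data.Sales s "
              "JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id "
              "GROUP BY r.region_name")
print(extract_tables(sample_sql))
# Expected output (order may vary): ['ecommerce.sales_data.Sales', 'ecommerce.sales_data.Regions']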

After we perform all the above operations, this is how the structure of cs_df would look:

Sample output of code

Processing of regular Tables in Data Sources

Now we need to get the list of all the regular tables used in a data source that are not part of a custom SQL query. There are two ways to go about it: either use the publishedDatasources object and check for upstreamTables, or use DatabaseTable and check for upstreamDatasources. I’ll go with the first method because I want the results at a data source level (basically, I want code ready to reuse whenever I want to check a specific data source in further detail). Here again, for the sake of simplicity, instead of going for pagination I’m looping through each data source to ensure I have everything. We get the upstreamTables nested inside the fields object, so that has to be cleaned out.

############### Get the data sources with the regular table names used in your site

### It's best to extract the tables information for every data source and then merge the results.
# Since we only get the table information nested under fields, in case there are hundreds of fields 
# used in a single data source, we will hit the response limits and will not be able to retrieve all the data.

data_source_list = master_df.name.tolist()

col_names = ['name', 'id', 'extractLastUpdateTime', 'fields']
ds_df = pd.DataFrame(columns=col_names)

with server.auth.sign_in(tableau_auth):
    for ds_name in data_source_list:
        query = """ {
            publishedDatasources (filter: { name: """"+ ds_name + """" }) {
            name
            id
            extractLastUpdateTime
            fields {
                name
                upstreamTables {
                    name
                }
            }
            }
        } """
        ds_name_result = server.metadata.query(
        query
        )
        for i in ds_name_result['data']['publishedDatasources']:
            tmp_dt = {k:v for k,v in i.items() if k != 'fields'}
            tmp_dt['fields'] = json.dumps(i['fields'])
        ds_df = pd.concat([ds_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

ds_df.reset_index(inplace=True)

This is how the structure of ds_df would look:

Sample output of code

We now need to flatten out the fields object and extract the field names as well as the table names. Since the table names repeat multiple times, we have to deduplicate and keep only the unique ones.

# Function to extract the values of fields and upstream tables in json lists
def extract_values(json_list, key):
    values = []
    for item in json_list:
        values.append(item[key])
    return values

ds_df["fields"] = ds_df["fields"].apply(ast.literal_eval)
ds_df['field_names'] = ds_df.apply(lambda x: extract_values(x['fields'],'name'), axis=1)
ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_values(x['fields'],'upstreamTables'), axis=1)

# Function to extract the unique table names 
def extract_upstreamTable_values(table_list):
    values = set()
    for inner_list in table_list:
        for item in inner_list:
            if 'name' in item:
                values.add(item['name'])
    return list(values)

ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_upstreamTable_values(x['upstreamTables']), axis=1)
ds_df.drop(["index","fields"], axis=1, inplace=True)

Once we do the above operations, the final structure of ds_df would look something like this:

Sample output of code

We have all the pieces and now we just have to merge them together:

###### Join all the data together
master_data = pd.merge(master_df, ds_df, how="left", on=["name","id"])
master_data = pd.merge(master_data, cs_df, how="left", left_on="name", right_on="data_source")

# Save the results to analyse further
master_data.to_excel("Tableau Data Sources with Tables.xlsx", index=False)

This is our final master_data:

Sample Output of code

Table-level Impact Analysis

Let’s say there were some schema changes on the “Sales” table and you want to know which data sources will be impacted. Then you can simply write a small function that checks whether a table is present in either of the two columns, upstreamTables or customSQLTables, like the one below.

def filter_rows_with_table(df, col1, col2, target_table):
    """
    Filters rows in df where target_table is part of any value in either col1 or col2 (supports partial match).
    Returns full rows (all columns retained).
    """
    return df[
        df.apply(
            lambda row: 
                (isinstance(row[col1], list) and any(target_table in item for item in row[col1])) or
                (isinstance(row[col2], list) and any(target_table in item for item in row[col2])),
            axis=1
        )
    ]
# As an example 
filter_rows_with_table(master_data, 'upstreamTables', 'customSQLTables', 'Sales')

Below is the output. You can see that 3 data sources will be impacted by this change. You can also alert the data source owners Alice and Bob in advance about this so they can start working on a fix before something breaks on the Tableau dashboards.

Sample output of code
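Since the impacted rows already carry the owner name and email, you could go one step further and notify those owners automatically. Below is a rough sketch using Python’s standard smtplib; the SMTP host, port, and sender address are placeholders you would replace with whatever your organisation uses:

### Sketch: email the owners of the impacted data sources about an upcoming table change
import smtplib
from email.message import EmailMessage

def alert_owners(impacted_df, table_name, smtp_host="smtp.example.com", smtp_port=25):
    for _, row in impacted_df.iterrows():
        msg = EmailMessage()
        msg["Subject"] = f"Upcoming change to table '{table_name}'"
        msg["From"] = "data-team@example.com"  # placeholder sender address
        msg["To"] = row["owner_email"]
        msg.set_content(
            f"Hi {row['owner_name']},\n\n"
            f"The table '{table_name}' is changing soon and your data source "
            f"'{row['name']}' depends on it. Please review it before the change lands."
        )
        with smtplib.SMTP(smtp_host, smtp_port) as smtp:
            smtp.send_message(msg)

# Example usage with the filter from above
impacted = filter_rows_with_table(master_data, 'upstreamTables', 'customSQLTables', 'Sales')
alert_owners(impacted, 'Sales')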

You can check out the complete version of the code in my GitHub repository here.

This is just one of the potential use cases of the Tableau Metadata API. You can also extract the field names used in custom SQL queries and add them to the dataset to get a field-level impact analysis. You can also monitor stale data sources with the extractLastUpdateTime field to see if they have any issues, or to archive them if they are not used any more; a rough sketch of such a check follows below. And you can use the dashboards object to fetch information at a dashboard level.
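For example, a staleness check on the final master_data could look like the sketch below. It assumes extractLastUpdateTime comes back as an ISO timestamp (live connections may return null and are simply ignored), and the 30-day threshold is an arbitrary choice:

### Sketch: flag data sources whose extracts have not refreshed in the last 30 days
stale_check = master_data[['name', 'owner_name', 'extractLastUpdateTime']].copy()
stale_check['extractLastUpdateTime'] = pd.to_datetime(
    stale_check['extractLastUpdateTime'], errors='coerce', utc=True
)
cutoff = pd.Timestamp.now(tz='UTC') - pd.Timedelta(days=30)
stale = stale_check[stale_check['extractLastUpdateTime'] < cutoff]
print(stale.sort_values('extractLastUpdateTime'))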

Final Thoughts

If you have come this far, kudos. This is just one use case of automating Tableau data management. It’s time to reflect on your own work and think about which other tasks you could automate to make your life easier. I hope this mini-project served as an enjoyable learning experience in understanding the power of the Tableau Metadata API. If you liked reading this, you might also like another one of my blog posts about Tableau, on some of the challenges I faced when dealing with big .

Also do check out my previous blog where I explored building an interactive, database-powered app with Python, Streamlit, and SQLite.


Before you go…

Follow me so you don’t miss any new posts I write in future; you will find more of my articles on my . You can also connect with me on LinkedIn or Twitter!

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

SATORP halts processing activities at Jubail refinery

Saudi Aramco Total Refinery & Petrochemicals Co.—a joint venture of Saudi Aramco (62.5%) and TotalEnergies SE (37.5%)—has temporarily shuttered units at its 460,000 b/d full-conversion refinery complex at Jubail, on Saudi Arabia’s eastern coast, following disruptions resulting from the ongoing war in the Middle East. In an Apr. 10 update

Read More »

Intel secures Google cloud and AI infrastructure deal

“Scaling AI requires more than accelerators – it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand,” said Lip-Bu Tan, CEO  of Intel in a statement. Google does offer custom Armv9-based Axion processors as an alternative to x86 based instances

Read More »

Broadcom strikes chip deals with Google, Anthropic

Anthropic said this week that the AI startup’s annual revenue run rate has now crossed $30 billion, up from about $9 billion the previous year. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth,” said Krishna Rao, CFO of Anthropic, in a

Read More »

BW Energy granted 25-year extension of license offshore Gabon

BW Energy Gabon has received approval from the Ministry of Oil and Gas of the Gabonese Republic to extend the Dussafu Marin production license offshore Gabon, West Africa. The license period has been extended to 2053 from 2028, inclusive of three 5-year option periods from 2038 onwards. The prior contract was until 2038 inclusive of two 5-year option periods from 2028 onwards. The extra time “provides long-term visibility for production, investments, and reserve development” of the operator’s “core producing asset,” the company said in a release Apr. 7. Ongoing license projects include MaBoMo Phase 2, with planned first oil in second-half 2026, and the Bourdon development following its discovery last year. The timeline also “strengthens the foundation for future infrastructure‑led growth opportunities across the adjacent Niosi and Guduma licenses, both operated by BW Energy,” the company continued. The Dussafu Marin permit is a development and exploitation license with multiple discoveries and prospects lying within a proven oil and gas play fairway within Southern Gabon basin. To the northwest of the block is the Etame-Ebouri Trend, a collection of fields producing from the pre-salt Gamba and Dentale sandstones, and to the north are Lucina and M’Bya fields which produce from the syn-rift Lucina sandstones beneath the Gamba. Oil fields within the Dussafu Permit include Moubenga, Walt Whitman, Ruche, Ruche North East, Tortue, Hibiscus, and Hibiscus North. BW Energy Gabon is operator at Dussafu (73.50%) with partners Panoro Energy ASA (17.5%) and Gabon Oil Co. (9%). Dussafu.

Read More »

Santos plans development of North Slope’s Quokka Unit

Santos Ltd. has started development planning in the Quokka Unit on Alaska’s North Slope after further delineating the Nanushuk reservoir. The Quokka-1 appraisal well spudded on Jan. 1, 2026, about 6 six miles from the Mitquq-1 discovery well drilled in 2020. It was drilled to 4,787 ft TD and encountered a high-quality reservoir with about 143 ft of net oil pay in the Nanushuk formation, demonstrating an average porosity of 19%. Following a single stage fracture stimulation, the well achieved a flow rate of 2,190 bo/d. Reservoir sands correlated between the two discoveries, coupled with fluid analyses, confirm the presence of high‑quality, light‑gravity oil, supporting strong well performance and improved pricing relative to Pikka oil. Together with additional geological data, these results underpin the potential for a two‑drill‑site development with production capacity comparable to Pikka phase 1, the company said.  Rate and resource potential for the two-drill-site development is being evaluated. Resource estimation is ongoing and appraisal results will be evaluated as part of the FY26 contingent resource assessment. In FY25, Santos reported 2C contingent resources of 177 MMboe for the Quokka Unit. Based on these results, Santos has started development planning, including the initiation of key permitting activities. Santos is operator of the Quokka Unit (51%) with partner Repsol (49%).

Read More »

Fluor, Axens secure contracts for US grassroots refinery project

Fluor Corp. and Axens Group have been awarded key contracts for America First Refining’s (AFR) proposed grassroots refinery at the Port of Brownsville, Tex., advancing development of what would be the first new US refinery to be built in more than 50 years. Fluor will execute front-end engineering and design (FEED) for the project, while Axens will serve as technology licensor of core refining process technologies to be used at the site, the service providers said in separate Apr. 7 releases. The AFR refinery is designed to process more than 60 million bbl/year—or about 164,400 b/d—of US light shale crude into transportation fuels, including gasoline, diesel, and jet fuel. Contract details Without disclosing a specific value of its contract, Fluor said the scope of its FEED study will cover early-stage engineering and design required to define project execution, cost, and schedule based on a complex that will incorporate commercially proven technologies to improve efficiency and emissions performance while processing domestic shale crude. As technology licensor, Axens said it will deliver process technologies for key refining units at the site, including those for: Naphtha, diesel hydrotreating. Continuous catalytic reforming. Isomerization. Alongside supporting improved fuel-quality specifications, the unspecified technologies to be supplied for the refinery will also help to reduce overall energy consumption at the site. Axens—which confirmed its involvement since 2017 in working with AFR on early-stage development of the project—said this latest licensing agreement will also cover engineering support, equipment, catalysts, and services across the refinery’s process configuration. Project background, commercial framework Upon first announcing the project in March 2026, AFR said the proposed development came alongside an already signed 20-year offtake agreement with a global integrated oil company covering 1.2 billion bbl of US light shale crude, as well as capital investment to support construction. As part of the

Read More »

EIA: US crude inventories up 3.1 million bbl

US crude oil inventories for the week ended Apr. 3, excluding the Strategic Petroleum Reserve, increased by 3.1 million bbl from the previous week, according to data from the US Energy Information Administration (EIA). At 464.7 million bbl, US crude oil inventories are about 2% above the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories decreased by 1.6 million bbl from last week and are about 3% above the 5-year average for this time of year. Finished gasoline inventories increased while blending components inventories decreased last week. Distillate fuel inventories decreased by 3.1 million bbl last week and are about 5% below the 5-year average for this time of year. Propane-propylene inventories increased by 600,000 bbl from last week and are 71% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 16.3 million b/d for the week ended Apr. 3, which was 129,000 b/d less than the previous week’s average. Refineries operated at 92% of capacity. Gasoline production decreased, averaging 9.4 million b/d. Distillate fuel production increased, averaging 5.0 million b/d. US crude oil imports averaged 6.3 million b/d, down 130,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 6.6 million b/d, 9.1% more than the same 4-week period last year. Total motor gasoline imports averaged 571,000 b/d. Distillate fuel imports averaged 152,000 b/d.

Read More »

Oil prices plunge as Iran war tensions ease amid tentative Hormuz reopening

Crude oil prices plunged sharply on Apr. 7 after US President Donald Trump announced a conditional 2-week ceasefire agreement with Iran, contingent on reopening the Strait of Hormuz and restoring safe passage for energy shipments. Both Brent and WTI crude oil fell towards $95/bbl, marking their largest single-day decline since 2020. Under the agreement, Iran signaled willingness to halt attacks on shipping and allow transit through Hormuz while broader negotiations continue. The US also indicated it would assist in clearing a backlog of tankers and stabilizing maritime traffic. Benchmark crude prices initially surged above $110/bbl in early April amid fears of prolonged supply disruption after Iran effectively restricted traffic through the strait—a corridor responsible for roughly 20% of global oil flows. The blockade, triggered by escalating US-Iran hostilities, caused tanker traffic to collapse and stranded millions of barrels of crude and refined products in the region. Despite the price correction, analysts caution that supply disruptions and infrastructure damage will continue to constrain markets. The conflict has already impaired regional energy assets, including LNG infrastructure in Qatar, and forced producers across the Middle East to curtail output or delay exports. The US Energy Information Administration (EIA) warned that fuel prices may remain elevated for months even if flows normalize, citing logistical bottlenecks, depleted inventories, and continued geopolitical uncertainty. “In theory, the 10–13 million b/d of crude oil and product supply stranded behind the Strait should now be gradually released. Whether the pre-March status quo will be re-established depends entirely on whether the truce can be turned into a permanent peace during the negotiations in Pakistan,” said Tamas Varga, analyst, PVM Oil Associates. “What appears evident, at least for now, is that the current quarter, the April–June period, will be the tightest, as the scarcity of available oil, both crude and refined

Read More »

EIA: Brent crude to reach $115/bbl in second-quarter 2026

Global oil markets have entered a period of acute volatility, with prices expected to surge into second-quarter 2026 as war-driven supply disruptions in the Middle East constrain flows through the Strait of Hormuz, according to the US Energy Information Administration (EIA)’s April Short-Term Energy Outlook. The agency estimates that Brent crude averaged $103/bbl in March and will climb further to a quarterly peak of about $115/bbl in second-quarter 2026, reflecting a sharp tightening in global supply following widespread production shut-ins across key Gulf producers. The disruption stems from the effective closure of the Strait of Hormuz, a critical chokepoint that typically carries nearly 20% of global oil supply. The US-Iran war in the region has forced producers including Saudi Arabia, Iraq, Kuwait, and the UAE to curtail output significantly. EIA estimates that crude production shut-ins averaged 7.5 million b/d in March and will rise to a peak of 9.1 million b/d in April. In this outlook, EIA assumes the conflict does not persist past April and that traffic through the Strait of Hormuz gradually resumes. Under those assumptions, EIA expects production shut-ins will fall to 6.7 million b/d in May and return close to pre-conflict levels in late 2026. The scale of the outage has rapidly flipped the market from prior expectations of oversupply into a pronounced deficit, with global inventories drawing sharply during the second quarter. Despite an assumption that the conflict does not persist beyond April, the agency warns that supply chains will take months to normalize, keeping a geopolitical risk premium embedded in prices through late 2026. EIA forecasts the Brent crude oil price will fall below $90/bbl in fourth-quarter 2026 and average $76/bbl in 2027, about $23/bbl higher than in its February STEO forecast. This price forecast is highly dependent on EIA’s assumptions of both the

Read More »

OpenAI puts part of Stargate project on hold over runaway power costs

OpenAI has postponed plans to open one of the data centers central to its Stargate project. It announced its plan to open the data center in the UK with great fanfare last September, when it was regarded as a major boost for the country’s nascent AI industry, as well as proving a step up for OpenAI’s international credentials. At the time, Sam Altman, CEO of OpenAI, said, “The UK has been a longstanding pioneer of AI, and is now home to world-class researchers, millions of ChatGPT users, and a government that quickly recognized the potential of this technology.” All of that has been quietly forgotten. The plans for the data center in Northumberland, in the Northeast of England, have been put on hold, with the project ready to be revived when the conditions are ripe for major infrastructure investment, according to a report by the BBC.

Read More »

Neoclouds gain momentum in a supply-constrained world

And since they used the same hardware, both neoclouds and traditional cloud providers are subject to the same shortage problem. Component suppliers are reporting significant shortages due to demand for AI data centers and Synergy sees neoclouds also experiencing delays just like traditional cloud providers. “Demand is currently outstripping supply,” said Dinsmore. “It will take a while before that starts to come into more balance.” Among neoclouds, CoreWeave stands out as the most direct challenger to traditional hyperscale cloud providers. Meanwhile, OpenAI and Anthropic represent a distinct but increasingly important category, and that is platform-centric providers offering cloud-like access to foundational models and AI development environments. Synergy says that as demand for AI infrastructure accelerates, neoclouds are positioning themselves as focused alternatives to traditional hyperscale providers such as Amazon, Microsoft and Google.

Read More »

What is AI networking? How it adds intelligence to your infrastructure

The end goal is to make networks more reliable, efficient and performant. Enterprises are already seeing notable results when AI is applied to IT operations, including shorter deployment times, a decrease in trouble tickets, and faster time to resolution. With the help of AI, networks  will become more autonomous and self-healing (that is, able to address issues without the need for human intervention). In fact, Tier 1 and Tier 2 infrastructure is moving toward ‘no human in the loop,’ Nick Lippis, co-founder and co-chair of enterprise user community ONUG, recently told Network World. In time, humans will only need to step in for policy exceptions and high-risk decisions. “Layering in AI capabilities makes LAN management applications easier to use and more accessible across an organization,” Dell’Oro Group analyst Sian Morgan said. Gartner predicts that, by 2030, AI agents will drive most network activities, up from “minimal adoption” in 2025. The firm emphasizes that leaders who overlook the AI networking shift “risk higher MTTR [meantime to repair], rising costs, and growing security exposure.” The core components of AI networking It’s important to note that the use of AI and machine learning (ML) in network management is not new. AI for IT operations (AIOps), for instance, is a common practice that uses automation to improve broader IT operations. AI networking is specific to the network itself, covering domains including multi-cloud software, wired and wireless LAN, data center switching, SD-WAN and managed network services (MNS). The incorporation of generative AI, in particular, has brought AI networking to the fore, as enterprise leaders are rethinking every single aspect of their business, networking included.

Read More »

Aria Networks raises $125M and debuts its approach for AI-optimized networks

That embedded telemetry feeds adaptive tuning of Dynamic Load Balancing parameters, Data Center Quantized Congestion Notification (DCQCN) and failover logic without waiting for a threshold breach or a manual intervention. The platform architecture is layered. At the lowest levels, agents react in microseconds to link-level events such as transceiver flaps, rerouting leaf-spine traffic in milliseconds. At higher layers, agents make more strategic decisions about flow placement across the cluster. At the cloud layer, a large language model-based agent surfaces correlated insights to operators in natural language, allowing them to ask questions about specific jobs or alert conditions and receive context-aware responses. Karam argued that simply bolting an LLM onto an existing architecture does not deliver the same result. “If you ask it to do anything, it could hallucinate and bring down the network,” he said. “It doesn’t have any of the context or the data that’s required for this approach to be made safe.” Aria also exposes an MCP server, allowing external systems such as job schedulers and LLM routers to query network state directly and integrate it into their own decision-making. MFU and token efficiency as the target metrics Traditional networking is often evaluated in terms of bandwidth and latency. Aria is centering its platform around two metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU is defined as the ratio of achieved FLOPS per accelerator to the theoretical peak. In practice, Karam said, MFU for training workloads typically runs between 33% and 45%, and inference often comes in below 30%. “The network has a major impact on the MFU, and therefore the token efficiency, because the network touches every aspect, every other component in your cluster,” Karam said.

Read More »

New v2 UALink specification aims to catch up to NVLink

But given there are no products currently available using UALink 1.0, UALink 2.0 might be viewed as a premature launch Need to play catch up David Harold, senior analyst with Jon Peddie Research, was guarded in his reaction. “While 2.0 is a significant step forward from 1.0, we need to bear in mind that even 1.0 solutions aren’t shipping yet – they aren’t due until later this year. So, Nvidia is way ahead of the open alternatives on connectivity, indeed ahead of the proprietary or Ethernet based solutions too,” he said. What this means, he added, is that non-Nvidia alternatives are currently lagging in the market. “They need to play catch up on several fronts, not just networking. … I can’t think of a single shipping product that meaningfully has advantages over a Nvidia solution,” he said. “Ultimately UALink remains desirable since it will enable heterogeneous, multi-vendor environments but it’s quite a way behind NVLink today. ” There are plenty of signs that organizations will find it hard to break free of the Nvidia dominance, however. A couple of months ago, RISC-V pioneer SiFive signed a deal with Nvidia to incorporate Nvidia NVLink Fusion into its data center products, a departure for RISC companies. According to Harold, other companies could be joining it. “Custom ASIC company MediaTek is an NVLink partner, and they told me last week that they are planning to integrate it directly into next-generation custom silicon for AI applications,” he said. “This will enable a wider range of companies to use NVLink as their high-speed interconnect.” Other options And, Harold noted, Nvidia is already looking at other options. “Nvidia is now shifting to look at the copper limit for networking speed, with an interest in using optical connectivity instead,” said Harold.

Read More »

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny

Is the concern valid?

Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said. As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.” One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies, such as InfiniBand.

Does the Bright Computing precedent hold?

Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.” Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
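To make the LLM-as-a-judge idea above concrete, here is a minimal sketch; call_model, the model name, and the PASS/FAIL grading prompt are hypothetical placeholders rather than any specific provider's API.

### Minimal LLM-as-a-judge sketch; call_model() is a hypothetical stand-in
### for whichever model-provider SDK you actually use.
def call_model(model_name, prompt):
    raise NotImplementedError("Wire this up to your model provider's SDK")

def judge_answer(question, answer, judge_model="cheap-judge-model"):
    ### Ask a (cheaper) judge model to grade another model's answer.
    verdict = call_model(
        judge_model,
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply PASS if the answer is correct and grounded, otherwise FAIL.",
    )
    return verdict.strip().upper().startswith("PASS")

### With several cheap models available, a simple majority vote across
### multiple judge models is one way to make the grading more robust.
def majority_vote(question, answer, judges):
    votes = [judge_answer(question, answer, j) for j in judges]
    return sum(votes) > len(votes) / 2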

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Financial services

This page brings together essential resources to help financial institutions evaluate, adopt, and scale AI in regulated environments. Whether you’re exploring early use cases or

Read More »

Using skills

Department | Reusable process skill | Tool-based skill | Conventions/standards skill
Marketing | campaign-brief-builder: Create a skill that turns a rough marketing idea into a complete campaign brief, including the goal, target audience, key

Read More »