Practical SQL Puzzles That Will Level Up Your Skill

There are some SQL patterns that, once you know them, you start seeing everywhere. The solutions to the puzzles I will show you today are actually very simple SQL queries, but understanding the concepts behind them will surely unlock new solutions for the queries you write on a day-to-day basis.

These challenges are all based on real-world scenarios: over the past few months, I made a point of writing down every puzzle-like query I had to build. I also encourage you to attempt them before reading the solutions; challenging yourself first will improve your learning!

All queries to generate the datasets will be provided in PostgreSQL- and DuckDB-friendly syntax, so that you can easily copy and play with them. At the end, I will also provide a link to a GitHub repo containing all the code, as well as the answer to the bonus challenge I will leave for you!

I organized these puzzles in order of increasing difficulty, so, if you find the first ones too easy, at least take a look at the last one, which uses a technique that I truly believe you won’t have seen before.

Okay, let’s get started.

I love this puzzle because of how short and simple the final query is, even though it deals with many edge cases. The data for this challenge shows tickets moving between Kanban stages, and the objective is to find how long, on average, tickets stay in the Doing stage.

The data contains the ID of the ticket, the date the ticket was created, the date of the move, and the “from” and “to” stages of the move. The stages present are New, Doing, Review, and Done.

Some things you need to know (edge cases):

  • Tickets can move backwards, meaning tickets can go back to the Doing stage.
  • You should not include tickets that are still stuck in the Doing stage, as there is no way to know how long they will stay there.
  • Tickets are not always created in the New stage.
```SQL

CREATE TABLE ticket_moves (
    ticket_id INT NOT NULL,
    create_date DATE NOT NULL,
    move_date DATE NOT NULL,
    from_stage TEXT NOT NULL,
    to_stage TEXT NOT NULL
);

```
```SQL

INSERT INTO ticket_moves (ticket_id, create_date, move_date, from_stage, to_stage)
    VALUES
        -- Ticket 1: Created in "New", then moves to Doing, Review, Done.
        (1, '2024-09-01', '2024-09-03', 'New', 'Doing'),
        (1, '2024-09-01', '2024-09-07', 'Doing', 'Review'),
        (1, '2024-09-01', '2024-09-10', 'Review', 'Done'),
        -- Ticket 2: Created in "New", then moves: New → Doing → Review → Doing again → Review.
        (2, '2024-09-05', '2024-09-08', 'New', 'Doing'),
        (2, '2024-09-05', '2024-09-12', 'Doing', 'Review'),
        (2, '2024-09-05', '2024-09-15', 'Review', 'Doing'),
        (2, '2024-09-05', '2024-09-20', 'Doing', 'Review'),
        -- Ticket 3: Created in "New", then moves to Doing. (Edge case: no subsequent move from Doing.)
        (3, '2024-09-10', '2024-09-16', 'New', 'Doing'),
        -- Ticket 4: Created already in "Doing", then moves to Review.
        (4, '2024-09-15', '2024-09-22', 'Doing', 'Review');
```

A summary of the data:

  • Ticket 1: Created in the New stage, moves normally to Doing, then Review, and then Done.
  • Ticket 2: Created in New, then moves: New → Doing → Review → Doing again → Review.
  • Ticket 3: Created in New, moves to Doing, but it is still stuck there.
  • Ticket 4: Created in the Doing stage, moves to Review afterward.

It might be a good idea to stop for a bit and think about how you would deal with this. Can you find out how long a ticket stays in a single stage?

Honestly, this sounds intimidating at first, and it looks like it will be a nightmare to deal with all the edge cases. Let me show you the full solution to the problem, and then I will explain what is happening afterward.

```SQL

WITH stage_intervals AS (
    SELECT
        ticket_id,
        from_stage,
        move_date 
        - COALESCE(
            LAG(move_date) OVER (
                PARTITION BY ticket_id 
                ORDER BY move_date
            ), 
            create_date
        ) AS days_in_stage
    FROM
        ticket_moves
)
SELECT
    SUM(days_in_stage) / COUNT(DISTINCT ticket_id) AS avg_days_in_doing
FROM
    stage_intervals
WHERE
    from_stage = 'Doing';
```

The first CTE uses the LAG function to find the previous move of the ticket, which will be the time the ticket entered that stage. Calculating the duration is as simple as subtracting the previous date from the move date.

Also notice the COALESCE around the previous move date. If a ticket doesn’t have a previous move, the fallback is the ticket’s creation date. This takes care of tickets that are created directly in the Doing stage, as the time it took to leave the stage is still calculated properly.

This is the result of the first CTE, showing the time spent in each stage. Notice how Ticket 2 has two entries, as it visited the Doing stage on two separate occasions.
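
Worked out by hand from the sample data, the rows where from_stage = 'Doing' come out as follows (a quick sanity check; verify against your own run):

  • Ticket 1: 4 days (entered Doing on Sep 3, left on Sep 7)
  • Ticket 2: 4 days (Sep 8 → Sep 12) plus 5 days (Sep 15 → Sep 20)
  • Ticket 4: 7 days (created directly in Doing on Sep 15, left on Sep 22)
  • Ticket 3: never leaves Doing, so it has no row with from_stage = 'Doing' and is excluded automatically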

With this done, it’s just a matter of getting the average as the SUM of total days spent in Doing, divided by the distinct number of tickets that ever left the stage. Doing it this way, instead of simply using AVG, makes sure that the two rows for Ticket 2 get properly accounted for as a single ticket.
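
If the SUM divided by COUNT(DISTINCT ...) trick feels opaque, here is an equivalent way to write the final step that makes the per-ticket grouping explicit. This is only a sketch reusing the stage_intervals CTE from above, and the result matches up to rounding (in PostgreSQL, the SUM / COUNT version truncates to whole days, while AVG keeps the decimals):

```SQL
--
-- Previous CTE (stage_intervals)
--
doing_per_ticket AS (
    SELECT
        ticket_id,
        SUM(days_in_stage) AS days_in_doing
    FROM
        stage_intervals
    WHERE
        from_stage = 'Doing'
    GROUP BY
        ticket_id
)
SELECT
    AVG(days_in_doing) AS avg_days_in_doing
FROM
    doing_per_ticket;
```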

Not so bad, right?

The goal of this second challenge is to find the most recent contract sequence of every employee. A break in the sequence happens when two contracts have a gap of more than one day between them.

In this dataset, there are no contract overlaps, meaning that a contract for the same employee either has a gap or ends a day before the new one starts.

```SQL
CREATE TABLE contracts (
    contract_id integer PRIMARY KEY,
    employee_id integer NOT NULL,
    start_date date NOT NULL,
    end_date date NOT NULL
);

INSERT INTO contracts (contract_id, employee_id, start_date, end_date)
VALUES 
    -- Employee 1: Two continuous contracts
    (1, 1, '2024-01-01', '2024-03-31'),
    (2, 1, '2024-04-01', '2024-06-30'),
    -- Employee 2: One contract, then a gap of three days, then two contracts
    (3, 2, '2024-01-01', '2024-02-15'),
    (4, 2, '2024-02-19', '2024-04-30'),
    (5, 2, '2024-05-01', '2024-07-31'),
    -- Employee 3: One contract
    (6, 3, '2024-03-01', '2024-08-31');
```

As a summary of the data:

  • Employee 1: Has two continuous contracts.
  • Employee 2: One contract, then a gap of three days, then two contracts.
  • Employee 3: One contract.

The expected result, given the dataset, is that all contracts should be included except for the first contract of Employee 2, which is the only one that has a gap.

Before explaining the logic behind the solution, I would like you to think about what operation can be used to join the contracts that belong to the same sequence. Focus only on the second row of data: what information do you need to decide whether this contract is a break in the sequence or not?

I hope it’s clear that this is the perfect situation for window functions, again. They are incredibly useful for solving problems like this, and understanding when to use them helps a lot in finding clean solutions to problems.

First thing to do, then, is to get the end date of the previous contract for the same employee with the LAG function. Doing that, it’s simple to compare both dates and check if it was a break of sequence.

```SQL
WITH ordered_contracts AS (
    SELECT
        *,
        LAG(end_date) OVER (PARTITION BY employee_id ORDER BY start_date) AS previous_end_date
    FROM
        contracts
),
gapped_contracts AS (
    SELECT
        *,
        -- Deals with the case of the first contract, which won't have
        -- a previous end date. In this case, it's still the start of a new
        -- sequence.
        CASE WHEN previous_end_date IS NULL
            OR previous_end_date < start_date - INTERVAL '1 day' THEN
            1
        ELSE
            0
        END AS is_new_sequence
    FROM
        ordered_contracts
)
SELECT * FROM gapped_contracts ORDER BY employee_id ASC;
```

An intuitive way to continue the query is to number the sequences of each employee. For example, an employee who has no gaps will always be on their first sequence, while every gap starts a new one. Funnily enough, this is done with yet another window function.

```SQL
--
-- Previous CTEs
--
sequences AS (
    SELECT
        *,
        SUM(is_new_sequence) OVER (PARTITION BY employee_id ORDER BY start_date) AS sequence_id
    FROM
        gapped_contracts
)
SELECT * FROM sequences ORDER BY employee_id ASC;
```

Notice how Employee 2 starts sequence #2 after the first gap. To finish the query, I grouped the data by employee to get their most recent sequence number, and then did an inner join back with the sequences to keep only the contracts of that latest sequence.

```SQL
--
-- Previous CTEs
--
max_sequence AS (
    SELECT
        employee_id,
        MAX(sequence_id) AS max_sequence_id
    FROM
        sequences
    GROUP BY
        employee_id
),
latest_contract_sequence AS (
    SELECT
        c.contract_id,
        c.employee_id,
        c.start_date,
        c.end_date
    FROM
        sequences c
        JOIN max_sequence m ON c.sequence_id = m.max_sequence_id
            AND c.employee_id = m.employee_id
        ORDER BY
            c.employee_id,
            c.start_date
)
SELECT
    *
FROM
    latest_contract_sequence;
```

As expected, our final result is basically the starting data, just with the first contract of Employee 2 missing!
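
As a side note, the last two CTEs can also be collapsed into a single windowed filter. Here is a sketch of an equivalent ending, assuming the previous CTEs stay exactly as written:

```SQL
--
-- Previous CTEs (ordered_contracts, gapped_contracts, sequences)
--
SELECT
    contract_id,
    employee_id,
    start_date,
    end_date
FROM (
    SELECT
        *,
        MAX(sequence_id) OVER (PARTITION BY employee_id) AS max_sequence_id
    FROM
        sequences
) s
WHERE
    sequence_id = max_sequence_id
ORDER BY
    employee_id,
    start_date;
```

Both versions return the same contracts; the join-based one just spells out the intermediate aggregation more explicitly.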

Finally, the last puzzle — I’m glad you made it this far. 

For me, this is the most mind-blowing one, as when I first encountered this problem I thought of a completely different solution that would be a mess to implement in SQL.

For this puzzle, I’ve changed the context from what I had to deal with for my job, as I think it will make it easier to explain. 

Imagine you’re a data analyst at an event venue, and you’re analyzing the talks scheduled for an upcoming event. You want to find the time of day when the highest number of talks will be happening at the same time.

This is what you should know about the schedules:

  • Rooms are booked in increments of 30min, e.g. from 9h-10h30.
  • The data is clean, there are no overbookings of meeting rooms.
  • There can be back-to-back meetings in a single meeting room.

Meeting schedule visualized (this is the actual data). 

```SQL
CREATE TABLE meetings (
    room TEXT NOT NULL,
    start_time TIMESTAMP NOT NULL,
    end_time TIMESTAMP NOT NULL
);

INSERT INTO meetings (room, start_time, end_time) VALUES
    -- Room A meetings
    ('Room A', '2024-10-01 09:00', '2024-10-01 10:00'),
    ('Room A', '2024-10-01 10:00', '2024-10-01 11:00'),
    ('Room A', '2024-10-01 11:00', '2024-10-01 12:00'),
    -- Room B meetings
    ('Room B', '2024-10-01 09:30', '2024-10-01 11:30'),
    -- Room C meetings
    ('Room C', '2024-10-01 09:00', '2024-10-01 10:00'),
    ('Room C', '2024-10-01 11:30', '2024-10-01 12:00');
```

The way to solve this is with what is called a Sweep Line Algorithm, also known as an event-based solution. The second name helps explain what will be done: instead of dealing with intervals, which is what we have in the original data, we deal with events.

To do this, we need to transform every row into two separate events. The first event will be the Start of the meeting, and the second event will be the End of the meeting.

```SQL
WITH events AS (
  -- Create an event for the start of each meeting (+1)
  SELECT 
    start_time AS event_time, 
    1 AS delta
  FROM meetings
  UNION ALL
  -- Create an event for the end of each meeting (-1)
  SELECT 
    -- Small trick to work with the back-to-back meetings (explained later)
    end_time - INTERVAL '1 minute' AS event_time,
    -1 AS delta
  FROM meetings
)
SELECT * FROM events;
```

Take the time to understand what is happening here. To create two events from a single row of data, we’re simply unioning the dataset on itself; the first half uses the start time as the timestamp, and the second part uses the end time.

You might have already noticed the delta column and can see where this is going. When an event starts, we count it as +1; when it ends, we count it as -1. You may even be thinking of another window function to solve this, and you’re actually right!

But before that, let me explain the trick I used on the end dates. As I don’t want back-to-back meetings to count as two concurrent meetings, I’m subtracting a single minute from every end date. This way, if one meeting ends and another starts at 10h30, it won’t look like two meetings are happening concurrently at 10h30.

Okay, back to the query and yet another window function. This time, though, the function of choice is a rolling SUM.

```SQL
--
-- Previous CTEs
--
ordered_events AS (
  SELECT
    event_time,
    delta,
    SUM(delta) OVER (ORDER BY event_time, delta DESC) AS concurrent_meetings
  FROM events
)
SELECT * FROM ordered_events ORDER BY event_time DESC;
```

The rolling SUM over the delta column essentially walks through every record, keeping track of how many events are active at that time. For example, at 9 am sharp, it sees two events starting, so it marks the number of concurrent meetings as two!

When the third meeting starts, the count goes up to three. But when it gets to 9h59 (the shifted 10 am end), two meetings end, bringing the counter back to one. With this data, the only thing missing is to find when the highest value of concurrent meetings happens.

```SQL
--
-- Previous CTEs
--
max_events AS (
  -- Find the maximum concurrent meetings value
  SELECT 
    event_time, 
    concurrent_meetings,
    RANK() OVER (ORDER BY concurrent_meetings DESC) AS rnk
  FROM ordered_events
)
SELECT event_time, concurrent_meetings
FROM max_events
WHERE rnk = 1;
```

That’s it! The interval of 9h30–10h is the one with the largest number of concurrent meetings, which checks out with the schedule visualization above!

This solution looks incredibly simple in my opinion, and it works for many situations. Whenever you are dealing with intervals, ask yourself whether the query wouldn’t be easier if you thought about it from the perspective of events.
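
One small variation worth knowing: if subtracting a minute from the end times feels hacky, you can keep the raw timestamps and instead break ties inside the window’s ORDER BY, processing end events before start events at the same instant. A sketch against the same meetings table:

```SQL
WITH events AS (
  SELECT
    start_time AS event_time,
    1 AS delta
  FROM meetings
  UNION ALL
  -- Keep the raw end_time; the tie-break below handles back-to-back meetings.
  SELECT
    end_time AS event_time,
    -1 AS delta
  FROM meetings
),
ordered_events AS (
  SELECT
    event_time,
    delta,
    -- Ascending delta processes -1 (ends) before +1 (starts) at the same
    -- timestamp, so a meeting ending at 10h30 and another starting at 10h30
    -- are never counted as concurrent.
    SUM(delta) OVER (ORDER BY event_time, delta ASC) AS concurrent_meetings
  FROM events
)
SELECT * FROM ordered_events ORDER BY event_time;
```

The peak is still three concurrent talks at 9h30; only the timestamps where the counter drops move from the shifted :59 marks back to the real end times.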

But before you move on, and to really nail down this concept, I want to leave you with a bonus challenge, which is also a common application of the Sweep Line Algorithm. I hope you give it a try!

Bonus challenge

The context for this one is still the same as the last puzzle, but now, instead of trying to find the period with the most concurrent meetings, the objective is to find bad scheduling. It seems that there are overlaps in the meeting rooms, which need to be listed so they can be fixed ASAP.

How would you find out if the same meeting room has two or more meetings booked at the same time? Here are some tips on how to solve it:

  • It’s still the same algorithm.
  • This means you will still do the UNION, but it will look slightly different.
  • You should think from the perspective of each meeting room.

You can use this data for the challenge:

```SQL
CREATE TABLE meetings_overlap (
    room TEXT NOT NULL,
    start_time TIMESTAMP NOT NULL,
    end_time TIMESTAMP NOT NULL
);

INSERT INTO meetings_overlap (room, start_time, end_time) VALUES
    -- Room A meetings
    ('Room A', '2024-10-01 09:00', '2024-10-01 10:00'),
    ('Room A', '2024-10-01 10:00', '2024-10-01 11:00'),
    ('Room A', '2024-10-01 11:00', '2024-10-01 12:00'),
    -- Room B meetings
    ('Room B', '2024-10-01 09:30', '2024-10-01 11:30'),
    -- Room C meetings
    ('Room C', '2024-10-01 09:00', '2024-10-01 10:00'),
    -- Overlaps with previous meeting.
    ('Room C', '2024-10-01 09:30', '2024-10-01 12:00');
```

If you’re interested in the solution to this puzzle, as well as the rest of the queries, check this GitHub repo.

The first takeaway from this blog post is that window functions are overpowered. Ever since I got more comfortable with using them, I feel that my queries have gotten so much simpler and easier to read, and I hope the same happens to you.

If you’re interested in learning more about them, you would probably enjoy reading this other blog post I’ve written, where I go over how you can understand and use them effectively.

The second takeaway is that the patterns used in these challenges really do happen in many other places. You might need to find sequences of subscriptions, measure customer retention, or find overlapping tasks. There are many situations in which you will need window functions in a very similar fashion to what was done in the puzzles.

The third thing I want you to remember is the idea of working with events instead of dealing with intervals directly. Looking back, there are problems I solved a long time ago where this pattern would have made my life easier, and unfortunately, I didn’t know about it at the time.


I really do hope you enjoyed this post and gave the puzzles a shot yourself. And I’m sure that if you made it this far, you either learned something new about SQL or strengthened your knowledge of window functions!

Thank you so much for reading. If you have questions or just want to get in touch with me, don’t hesitate to contact me at mtrentz.com.

All images by the author unless stated otherwise.
