Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Bitcoin
Datacenter
Energy
Featured Articles

Rebuilding the data stack for AI
In partnership with Infosys Topaz

Artificial intelligence may be dominating boardroom agendas, but many enterprises are discovering that the biggest obstacle to meaningful adoption is the state of their data. While consumer-facing AI tools have dazzled users with speed and ease, enterprise leaders are finding that deploying AI at scale requires something far less glamorous but far more consequential: data infrastructure that is unified, governed, and fit for purpose.

That gap between AI ambition and enterprise readiness is becoming one of the defining challenges of this next phase of digital transformation. As Bavesh Patel, senior vice president of Databricks, puts it, "the quality of that AI and how effective that AI is, is really dependent on information in your organization." Yet in many companies, that information remains fragmented across legacy systems, siloed applications, and disconnected formats, making it nearly impossible for AI systems to generate trustworthy, context-rich outputs.

"Really, the big competitive differentiator for most organizations is their own data and then their third-party data that they can add to it," says Patel. For enterprise AI to deliver value, data must be consolidated into open formats, governed with precision, and made accessible across functions. Without that foundation, businesses risk "terrible AI," as Patel bluntly describes it. That means moving beyond siloed SaaS platforms and disconnected dashboards toward a unified, open data architecture capable of combining structured and unstructured data, preserving real-time context, and enforcing rigorous access controls. When the groundwork is laid correctly, organizations can move toward measurable outcomes, unlocking efficiencies, automating complex workflows, and even launching entirely new lines of business.
That value focus is critical, says Rajan Padmanabhan, unit technology officer at Infosys, especially as enterprises seek precision in the outputs driving business decisions. Rather than treating AI initiatives as isolated innovation projects, leading companies are tying AI deployment directly to business metrics, using governance frameworks to determine what delivers results and what should be abandoned quickly. “We see this big opportunity just with AI literacy with business users, where they’re very eager to understand how they should be thinking about AI,” adds Patel. “What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint?”
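Patel's prescription (consolidate data into open formats, govern it with precision, and make it accessible) can be made concrete in a few lines on a lakehouse platform. Below is a minimal sketch of that pattern on Databricks with Unity Catalog; the catalog, schema, table, path, and group names are hypothetical placeholders for illustration, not any customer's actual setup.

```python
# Minimal sketch: consolidate siloed source data into an open format (Delta)
# and put basic governance around it. Assumes a Databricks workspace with
# Unity Catalog enabled; all object names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# 1. Ingest fragmented source data (here, a CSV export from a siloed app).
orders = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/Volumes/main/sales/landing/orders.csv")
)

# 2. Persist it as a governed table in an open format.
orders.write.format("delta").mode("overwrite").saveAsTable("main.sales.orders_raw")

# 3. Document it and grant least-privilege access so AI and BI consumers
#    can discover and trust the dataset.
spark.sql(
    "COMMENT ON TABLE main.sales.orders_raw IS "
    "'Raw orders ingested daily from the ERP export'"
)
spark.sql("GRANT SELECT ON TABLE main.sales.orders_raw TO `analysts`")
```

The specifics will vary by platform; the point of the sketch is the sequence itself: open storage format first, then documentation and access control, before any model ever sees the data.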
The possibilities ahead are substantial. As AI agents evolve from copilots into autonomous operators capable of managing workflows and transactions, the organizations that win will be those that build the right foundation now. "What we are seeing as a new way of thinking is moving from a system of execution or a system of engagement to a system of action," notes Padmanabhan. "That is the new way we see the road ahead." The future of AI in the enterprise will be determined by whether businesses can turn fragmented information into a strategic asset capable of powering both smarter decisions and entirely new ways of operating.

This episode of Business Lab is produced in partnership with Infosys Topaz.

Full Transcript:

Megan Tatum: From MIT Technology Review, I'm Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. This episode is produced in partnership with Infosys Topaz. Now, recent advancements in AI may have unlocked some compelling new industrial applications, but a reliance on inadequate data models means that many enterprises are hitting a brick wall. AI, and agentic AI in particular, places a whole new set of demands on data. The technology requires greater access, context, and guardrails to operate effectively. Existing data models often fall short: they're too fragmented or siloed, and the data itself often lacks quality. To bridge the gap, they require an AI-ready upgrade. Two words for you: data, reconfigured.

My guests today are Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys.
Welcome, Bavesh and Rajan.

Rajan Padmanabhan: Thank you. Thanks for having us.

Bavesh Patel: Thanks for having us.

Megan: Fantastic. Thank you both so much for joining us today. Bavesh, if I could come to you first, when we talk about AI-ready data, what exactly do we mean? What new demands does AI place on data, and how does this impact the way it needs to be structured and used?

Bavesh: Yeah. Great question. Appreciate you hosting us today. I think that obviously the whole world is enamored with AI because of all of the power that we can all see as users. AI is now democratized across hundreds of millions of users. And when we think about enterprises and businesses using AI, the quality of that AI and how effective that AI is, is really dependent on information in your organization, and that's data. And what we found is that most enterprises, their data is kind of locked away in these different applications and different systems. And it's very difficult to get a good view of: what is all my data? How trustworthy is it? How recent and fresh is it? And all of that is being injected into the AI. Unless you have a proper understanding of your data, and the ability to ensure that it's data that's accurate and that can be used so that the AI can take advantage of it, you're actually going to end up having terrible AI. We see a lot of customers spend time on cleansing their data, organizing their data, making sure it's access controlled correctly, and that tends to be the fuel of good AI.

Megan: Yeah. It's such a foundational thing, isn't it? But it can be missed, I think, quite easily. Rajan, what difference can having AI-ready data really make for enterprises as they unlock that full potential of AI and its applications?

Rajan: First and foremost, thanks for having us. It's a pleasure. In continuation of what Bavesh talked about, data and AI are pretty synonymous. And consumer AI and enterprise AI, enterprise agentic AI in particular, are different, because first and foremost, the business needs to have the context. That context from your enterprise information, which is both structured and unstructured, user-generated content and all forms of data, is going to be very, very critical to really get the context right, and really ground any model that you pick. That's where platforms like Databricks really help, with the plethora of models, whether you want to build your own models or whether you want to ground the model based on your data. That is why getting the data ready for AI is going to be very, very critical.
The third critical part, and this actually will be one of the roadblocks for adoption of AI: that's why, if you look, AI adoption on the consumer side is skyrocketing, but on the enterprise side enterprises are struggling, primarily around the precision of the output. Because you are taking business decisions: you are taking a buy decision, you are taking a sell decision, or you are trying to recommend something, recommend content. It could be 20 different use cases. For that, the precision is going to be very critical. We are seeing with our customers, the successful ones, that a precision of more than 92% is not an aspiration; that is a must-have. And if you want that, AI-ready data is going to be the underpinning of it.

Megan: And I suppose, if we've outlined there how critical this is, where should enterprises start then? At a practical level, perhaps, what are the foundations when it comes to building an AI-ready data model?
Bavesh: Yeah. And I think Rajan hit the nail on the head. I mean, enterprises are grappling with a different set of problems than consumer AI. The first thing is that you've got to get a handle on your data. As I mentioned, a lot of the data is locked in. Ensure that you have the ability to put your data in a place where you can understand the holistic view of as much of your data as possible. That kind of starts with putting your data in open formats. A lot of the valuable data today in an organization is locked away in some proprietary SaaS app or some system, and all the datasets aren't connected together to form that context. The first step is to really do an analysis of what your data estate is. What are the critical pieces of data that need to be put into a place where you can start to understand them and how they're connected to one another? Thinking about how you set up your data catalog, thinking about how the relationships between the data assets work, putting data governance around it: that seems to be the first step. And if you think about how ChatGPT was built, it took all the data on the internet and then aggregated it, synthesized it, and then built these transformer models, while enterprises don't really have a handle on all their data within the organization. That's the first foundation that you really want to think about. The second thing is that you don't want to just go ad hoc and do random AI projects. You really need to be thinking about business value. A lot of our customers are looking at AI much more strategically, in that they want to be able to get projects on the board with wins and then generate business value. Building an AI value roadmap, which is connected to how well your data is organized, those two things seem to be foundational to how you launch AI successfully in your organization.

Megan: That value piece is so important, isn't it? And as I understand it, Infosys and Databricks have worked closely together to guide organizations through this transformation. I wondered, can you share some examples of the impact you've seen at enterprises you've worked with, Rajan? What difference has it made to the ways in which they can integrate more sophisticated AI and agentic AI applications?

Rajan: Well, that's a very, very good question. What both Databricks and Infosys have done is come up with a kind of framework first. First and foremost, it all needs to start with the value. At one of the largest food products companies, where we collaborated together, we applied this framework. The framework consists of six different things. First and foremost, and very critical, is value management, which Bavesh touched upon. We have worked together to come up with a 3M measurement framework, what we call adaptability, business value, and responsibility. You can't just go and do a garage project. It has to be measurable. It should be responsible. That is going to be very critical. And we helped this client prioritize which investments would give them the most value for money. The second critical part here is that most enterprises today are not AI-born companies. Most of them were born in the analog days or the digital days. There are companies which are applying AI for modernization, because a lot of your historical information is actually helping you to build that long-term context.
And that is where we have worked closely with some of the native tools of Databricks, like Lakebridge or the AI assistants that are there, and then created composable services on top of them to help clients unlock the value by bringing it into Databricks. And then the second part where we help the client is exactly to that point: the readying of data. Now you have brought in the data; now you have to bring together the structured, the unstructured, the analytical, and all these aspects.
And that is where, in the third layer, we work closely with Databricks, leveraging all the great capabilities within Databricks, be it Unity Catalog, be it the open formats, or be it the gateways and other aspects. We were able to make the data available for this client. What has really helped our client, the third part, is Agent Bricks, which is one of the differentiators. It gives you the flavor for the enterprise. That is where we have worked closely, and we built some of our industry-specific agents, be it CPG, be it energy, be it FS. And for this client, what we have done is taken some of those CPG-specific use cases, whether in the HR space, the procurement space, or the marketing space. And this has really helped our client build a business capability surrounding this and unlock eight to nine use cases, which we call products, agentic AI products, which can really drive more value for them, solving real business problems. And this kind of comprehensive set of frameworks, plus a suite of services, plus our Infosys solution assets, as well as unlocking the value from Databricks, has really helped these clients. And we see similar patterns across a lot of these successful engagements, where we were able to continuously drive the value by applying this framework.

Megan: Right. Sounds like it made a real material difference. Rajan mentioned a few of the tools in the Databricks catalog there, Bavesh. I know you've recently worked to launch an operational database for AI agents and apps. I wonder, how does a platform like that help organizations in this journey? What makes it different from some of the other platforms out there right now?

Bavesh: Databricks has come to market with a new offering called Lakebase, which is really an OLTP database where you can build your AI apps. And if you think about it, there are really two main types of data in an enterprise. There's all the historical data, which is all the things that have happened, and that's really what your analytics is based on. You have an OLAP system where you have put all your historical data, and Databricks has come to market with what we call the Lakehouse, which is essentially a data warehouse with all of your data that is not operational in nature. It's historical data. And I think that Lakehouse concept is really pushing forward with AI, because a lot of our customers have thousands of users within their business who need to get data. And what they've done is they've actually gone down the BI route, which is really building a dashboard or a report.
Most organizations have had thousands of these dashboards and reports proliferate across the organization, and then they need to be customized. It just takes a long time for users inside of the business to actually get access to the data. AI is now making that a lot easier from just the analytics perspective, where we can now democratize access to the data, which has really been the holy grail for most data teams. They really want to get out of the way and just give the right data to the right people inside of the business with the right access. With a product like Genie at Databricks, you can just use English, or whatever your language is, to ask questions of the data. And it'll give you back data that answers your questions in context. It'll give you not just what ChatGPT will give you, which is information about a topic that's on the internet, but it will actually tell you, "Well, why did my sales numbers not reflect what I expected in the month of April?" It'll give you some root cause analysis based on your enterprise data. Genie is going to be one of these things that's really important, because it's going to truly democratize data inside of the business. That's kind of this OLAP world, which is what the Lakehouse is. More recently, we've come to market with what we call the Lakebase, which is the OLTP world. What we're finding is that agents are now being deployed in these organizations, and those agents need a place to keep all of their orchestration, all of the context of what's happening in that particular workflow. On the one hand, you've got users just asking questions. On the other hand, the next chapter is going to be around automating an entire business process. If you're taking a function like generating a campaign in marketing, there are a lot of tools you use and a lot of steps involved. An agent can come in and really automate a lot of that. But on the back end of that agent, you're going to need to stand up a real-time database to keep track of all the things that the agent is doing. That's what Databricks has brought to market, which is this OLTP Lakebase solution. The innovation that we have brought to market is that it's a modern kind of Postgres database where we have separated the compute and storage, very much like what we did with the Lakehouse for the data warehouse. But with Lakebase, the data is one copy inside of your cloud storage, and then the compute is separated and it's serverless. You can do things like branching, and you can start up the OLTP database really quickly. What we found is that agents are actually starting these Lakebases, because they can very quickly go start one up, keep it running, put it down when they need to, make a copy of it. When agents are doing this, they need the velocity and they need a cost-effective solution. And the beauty of all this is, when you take the OLTP, which is all around the Lakebase and the real time, and you take the OLAP, you now have one system for all your data. You don't have to copy the data around, you don't have to manage all the permissions, and you can set the context against it. We see these AI apps being really the future of how businesses run, where they're going to take away all of the bottlenecks of humans having to do repetitive work, and automate these using LLMs and all these new technologies. We want to be the default for powering all that, because we believe that our Lakebase technology is going to be faster, cheaper, and more secure as an AI database.
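To ground Patel's description, here is a rough sketch of the underlying pattern: an agent keeping its orchestration state in a Postgres-compatible OLTP table. This is an illustration against generic Postgres via psycopg2, not Databricks' actual Lakebase API; the endpoint, credentials, and table design are invented for the example.

```python
# Sketch of the "agent state in an OLTP store" pattern described above.
# Works against any Postgres-compatible database; connection details and
# the schema below are hypothetical placeholders.
import json
import uuid

import psycopg2

conn = psycopg2.connect(
    host="lakebase.example.com",  # hypothetical endpoint
    dbname="agents",
    user="agent_runtime",
    password="...",
)

with conn, conn.cursor() as cur:
    # Each agent run keeps its orchestration context in one row, updated as
    # the workflow progresses (steps taken, tool outputs, pending approvals).
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS agent_runs (
            run_id     UUID PRIMARY KEY,
            workflow   TEXT NOT NULL,
            state      JSONB NOT NULL,
            updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
        )
        """
    )
    run_id = str(uuid.uuid4())
    cur.execute(
        "INSERT INTO agent_runs (run_id, workflow, state) VALUES (%s, %s, %s)",
        (run_id, "marketing_campaign", json.dumps({"step": "draft_copy"})),
    )
    # Later steps update the same row rather than copying state elsewhere.
    cur.execute(
        "UPDATE agent_runs SET state = %s, updated_at = now() WHERE run_id = %s",
        (json.dumps({"step": "await_approval"}), run_id),
    )

conn.close()
```

The branching and serverless start-up Patel mentions would be platform behaviors underneath; the SQL the agent issues stays ordinary Postgres, which is what would make the pattern portable.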
Megan: Sounds like a real game changer. And we've touched on this a couple of times already, this idea of value. We know that gauging the commercial value of investments into AI is really high on the priority list right now for senior leaders. How important is this value measurement piece when it comes to creating AI-ready data systems, Rajan? How can organizations ensure they're monitoring what is delivering and what isn't?

Rajan: This is of paramount importance, and most of the successful AI or agentic AI implementations really require this value measurement. I'll just extend the client example that I talked about, the large global food products company, to answer this question, and I want to use a metaphor. When the initial digital world came, we had a lot of analytics, primarily around defining performance management KPIs, and fact-based decisioning and other things evolved over a period of time. Typically, a lot of these metrics are going to be very critical for measuring how a function, how a business, is doing. On a similar line for value measurement, if I take the same example of the client, what is very critical for an organization is to map the outcome that you are expecting. In this case: how do I optimize my spend on direct and indirect purchases? By applying AI, I would like to identify the areas where I can optimize the spend. That means one of the critical measures that you have is your indirect expense classification: what spend has been classified, and how much you are able to reduce by bringing this in. Establishing these measures and metrics is going to be very, very critical. And once you establish these base metrics and the measurement, the beauty of it, to just extend what Bavesh was talking about, is that the capabilities Databricks gives you, like metric views, features, tools, and other things, would actually help you translate those AI telemetries and business telemetries coming from your applications into measurable metrics in terms of an outcome, which you can actually measure using the Genie room for value management measurement. Then there are two calls that you can take. For the products that we built for this client, either on the procurement side or on the marketing research side, if you find there is value, whether because they identify that they're able to optimize spend or because of the reach it is able to deliver, you can accelerate that use case and further fine-tune that product to expand it. Or, if you find it is not really driving the value, or you're not able to see the value that it is going to deliver, you can very well take a fast-failure approach: rather than trying to make it work, you can take a call to pivot to something else. There are three aspects here.
What we see from our experience, not only with this client but across some of our other clients in industrial manufacturing, FS, or energy, is that by setting up this metrics-driven valuation method upfront, and then leveraging the capabilities to transform these telemetries and signals into a measurement, what we call an AI compass room, you really give the business stakeholders, whether from a marketing office, a supply chain office, or a CFO office, a way to say, "Hey, this is what it is intended to do, this is the current measurement, and this is where it's failing," and that can help them to pivot. And this will actually drive and democratize AI and agentic AI across the enterprise, and that really drives the value. This is going to be one of the critical parts that enterprises need to do. And that is where the six-part framework that I talked about comes in: applying that framework, like the value office, applying the ready-for-AI layer, applying the transformation fabric. Then there is the governance, which is going to be the underpinning of this. Then running your operations not based on SLAs but based on experience-level agreements and business metrics, so you can continually measure. Bringing all these six layers together is going to be very critical. That's when we see organizations be very successful, and some of our proven examples do exactly the same. This is going to be very critical for organizations from a measurement standpoint.

Megan: Lots of tangible ways there that you can actually gauge value. And you touched on governance; the impact of AI on governance is another huge talking point among senior leaders, and interactions with data are a core part of that. To what extent is having the right governance and security protocols an integral part of having AI-ready data? Bavesh, what scenarios do these systems need to handle? What does that mean for data models?

Bavesh: This is becoming kind of the prerequisite to deploying a successful AI project. I think MIT produced a report that said 95% of these new AI projects fail to actually generate business value. A big reason for that is you can go and prototype and stand up and vibe code a pilot, but when you're actually moving a workload into production, you realize that governance becomes so critical. So what do we really mean by governance? I think the first thing is getting your data in order, like I said, in open formats. Most companies realize now that the way they engage with their customers, the way they develop a drug, the way they approve a person for a credit limit increase: all of that enterprise information is actually their competitive advantage. Because you can go and use a frontier model like ChatGPT or Claude that everybody has access to. Really, the big competitive differentiator for most organizations is their own data and then the third-party data that they can add to it. Getting your data into an open format so you can understand your data, and understanding your data, is where governance comes in. Because when you think about governance, you really want to be able to find the data. If I'm an end user, or if I'm building an AI product, I want to know what data's available to me. Can I trust the data? How fresh is the data? Is it coming from my analytics world, or do I need a real-time system like an OLTP system? I need to find the data.
I also need to make sure that access is controlled in a way that doesn't cause any huge headaches for my organization. This becomes critical. If I have a whole bunch of PDFs that have purchase orders in them, who actually has access to all that data? In a clinical trial, for example, in healthcare, you really want to ensure that people across trials don't have visibility into patient data. Maybe the model that was built was running across trials. Who has access to all the data? Who has access to only parts of the data? You really have to think about this. We also look at the semantics of the data. Rajan brought this up right at the beginning, which is: what is the context? How do we think about the metrics and all the things that the business users know in their heads? We need to start codifying that somewhere. We have a product at Databricks called Unity Catalog where you can do the discovery, the access, and the business semantics. You also want to share the data. And in the world of agents, what we see is something called agent sprawl. In very short order, it will be just like how SaaS applications became very prevalent within any organization because they really solved a business problem. You go to a line of business and you say, "I need to be able to do credit underwriting," or "I am doing a prior authorization use case," or pick thousands of use cases: there's a SaaS app for that. Much like that, there's going to be this world in which agents come into play, and most organizations are going to have lots of agents running all the time. But the reality of it is: how did that agent perform? What was the feedback loop from the user? What was the cost of running that workload, and is it going up dramatically? And if you don't have a way to monitor, to understand, and to trace all the questions and answers and responses at scale, you're going to find yourself in a big pickle. This actually could hurt your organization, because users will be very confused about what to do. When you look at governance, most organizations are recognizing that they have to start to understand what it is that they have put in place from a systems, process, and tooling standpoint; focus on one use case, build out the governance for that, but build it in a way that's going to allow you to become repeatable. AI is not going to be about one use case or two use cases. It's whoever builds the flywheel of building many use cases in a safe, secure, cost-effective way that's driving a business outcome. If you don't apply governance, it's going to be very hard. At Databricks, we made a big bet on governance four or five years ago. This is one of the main reasons our company's growing right now: because we can ensure that there's quality data going into all of your AI. You can use things like Genie, you can use things like Agent Bricks, and you can build apps using Lakebase. None of that really works without governance. It's really what we call the brain inside of Databricks. Most of our customers spend a lot of time inside of Unity Catalog. And the great news is that AI is helping governance get set up much more quickly. We have a customer that, three years ago, was trying to get all of the data assets across all their domains, from the customer, from the loyalty app, from the e-commerce engine. They had to go and map out all these data assets. AI is now doing a lot of that work for them. The human in the loop is just checking things. We've made this much easier with AI.
We always think about AI as a business use case and an outcome, which I think is going to be where the biggest value is. But at Databricks, we're using AI inside of our platform to make it much easier to operate and to make it much easier to provide all the right things for your business. This is a super critical part of how we plan to innovate as AI comes to fruition in the market.

Megan: And Rajan, Bavesh touched on this a little bit there, but does the integration of agentic AI add another layer of complexity here too? What new considerations around governance does that raise?

Rajan: That's a very, very valid question. I would like to use a metaphor to explain. We are getting into the world of self-driving cars, robotaxis, and other things. While that takes us to the autonomous world, there are still rules that you need to adhere to when you are driving on a road. The reason I'm bringing up this metaphor is that what is actually required is adhering to the rules, and the different topographies and different conditions, depending upon where you are driving, are going to be very, very critical. The complexity that agents are going to add is basically how you operate within those constraints. For example, as a UTO, I can do 10 things, but say I cannot approve a discount of more than 70%, or I cannot give something as a bonus to someone because that is the CFO's call; an agent should be aware of that. That is one aspect: applying the constraints and making sure that the agents are adhering to them. The second set of complexity it adds is around tool access. As a business, in today's world, when you define a process, certain processes need a certain set of tools to actionize them. There are certain entitlements: only people entitled to do certain things, based on their identity, based on the need or the situational need, and you need to govern that. The third is information sharing. While MCP and other aspects are great, UCP and other aspects are great, one critical thing is what you need to share and what you don't need to share. And those are the critical considerations. The last part is learning and relearning. Sometimes when you learn good things, you should keep something. Sometimes it is better for you to completely remove it and reevaluate in a newer way, relearn it in a newer way. These are all the critical things that are required. On a similar line for agents, it is going to be paramount, because when you are operating agents for an enterprise, you need to know, learn, and adhere to certain compliance-related rules, business-related constraints, and then the entitlement identity, and then sharing; whatever applies to a physical human will also start applying to an agent. That is where this is going to be very critical. This requires a new set of operating systems. That doesn't mean you now need to go out and get a whole new thing. That is where I'm just interpreting how Bavesh touched upon Unity Catalog. The best part, which we see some of our clients implementing, is extending Unity Catalog and its capabilities: now you can catalog the tools, catalog the MCP servers, as well as catalog these agents, and then govern those agents based on the constraints, ground them based on the constraints. It's going to be very, very critical. Doing it not later, but starting it as part of your strategy, and enforcing this as one of the critical dimensions when you measure the value, is also going to be very critical for an organization.
It is like making sure that you are not only building the autonomous car, but also making sure that the car drives per the rules of the road, not going rogue.

Megan: Lots to think about there. Fascinating stuff. Thank you. Just to close, with a quick look ahead: we all know the pace of development in AI and agentic AI is so rapid. For those organizations that can prioritize AI-ready data now, what are the most compelling use cases for the technology that you can see coming to the fore in the next few years, Bavesh?

Bavesh: I think the excitement level is at its peak. We've seen so much investment in AI. I think the reason why there's a lot of excitement is because you can look at the early adopters and you can see massive amounts of gains that these organizations are seeing. There are really three categories. The companies that I think are doing well, a lot of them started out with just copilots and things that are just giving people quick answers. Think about it as making an individual productive. That is the first phase. And the ROI on that has been somewhat questionable. With something like Genie, it becomes a lot more effective, because it's actually on your data, and your data is contextualized in your organization. I think that's one area where we're going to see a lot of innovation. We'll see most organizations just start to get the right information to the right person at the right time. And that has been a dream for a lot of organizations. The second one is around automating entire business processes. We see functions within marketing, like I described earlier, or whether you're going through a process of rebates for a company. There's a whole bunch of steps involved where you have to go into three different apps and export data from Excel and put it over here. There are thousands of people doing very laborious, monotonous, repeatable work. These agents are really going to deliver an immense amount of productivity for the business process, and it's just going to make things faster. Processes that took weeks are now going to take days. Processes that took days are going to take hours and minutes. One trend we've seen is that the AI world is so dynamic. In a world where you've got lots of different players, you want to think about first principles: what are the foundations? You want to think about owning your data, making sure you have a handle on your structured and unstructured data. You want to put governance on that. But the other thing you want to make sure you don't do is lock yourself in. Today, if you think about it, Gemini is really good with multimodal. Anytime you have pictures or videos or things like that, Gemini is just super good. Whereas if you're writing code, Claude is really good. If you're doing certain types of questions around introspection, ChatGPT is really good. What you really want is an open data platform where you can build your own AI on multiple clouds, which is what we built at Databricks. I think that'll help with the second piece, which is that you can pick and choose, because when you build these agents, you don't have to be locked into just one. You should be picking the best quality and the best security and the best ROI and cost for a particular workload. One workload may use multiple of these models, and they might even be specific industry models. You need a system and a platform that can really handle this complexity.
I think the third category is business reimagination. A lot of people talk about this, where, yes, you're going to go and take the data and make it available and give everybody access to it. You're going to make existing processes much more efficient. But the third thing is that there are going to be brand new things that come out of it. We have a very large customer that's a bank, and they have built a product that they didn't have a year ago. Essentially, it's machine learning and LLMs helping treasury departments forecast what their balances are going to be, because they have more data at their fingertips. Historically, it took a long time for the data to get to the bankers. They were not able to really predict what a balance would be for a treasury department. Think about this: a big enterprise company has now built a brand new data and AI solution that they're monetizing, and it's generated hundreds of millions of dollars in the first six months. We're seeing brand new lines of business open up, and that is going to be really exciting, because that's where a lot of the transformation is going to happen. There's going to be productivity. There's going to be automation at the business process level. Then there are going to be these big new things that we didn't even imagine that people are going to come up with. We are actually seeing the early signals of this in every industry. We see retailers getting data at the hourly and the minute level so that they can integrate much more closely with their supply chains. We're seeing much more targeted customer 360-degree use cases where, as retailers or as consumers, we get annoyed by ads, but now it's so contextualized, and you have so much information about what really matters to your target customer, that you're giving them value-added information, and that's engaging them more. There's a whole bunch of innovation happening with agentic commerce and things like concierge and virtualized shopping. You look at any industry, and there are definitely new ways of doing things. This is what's really exciting about AI, but you really have to not get too far ahead without thinking about the foundational things. You mentioned this earlier: an open data platform, making sure you have governance set up correctly, making sure you think about your historical analytical data and your application data that's going to be real time. Having a good foundation to build on is going to allow you to scale, move more quickly, and compete in this new world. We're very excited about what we're seeing with our customers and what they're building. And honestly, that's the best part about being in my role at Databricks: our teams really go to customers and say, "What are the outcomes you're driving?" The early signals have been super positive. We're seeing that for companies that get serious about all the foundational elements and are really methodical about building outcome-based AI solutions, that 5% of projects that are successful, those are wildly successful. That's why we're growing as a company, because once you get a good project under your belt, that gets visibility with executives. The last thing is that, historically, a lot of tech has been in the IT department. You get the business designing how they want to go to market, how they're going to compete, and what products and services they want to offer. IT was the enabler, and in many cases became the cost center and was relegated to rationalizing the portfolio of spend and tools.
But now we're seeing the business take the lead with AI, where they want to understand, they want to know: "Hey, what can I be doing now that was not possible before?" We see this big opportunity just with AI literacy for business users, where they're very eager to understand how they should be thinking about AI. What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint? We're spending a lot of time with executives helping them along this journey. We definitely see a lot of amazing opportunities ahead.

Megan: Yeah. So much innovation going on. And finally, how about yourself, Rajan? What on the horizon is exciting you the most?

Rajan: I think Bavesh covered quite a bit, but the way I'm seeing it, today we are predominantly talking about the labor shift. That means unlocking the potential of humans, or shifting the current way of working to a new way of working; it's predominantly an efficiency game. I think that is what we are seeing now, and the majority of the successful use cases are around the labor shift. But what is pretty promising is the second kind of shift: the business shift. What we are seeing as a new way of thinking, the new thing that is coming up, is moving from a system of execution or a system of engagement to a system of action. That is the new way we see the road ahead. That is where some of the points that I touched upon come in: the business wants to have access to it, but how does it really make a real difference? One classic example, which we have implemented for one of our customers, primarily in the manufacturing space, is around the lifecycle of creating a product and then publishing the content around the product in line with their different B2B marketplaces. There, you are not just talking about recommending and creating; you are actually able to reimagine the process, which used to involve five different departments and now can be done much faster, while at the same time giving you veracity in the decisioning you are able to do and in how you are able to actionize it. That is the second thing we are seeing. The third part, I think, is the way commerce is evolving. Beyond agentic commerce, what we are seeing is agent-to-agent commerce, agent-to-human commerce, agent-to-agent payments, agent-to-human payments, and then content monetization. These are the new set of business opportunities: building new agentic business products. It could be for fintechs, it could be on the consumer side, or it could be on the industrial technology side. These are going to be what I'm calling the economy shift, on top of the labor shift and the business shift, because that is going to bring a new set of systems of action, moving from the systems of execution, or the typical SaaS application with bolt-on agentic features, the so-called agentic application. That is going to be a major transformation, and we are underway. But on the technology side, what is very critical to underpin this is that in today's world you have data, analytical data, operational data, and then there is intelligence; there are different facets of it. I think both this analytical core and operational core are going to really come together into one.
That's why we are so gung-ho about the releases of Lakebase and other things, because that is the way the future is going to go. When enterprises are really thinking about being ready for AI from a technology standpoint, they should really think: how do you create this unified core for the newer world? The second part is that people have to reimagine integration. Today, if I take SAP as an example, you have hundreds of edge applications and business applications that need to integrate with one another. Typically, we create a sprawl of these integrations. As one technology use case, people can ask, "How do I create a domain-based service mesh on top of this unified core, and how do I make it more agentic-integration ready?" That is one of the technology use cases that we are advising clients on. I think now, with a lot of the new areas that are coming, like SAP BDC with Databricks and this zero-copy integration, that makes them rethink the way they need to integrate, the way they need to do things. The third part, from a technology investment standpoint, is: don't just think about now. Just as you plan the people, the FTEs, for your organization, agents are going to be your new FTEs. That means one of the new technology paradigms is that you will end up creating these co-intellects within your organization. That means you need to invest in what we call this agentic grid, where it becomes like a unified agentic fabric in which all the agents can really collaborate and integrate, building on top of the same unified operational and analytical core, with unified agentic integration on top of it, which is going to create a new set of experiences, agentic experiences rather than the traditional or conversational experiences. Then the new collaboration methods are going to be some of the critical aspects that people have to really think about from a technology standpoint. To start with, I would say you start looking at it from a data standpoint: building that unified core, building that unified integration, and building that collaboration layer for both sharing and collaborating with intelligence, as well as the agentic collaboration, all governed under a single umbrella. That is going to be the one critical use case which no one will feel bad about, and they are going to get really a 100x return on their investments out of it.

Megan: Certainly no shortage of exciting developments on the horizon. Thank you both so much for that conversation. That was Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys, whom I spoke with from Brighton, England.

That's it for this episode of Business Lab. I'm your host, Megan Tatum. I'm a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts, and if you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

Three reasons why DeepSeek's new model matters

On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that handles large amounts of text more efficiently. While the model remains open source, its performance matches leading closed-source rivals from Anthropic, OpenAI, and Google. It is also DeepSeek's first release optimized for Huawei's Ascend chips—a key test of China's dependence on Nvidia. Here are three ways V4 could shake up AI.
—Caiwei Chen

The rise of world models

AI systems have already gained impressive mastery over the digital world, but the physical world remains humanity's domain. As it turns out, building an AI that composes novels or codes apps is far easier than developing one that folds laundry or navigates city streets. To bridge this gap, many researchers believe you need something called a world model.
Proponents like Stanford professor Fei-Fei Li and AMI Labs founder Yann LeCun argue these models can overcome the well-known limitations of LLMs—and realize AI's promise for robotics. Find out why they've brought world models to the forefront of the field.

—Grace Huckins

World models are on our list of the 10 Things That Matter in AI Right Now, our essential guide to what's really worth your attention in the field. Subscribers can watch an exclusive roundtable unveiling the technologies and trends on the list, with analysis from MIT Technology Review's AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 China has blocked Meta's $2 billion acquisition of AI startup Manus
Regulators cited national security grounds. (WSJ $)
+ Beijing called the deal a "conspiratorial" attempt to hollow out its tech base. (FT $)
+ The country is tightening its grip on AI firms that try to leave. (TechCrunch)
+ The decision escalates China's AI rivalry with the US. (Bloomberg $)
+ But there will be no winners in their competition. (MIT Technology Review)

2 Google is investing up to $40 billion in Anthropic
In a deal valuing the AI firm at $350 billion. (CNBC)
+ The funding will support the firm's growing computing needs. (TechCrunch)
+ Anthropic and OpenAI are fighting for compute capacity. (Axios)

3 President Trump just fired the entire National Science Board
The NSF has played a crucial role in developing technology. (The Verge)
+ The move heightens fears over political interference in US science. (Nature)

4 Conspiracy theories about the Washington shooting are proliferating online
Over 300,000 posts appeared on X using the keyword "staged." (NYT $)
+ The theories are also swirling on Bluesky and Instagram. (Wired)

5 The AI compute crunch is starting to hit the broader economy
It's affecting jobs, gadgets, and electricity prices. (404 Media)
+ The AI compute explosion is the tech story of our time. (MIT Technology Review)

6 Elon Musk says a new banking tool brings X close to a "super app"
He's pledged to launch the tool this month. (Bloomberg)

7 AI optimism is surging across Asia while US sentiment cools
The divide could shape where adoption happens fastest. (Rest of World)

8 Apple is tying its new CEO's ascent to its first foldable iPhone
It wants to build the buzz around John Ternus. (Gizmodo)

9 Twelve firms are developing the Golden Dome's space-based interceptors
They've won contracts worth up to $3.2 billion. (Ars Technica)

10 NASA has shared promising results from Artemis II
The spacecraft and rocket fared well. (Engadget)

Quote of the day
"Getting out the truth and establishing facts and reliable information takes time. But our audiences really don't have that kind of patience."

—Amanda Crawford, associate professor at the University of Connecticut, tells the NYT why conspiracy theories are gaining traction online.

One More Thing

Welcome to Kenya's Great Carbon Valley: a bold new gamble to fight climate change

Kenya's Great Rift Valley is home to five geothermal power stations, which harness clouds of steam to generate about a quarter of the country's electricity. But some of the energy escapes into the atmosphere, while even more remains underground for lack of demand. That's what brought Octavia Carbon here. Last year, the startup began harnessing some of that excess energy to remove CO2 from the air. The company says the method is efficient, affordable, and—crucially—scalable. But the project also faces fierce opposition.
Announcing our partnership with the Republic of Korea
Bringing frontier AI models to Korea's scientific community

Korea's Ministry of Science and ICT (MSIT) has recently launched the K-Moonshot Missions, an initiative aimed at unlocking step-change improvements in research productivity and addressing national grand challenges.

Helping make this vision a reality, Google will establish an AI Campus in the Republic of Korea — an AI-focused facility within its Seoul offices.

The AI Campus will be a hub for Korean academia and research institutions to collaborate with Google's world-leading AI experts to accelerate scientific breakthroughs through research and access to our most advanced AI for Science models, programs and events. We will begin by exploring collaborations with research-oriented institutions including Seoul National University (SNU), Korea Advanced Institute of Science and Technology (KAIST) and the Ministry's three AI Bio Innovation Hubs, leveraging our models in fields such as life sciences, energy, weather and climate, for example:

AlphaEvolve – a Gemini-powered coding agent for designing and optimising advanced algorithms. This has shown beneficial impact across many areas in computing and math, and we are seeing similar examples emerge in drug discovery and energy.

AlphaGenome – an AI model to help scientists better understand how mutations in human DNA sequences impact a wide range of gene functions, speeding up research on genome biology and helping to improve disease understanding.

AlphaFold – already used by more than 85,000 researchers in Korea, we will explore accelerating AI-enabled predictions for proteins, DNA and RNA.

AI co-scientist – a multi-agent AI system that acts as a virtual scientific collaborator to help researchers brainstorm and verify hypotheses. This is showing promising benefits in a range of biomedical applications, and we look forward to collaborating through joint research exploration and technical advisory to support the Ministry's AI Scientist Project on ways to best integrate the system.

WeatherNext – we will explore collaborations to support Korea's energy and sustainability goals in predicting and analyzing the impacts of extreme weather events and optimizing renewable energy on grids.

Cultivating AI talent and partnering on safety

Realizing the full potential of AI requires investing in people and building responsibly. To support the next generation of Korean AI talent, we are opening doors to forge connections with Google DeepMind, including exploring internship opportunities for Korean students. This builds on Google's broader commitment to the region, including the recent milestone of providing 50,000 AI Essentials scholarships to help job seekers gain foundational skills.

Finally, following our Frontier AI Safety Commitments made at the AI Seoul Summit, we will collaborate with the Korean AI Safety Institute (AISI) on research and best practices.

Building on the AlphaGo legacy

As we look back on the legacy of AlphaGo, we are incredibly excited for what lies ahead. We look forward to collaborating with the government as they invest in important local AI infrastructure, such as a new National AI for Science Center (NAIS), due to open in May.

By combining Google DeepMind's frontier AI models with the brilliant scientific minds in Korea, we believe we can unlock scientific discoveries that will benefit society for generations to come.
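As one small illustration of how researchers already consume these models programmatically, the sketch below fetches a prediction from the public AlphaFold Protein Structure Database. The endpoint path and response fields reflect the public AlphaFold DB API as best understood and should be treated as assumptions; the UniProt accession is just an example.

```python
# Sketch: download an AlphaFold structure prediction by UniProt accession.
# The endpoint and response field names are assumptions based on the public
# AlphaFold DB API; verify against current documentation before relying on it.
import requests

accession = "P69905"  # example accession: human hemoglobin subunit alpha

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30
)
resp.raise_for_status()

entry = resp.json()[0]  # the API returns a list of prediction entries
pdb_url = entry["pdbUrl"]  # assumed field name for the predicted-structure file

# Save the predicted structure locally for downstream tooling.
with open(f"{accession}.pdb", "wb") as f:
    f.write(requests.get(pdb_url, timeout=60).content)

print(f"Saved AlphaFold prediction for {accession}")
```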

Data Center World 2026: Innovation Spotlight
Belden + OptiCool: Modular Cooling for the AI Middle Market

At Data Center World 2026, company representatives from Belden and OptiCool described a joint push into integrated rack-level infrastructure—pairing connectivity, power, and modular cooling into a single deployable system aimed squarely at enterprise and mid-market colocation providers.

The partnership reflects a shift already underway inside Belden itself. Long known as a manufacturer of wire, cable, and connectivity products, the company said it has spent the last several years evolving into a solutions provider—leveraging a broader portfolio that spans industrial networking, automation, and control systems. That repositioning is now extending into AI infrastructure.

From Components to Fully Integrated Systems

Rather than selling discrete products into bid cycles, Belden is now packaging racks, PDUs, cable management, and cooling into a unified offering—delivered as a manufacturer-backed system rather than a third-party integration. "We can bring a full solution to the table now," a company representative said, emphasizing that the company is "standing behind the solution as a manufacturer, not as a system integrator." The cooling layer comes via OptiCool, whose rear-door heat exchanger (RDHx) technology is designed to scale alongside uncertain AI workloads.

Two-Phase Rear Door Cooling at Rack Scale

OptiCool's approach centers on two-phase cooling applied at the rear door, combining the non-invasive characteristics of RDHx with the efficiency gains typically associated with direct-to-chip liquid cooling. According to company representatives, the system:

Supports up to 120 kW per rack (with 60 kW demonstrated on the show floor)
Delivers up to 10x cooling capacity compared to traditional approaches
Operates at roughly one-third the energy consumption of comparable single-phase systems

Instead of injecting cold air, the system extracts heat using refrigerant as the heat sink, reducing demand on CRAC units and broader facility cooling infrastructure.

Designing for Uncertainty: Modular, Swappable Capacity

The defining feature—and

The Trillion-Dollar AIDC Boom Gets Real: Omdia Maps the Path From Megaclusters to Microgrids
The AI data center buildout is getting bigger, denser, and more electrically complex than even many bullish observers expected. That was the core message from Omdia’s Data Center World analyst summit, where Senior Director Vlad Galabov and Practice Lead Shen Wang laid out a view of the market that has grown more expansive in just the past year. What had been a large-scale infrastructure story is now, in Omdia’s telling, something closer to a full-stack industrial transition: hyperscalers are still leading, but enterprises, second-tier cloud providers, and new AI use cases are beginning to add demand on top of demand.

Omdia’s updated forecast reflects that shift. Galabov said the firm has now raised its 2030 projection for data center investment beyond the $1.6 trillion figure it showed a year ago, arguing that surging AI usage, expanding buyer classes, and the emergence of new power infrastructure categories have all forced a rethink. “One of the reasons why we raised it is that people keep using more AI,” Galabov said. “And that just means more money, because we need to buy more GPUs to run the AI.” That is the simple version. The more consequential one is that AI is no longer behaving like a contained technology cycle. It is spilling outward into adjacent infrastructure markets, including batteries, gas-fired onsite generation, and high-voltage DC power architectures that until recently sat well outside the mainstream data center conversation.

A Market Moving Faster Than the Forecasts

Galabov opened by revisiting the predictions Omdia made last year for 2030. On several fronts, he said, the market is already validating them faster than expected:

- AI applications are becoming commonplace.
- AI has become the dominant driver of data center investment.
- Self-generation is no longer a fringe strategy.

Even some of the rack-scale architecture concepts that once looked

AI’s Execution Era: Aligned and Netrality on Power, Speed, and the New Data Center Reality
At Data Center World 2026, the industry didn’t need convincing that something fundamental has shifted. “This feels different,” said Bill Kleyman as he opened a keynote fireside with Phill Lawson-Shanks and Amber Caramella. “In the past 24 months, we’ve seen more evolution… than in the two decades before.” What followed was less a forecast than a field report from the front lines of the AI infrastructure buildout—where demand is immediate, power is decisive, and execution is everything.

A Different Kind of Growth Cycle

For Caramella, the shift starts with scale—and speed. “What feels fundamentally different is just the sheer pace and breadth of the demand combined with a real shift in architecture,” she said. Vacancy rates have collapsed even as capacity expands. AI workloads are not just additive—they are redefining absorption curves across the market. But the deeper change is behavioral. “Over 75% of people are using AI in their day-to-day business… and now the conversation is shifting to agentic AI,” Caramella noted. That shift—from tools to delegated workflows—points to a second wave of infrastructure demand that has not yet fully materialized. Lawson-Shanks framed the transformation in more structural terms. The industry, he said, has always followed a predictable chain: workload → software → hardware → facility → location. That chain has broken. “We had a very predictable industry… prior to Covid. And Covid changed everything,” he said, describing how hyperscale demand compressed deployment cycles overnight. What followed was a surge that utilities—and supply chains—were not prepared to meet.

From Capacity to Constraint: Power Becomes Strategy

If AI has a gating factor, it is no longer compute. It is power. “Before it used to be an operational convenience,” Caramella said. “Now it’s a strategic advantage—or constraint if you don’t have it.” That shift is reshaping executive decision-making. Power is no

Rebuilding the data stack for AI
In partnership with Infosys Topaz Artificial intelligence may be dominating boardroom agendas, but many enterprises are discovering that the biggest obstacle to meaningful adoption is the state of their data. While consumer-facing AI tools have dazzled users with speed and ease, enterprise leaders are discovering that deploying AI at scale requires something far less glamorous but far more consequential: data infrastructure that is unified, governed, and fit for purpose. That gap between AI ambition and enterprise readiness is becoming one of the defining challenges of this next phase of digital transformation. As Bavesh Patel, senior vice president of Databricks, puts it, “the quality of that AI and how effective that AI is, is really dependent on information in your organization.” Yet in many companies, that information remains fragmented across legacy systems, siloed applications, and disconnected formats, making it nearly impossible for AI systems to generate trustworthy, context-rich outputs. “Really, the big competitive differentiator for most organizations is their own data and then their third-party data that they can add to it,” says Patel. For enterprise AI to deliver value, data must be consolidated into open formats, governed with precision, and made accessible across functions. Without that foundation, businesses risk “terrible AI,” as Patel bluntly describes it. That means moving beyond siloed SaaS platforms and disconnected dashboards toward a unified, open data architecture capable of combining structured and unstructured data, preserving real-time context, and enforcing rigorous access controls. When the groundwork is laid correctly, organizations can move toward measurable outcomes, unlocking efficiencies, automating complex workflows, and even launching entirely new lines of business.
That value focus is critical, says Rajan Padmanabhan, unit technology officer at Infosys, especially as enterprises seek precision in the outputs driving business decisions. Rather than treating AI initiatives as isolated innovation projects, leading companies are tying AI deployment directly to business metrics, using governance frameworks to determine what delivers results and what should be abandoned quickly. “We see this big opportunity just with AI literacy with business users, where they’re very eager to understand how they should be thinking about AI,” adds Patel. “What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint?”
The possibilities ahead are substantial. As AI agents evolve from copilots into autonomous operators capable of managing workflows and transactions, the organizations that win will be those that build the right foundation now. “What we are seeing as a new way of thinking is moving from a system of execution or a system of engagement to a system of action,” notes Padmanabhan. “That is the new way we see the road ahead.” The future of AI in the enterprise will be determined by whether businesses can turn fragmented information into a strategic asset capable of powering both smarter decisions and entirely new ways of operating. This episode of Business Lab is produced in partnership with Infosys Topaz. Full Transcript: Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. This episode is produced in partnership with Infosys Topaz. Now, recent advancements in AI may have unlocked some compelling new industrial applications, but a reliance on inadequate data models means that many enterprises are hitting a brick wall. AI, and agentic AI in particular, places a whole new set of demands on data. The technology requires greater access, context, and guardrails to operate effectively. Existing data models often fall short. They’re too fragmented or siloed. Data itself often lacks quality. To bridge the gap, they require an AI-ready upgrade. Two words for you: data reconfigured. My guests today are Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys.
Welcome, Bavesh and Rajan. Rajan Padmanabhan: Thank you. Thanks for having us. Bavesh Patel: Thanks for having us. Megan: Fantastic. Thank you both so much for joining us today. Bavesh, if I could come to you first, when we talk about AI-ready data, what exactly do we mean? What new demands does AI place on data, and how does this impact the way it needs to be structured and used? Bavesh: Yeah. Great question. Appreciate you hosting us today. I think that obviously the whole world is enamored with AI because of all of the power that we can all see as users. AI is now democratized across hundreds of millions of users. And when we think about enterprises and businesses using AI, the quality of that AI and how effective that AI is, is really dependent on information in your organization, and that’s data. And what we found is that most enterprises have their data kind of locked away in these different applications and different systems. And it’s very difficult to get a good view of, what is all my data? How trustworthy is it? How recent and fresh is it? And all of that is being injected into the AI. Unless you have a proper understanding of your data, and the ability to ensure that it’s accurate and usable so that the AI can take advantage of it, you’re actually going to end up having terrible AI. We see a lot of customers spend time on cleansing their data, organizing their data, making sure it’s access controlled correctly, and that tends to be the fuel of good AI. Megan: Yeah. It’s such a foundational thing, isn’t it? But it can be missed, I think, quite easily. Rajan, what difference can having AI-ready data really make for enterprises as they unlock that full potential of AI and its applications? Rajan: First and foremost, thanks for having us. It’s a pleasure. In continuation of what Bavesh talked about: see, data and AI are pretty synonymous. And similarly, consumer AI and enterprise AI and enterprise agentic AI are different, because first and foremost, the business needs to have the context. That context comes from your enterprise information, which is not only structured but both structured and unstructured, user-generated content, and all forms of data, and it is going to be very, very critical to really get the context right for any model that you pick. That’s where platforms like Databricks really help, with a plethora of models, whether you want to build your own models or whether you want to ground a model on your data. That is where getting the data ready for AI is going to be very, very critical.
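To make the cleansing and freshness step Patel describes concrete, here is a minimal sketch, assuming a pandas DataFrame with a timestamp column; the column names and the staleness threshold are illustrative assumptions, not anything Databricks or Infosys prescribes.

```python
# A hedged sketch of pre-AI trust checks: profile a dataset for missingness,
# duplicates, and freshness before an AI system consumes it.
from datetime import datetime, timedelta
import pandas as pd

def ai_readiness_report(df: pd.DataFrame, timestamp_col: str,
                        max_staleness: timedelta = timedelta(days=1)) -> dict:
    """Return simple trust signals for a dataset feeding an AI system."""
    latest = pd.to_datetime(df[timestamp_col]).max()
    return {
        "rows": len(df),
        "null_fraction": float(df.isna().mean().mean()),   # overall missingness
        "duplicate_rows": int(df.duplicated().sum()),
        "latest_record": latest,
        # Assumes naive UTC timestamps; adjust for time zones as needed.
        "is_fresh": (datetime.utcnow() - latest) <= max_staleness,
    }

# Example: a tiny orders table with one fully duplicated row.
orders = pd.DataFrame({
    "order_id": [1, 1],
    "updated_at": ["2026-01-01T00:00:00", "2026-01-01T00:00:00"],
})
print(ai_readiness_report(orders, "updated_at"))
```

Checks like these are deliberately simple; the point is that they run before the data is handed to any model, not after the outputs look wrong.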
The third critical part, and this actually will be one of the roadblocks for adoption of AI, is the precision of the output. That’s why, if you look, AI adoption on the consumer side is skyrocketing, but on the enterprise side enterprises are struggling, primarily around the precision of their output, because you are taking business decisions: you are taking a buy decision, you are taking a sell decision, or you are trying to recommend something, recommend content. It could be 20 different use cases. For that, the precision is going to be very critical. We are seeing with our customers, the successful customers, that a precision of more than 92% is not an aspiration; it is a must-have. If you have that, then AI-ready data is going to be the enabler right now for that. Megan: And I suppose if we’ve outlined there how critical this is, where should enterprises start then? At a practical level, perhaps, what are the foundations when it comes to building an AI-ready data model?
Bavesh: Yeah. And I think Rajan hit the nail on the head. I mean, enterprises are grappling with a different set of problems than consumer AI. The first thing is that you’ve got to get a handle on your data. As I mentioned, a lot of the data is locked in. Ensuring that you have the ability to put your data in a place where you can understand the holistic view of as much of your data as possible. That kind of starts with putting your data in open formats. A lot of the valuable data today in an organization is locked away in some proprietary SaaS app or some system, and all the datasets aren’t connected together to form that context. The first step is to really do an analysis of what is your data estate? What are the critical pieces of data that need to be put into a place where you can start to understand them and how they’re connected to one another? Thinking about how you set up your data catalog, thinking about how the relationships between the data assets work, putting data governance around it, that seems to be the first step. And if you think about how ChatGPT was built, it took all the data on the internet and then aggregated it, synthesized it, and then built these transformer models, while enterprises don’t really have a handle on all their data within the organization. That’s the first foundation that you really want to think about. The second thing is that you don’t want to just go ad hoc and do random AI projects. You really need to be thinking about business value. A lot of our customers are looking at AI much more strategically in that they want to be able to get projects on the board with wins and then generate business value. Building an AI value roadmap, which is connected to how well your data is organized, those two things seem to be foundational to how you launch AI successfully in your organization. Megan: That value piece is so important, isn’t it? And as I understand it, Infosys and Databricks have worked closely together to guide organizations through this transformation. I wondered, can you share some examples of the impact you’ve seen at enterprises you’ve worked with, Rajan? What difference has it made to the ways in which they can integrate more sophisticated AI and agentic AI applications? Rajan: Well, that’s a very, very good question. What both Databricks and Infosys have done is come up with a kind of framework first. First and foremost, it all needs to start with the value. At one of the largest food products companies, where we collaborated together, what we have done is apply this framework. The framework consists of six different things. First and foremost, very critical, is the value management, which Bavesh touched upon. We have worked together to come up with a 3M measurement framework: what we call adaptability, business value, and responsibility. You can’t just go and do a garage project. It has to be measurable, it should be responsible, and it should follow all those things. That is going to be very critical. And we helped this client prioritize what will give them the most value for the investments they are making. The second critical part here is that most enterprises today are not AI-born companies. Some of them were born in the analog days; some of them were born in the digital days. There are companies which are applying AI for modernization, because a lot of their historical information is actually helping them to build that long-term context.
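The foundations Patel sketches (open formats, a catalog entry, governance) can be illustrated in code. Below is a minimal sketch, assuming a Unity-Catalog-style lakehouse accessed through PySpark; the catalog, schema, table, and landing-path names are all hypothetical.

```python
# A minimal sketch of "open formats plus governance" on a lakehouse platform.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ai-ready-foundations").getOrCreate()

# 1. Consolidate source data into an open format (Delta) rather than leaving
#    it locked inside a proprietary application.
orders = spark.read.json("/landing/erp/orders/")  # hypothetical landing path
orders.write.format("delta").mode("overwrite").saveAsTable("main.sales.orders")

# 2. Record business semantics so both people and AI agents can find and
#    trust the dataset.
spark.sql(
    "COMMENT ON TABLE main.sales.orders IS "
    "'Confirmed customer orders from the ERP feed, refreshed hourly'"
)

# 3. Govern access up front: analysts may read, and nothing more.
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")
```

The order matters: the grant and the comment are only meaningful because the table now lives in one governed namespace instead of a proprietary silo.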
And that is where we have worked closely with some of the native tools of Databricks, like Lakebridge or the AI assistants that are there, and then created composable services on top of them to help clients unlock the value by bringing it into Databricks. And then the second part where we helped the client is exactly that point: the readying of data. Once you have brought in the data, you have to bring together the structured, the unstructured, the analytical, and all these aspects.
And that is where, in the third layer, we closely work with Databricks, leveraging all the great capabilities within Databricks, be it Unity Catalog, be it the open formats, or be it the gateways and other aspects. We were able to make the data available for this client. What has really helped our client, the third part, is Agent Bricks, which is one of the differentiators. It gives you the flavor for the enterprise. That is where we have closely worked, and we built some of our industry-specific agents, be it CPG, be it energy, be it FS. And for this client, what we have done is take some of those CPG-specific use cases, whether in the HR space or the procurement space or the marketing space. And this has really helped our client build a business capability surrounding this and unlock eight to nine use cases, which we call agentic AI products, which can really drive more value for them, solving the real business problems. And this kind of comprehensive set of frameworks, plus a suite of services, plus our solution assets, Infosys solution assets, as well as unlocking the value from Databricks, has really helped these clients. And we see similar patterns across a lot of these successful engagements, where we were able to continuously drive the value by applying this framework. Megan: Right. Sounds like it made a real material difference. Rajan mentioned a few of the tools in the Databricks catalog there, Bavesh. I know you’ve recently worked to launch an operational database for AI agents and apps. I wonder, how does a platform like that help organizations in this journey? What makes it different from some of the other platforms out there right now? Bavesh: Databricks has come to market with a new offering called Lakebase, which is really an OLTP database where you can build your AI apps. And if you think about it, there are really two main types of data in an enterprise. There’s all the historical data, which is all the things that have happened, and that’s really what your analytics is based on. You have an OLAP system where you have put all your historical data, and Databricks has come to market with what we call the Lakehouse, which is essentially a data warehouse with all of your data that is not operational in nature. It’s historical data. And I think that Lakehouse concept is really pushing forward with AI, because a lot of our customers have thousands of users within their business and they need to get data. And what they’ve done is they’ve actually gone down the BI route, which is really building a dashboard or a report.
Most organizations have had thousands of these dashboards and reports proliferate across the organization, and then they need to be customized. It just takes a long time for users inside of the business to actually get access to the data. AI now is really making that a lot easier from just the analytics perspective, where we can now democratize access to the data, which has really been the holy grail for most data teams. They really want to get out of the way and just give the right data to the right people inside of the business with the right access. With a product like Genie at Databricks, you can just use English, or whatever your language is, to ask questions of the data. And it’ll give you back data that answers your questions in context. It’ll give you not just what ChatGPT will give you, which is information about a topic that’s on the internet, but it will actually tell you, “Well, why did my sales numbers not reflect what I expected in the month of April?” It’ll give you some root cause analysis based on your enterprise data. Genie is going to be one of these things that’s really important, where it’s going to truly democratize data inside of the business. That’s the OLAP world, which is what the Lakehouse is. More recently, we’ve come to market with what we call the Lakebase, which is the OLTP world. What we’re finding is that agents are now being deployed in these organizations, and those agents need a place to keep all of their orchestration, all of the context of what’s happening in that particular workflow. On the one hand, you’ve got users just asking questions. On the other hand, the next chapter is going to be around automating an entire business process. If you’re taking a function like generating a campaign in marketing, right? There are a lot of tools you use and a lot of steps you follow. An agent can come in and really automate a lot of that. But on the back end of that agent, you’re going to need to stand up a real-time database to keep track of all the things that the agent is doing. That’s what Databricks has brought to market, which is this OLTP Lakebase solution. The innovation that we have brought to market is that it’s a modern kind of Postgres database where we have separated the compute and storage, very much like what we did with the Lakehouse and the data warehouse. But in Lakebase, there is one copy of the data inside of your cloud storage, and then the compute is separated and it’s serverless. You can do things like branching, and you can start up the OLTP database really quickly. What we found is that agents are actually starting these Lakebases because they can very quickly go start one up, keep it running, put it down when they need to, make a copy of it. Agents are doing this because they need the velocity, and they need a cost-effective solution. And the beauty of all this is when you take the OLTP, which is all around the Lakebase and the real time, and you take the OLAP, you now have one system for all your data. You don’t have to copy the data around, you don’t have to manage all the permissions, you can set the context against it. We see these AI apps being really the future of how businesses run, where they’re going to take away all of the bottlenecks of humans having to do repetitive work, and automate these using LLMs and all these new technologies. We want to be the default for powering all that, because we believe that our Lakebase technology is going to be faster, cheaper, and more secure as an AI database.
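Because Lakebase is described as Postgres-compatible, an agent’s orchestration state can be persisted with ordinary Postgres tooling. The sketch below uses the psycopg driver; the connection string, table, and workflow values are hypothetical stand-ins, not a documented Lakebase API.

```python
# A hedged sketch of an agent persisting its working context in a
# Postgres-compatible OLTP store, as described for Lakebase above.
import psycopg
from psycopg.types.json import Jsonb

DSN = "postgresql://agent:secret@db.example.com:5432/agents"  # hypothetical

with psycopg.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS agent_runs (
            run_id  TEXT PRIMARY KEY,
            step    INT NOT NULL,
            context JSONB NOT NULL,          -- state the agent needs next step
            updated TIMESTAMPTZ DEFAULT now()
        )
    """)
    # Upsert the agent's working context after each step of the workflow.
    cur.execute(
        """
        INSERT INTO agent_runs (run_id, step, context)
        VALUES (%s, %s, %s)
        ON CONFLICT (run_id) DO UPDATE
        SET step = EXCLUDED.step, context = EXCLUDED.context, updated = now()
        """,
        ("campaign-042", 3, Jsonb({"status": "drafting", "assets_ready": 12})),
    )
```

The upsert pattern is what makes short-lived, branchable databases practical for agents: each step overwrites the run’s state rather than accumulating copies.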
Megan: Sounds like a real game changer. And we’ve touched on this a couple of times already, I mean, this idea of value. We know that gauging the commercial value of investments into AI is really high on the priority list right now for senior leaders. How important is this value measurement piece when it comes to creating AI-ready data systems, Rajan? How can organizations ensure they’re monitoring what is delivering and what isn’t? Rajan: This is of paramount importance, and most of the successful AI implementations or agentic AI implementations really require this value measurement. I’ll just extend the client example that I talked about, the large global food products company, to explain this question. I just want to create a metaphor. When the initial digital world came, we had a lot of these analytics, primarily around defining those performance management KPIs; fact-based decisioning and other things were evolving over a period of time. Typically, a lot of these metrics are going to be very critical for measuring how a function, how a business, is doing. On a similar line, for the value measurement, if I take the same example of the client, what is very critical for an organization is actually to map the outcome that you are expecting. In this case: how do I optimize my spend on direct and indirect purchases? By applying AI, I would like to identify the areas where I can optimize the spend. That means one of the critical measures that you have is, what is your indirect expense classification, what spend has been classified, and how much are you able to reduce by bringing this in. Establishing these measures and metrics is going to be very, very critical. And once you establish these base metrics and the measurement, the beauty of it is that some of these metrics, to just extend what Bavesh was talking about, the capabilities that Databricks gives you, like metric views, features, tools, and other things, would actually help you to translate those AI telemetries, the business telemetries coming from your applications, into measurable metrics in terms of an outcome, which you can actually measure using a Genie space for value management measurement. Then there are two actions you can take. For the use cases, the products that, as I said, we built for this client, either on the procurement side or on the marketing research side: if you find there is value, either because they identify that they are able to optimize, or because of the reach it is able to deliver, you can accelerate that use case and further fine-tune that product to expand it. Or, if you find it is not really driving the value, or you are not able to see the value that it is going to deliver, you can very well apply a fast-failure method: rather than trying to make it work, you can understand it and then take a call to pivot to something else. There are three aspects here.
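A minimal sketch of that accelerate-or-fast-fail loop, turning agent telemetry into one business KPI. The event schema and thresholds are illustrative assumptions, with the 92% bar borrowed from the precision target cited earlier in the conversation.

```python
# A hedged sketch of metrics-first value measurement: compute the share of
# indirect-spend line items an agent classified correctly, then apply an
# accelerate / iterate / fast-fail rule to the use case.

def spend_classification_rate(events: list[dict]) -> float:
    """Fraction of spend-classification events the agent got right."""
    scored = [e for e in events if e["kind"] == "spend_classification"]
    if not scored:
        return 0.0
    return sum(e["correct"] for e in scored) / len(scored)

def portfolio_decision(rate: float, target: float = 0.92) -> str:
    # Mirrors the "precision above 92% is a must-have" bar cited earlier.
    if rate >= target:
        return "accelerate"   # fund further fine-tuning and expansion
    if rate >= 0.75:
        return "iterate"      # close the gap before scaling
    return "fast-fail"        # pivot the investment elsewhere

events = [
    {"kind": "spend_classification", "correct": True},
    {"kind": "spend_classification", "correct": False},
    {"kind": "spend_classification", "correct": True},
]
rate = spend_classification_rate(events)
print(rate, portfolio_decision(rate))   # ~0.667 -> "fast-fail"
```

The thresholds would come from the value office in practice; the structure, measure first and decide second, is the point.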
What we see from our experience, not only with this client but across some of our other clients in industrial manufacturing or FS or energy, is that by setting up this metrics-driven valuation method upfront, and then leveraging the capabilities to transform these telemetries and signals into a measurement, what we call an AI compass, you give the business stakeholders, whether from a marketing office, a supply chain office, or a CFO office, a way to say, “Hey, this is what it is intended to do, this is the current measurement, and this is where it’s falling short,” which can help them pivot. And this will actually drive and democratize AI, all the agentic AI across the enterprise, and that really drives the value. This is going to be one of the critical parts that enterprises need to do. And that is where the six-part framework that I talked about comes in: applying that framework, like the value office, applying the ready-for-AI piece, applying the transformation fabric. Then the third part is the governance, which is going to be the enabler of this. Then running your operations not based on SLAs but based on experience-level agreements and business metrics, so you continually measure. Bringing all these six layers together is going to be very critical. That’s when we see that organizations are very successful, and some of our proven examples do exactly the same. This is going to be very critical for organizations from a measurement standpoint. Megan: Lots of tangible ways there that you can actually gauge value. And you touched on governance, and the impact of AI on governance is another huge talking point among senior leaders, and interactions with data are a core part of that. To what extent is having the right governance and security protocols an integral part of having AI-ready data? Bavesh, what scenarios do these systems need to handle? What does that mean for data models? Bavesh: This is becoming kind of the prerequisite to deploying a successful AI project. I think MIT produced a report that said 95% of these new AI projects fail to actually generate business value. A big reason for that is you can go and prototype and stand up and vibe code a pilot, but when you’re actually moving a workload into production, you realize that governance becomes so critical. So what do we really mean by governance? I think the first thing is getting your data in order, like I said, in open formats. Most companies realize now that the way they engage with their customers, the way they develop a drug, the way they approve a person for a credit limit increase, all of that enterprise information is actually their competitive advantage. Because you can go and use a frontier model like ChatGPT or Claude that everybody has access to. Really the big competitive differentiator for most organizations is their own data and then the third-party data that they can add to it. Getting your data into an open format so you can understand your data, and understanding your data is where governance comes in. Because when you think about governance, you really want to be able to find the data. If I’m an end user, or if I’m building an AI product, I want to know what data’s available to me. Can I trust the data? How fresh is the data? Is it coming from my analytics world, or do I need a real-time system like an OLTP system? I need to find the data.
I also need to make sure that access is controlled in a way that doesn’t cause any huge headaches for my organization. This becomes critical. If I have a whole bunch of PDFs that have purchase orders in them, who actually has access to all that data? In a clinical trial, for example, in healthcare, you really want to ensure that people across trials don’t have visibility into patient data. Maybe the model that was used was trained across trials. Who has access to all the data? Who has access to only parts of the data? You really have to think about this. We also look at the semantics of the data. Rajan brought this up right at the beginning, which is, what is the context? How do we think about the metrics and all the things that the business users know in their heads? We need to start codifying that somewhere. We have a product at Databricks called Unity Catalog where you can do the discovery, the access, and the business semantics. You also want to share the data. And in the world of agents, what we see is something called agent sprawl. In very short order, just like how SaaS applications became very prevalent within any organization because they really solved a business problem. You go to a line of business and you say, “I need to be able to do credit underwriting,” or “I am doing a prior authorization use case,” or pick thousands of use cases. There’s a SaaS app for that. Much like that, there’s going to be this world in which agents come into play, and most organizations are going to have lots of agents running all the time. But the reality of it is: how did that agent perform? What was the feedback loop from the user? What was the cost of running that workload, and is it going up dramatically? And if you don’t have a way to monitor, to understand, and to trace all the questions and answers and responses at scale, you’re going to find yourself in a big pickle. This actually could hurt your organization, because users will be very confused about what to do. When you look at governance, most organizations are recognizing that they have to start to understand what they have put in place from a systems, a process, and a tooling standpoint; focus on one use case; build out the governance for that; but build it in a way that’s going to allow you to become repeatable. AI is not going to be about one use case or two use cases. It’s whoever builds the flywheel of building many use cases in a safe, secure way, in a cost-effective way that’s driving a business outcome. If you don’t apply governance, it’s going to be very hard. At Databricks, we made a big bet on governance four or five years ago. This is one of the main reasons our company’s growing right now: because we can ensure that there’s quality data going into all of your AI. You can use things like Genie, and you can use things like Agent Bricks, and you can build apps using Lakebase. None of that really works without governance. It’s really what we call the brain inside of Databricks. Most of our customers spend a lot of time inside of Unity Catalog. And the great news is that AI is helping governance get set up much more quickly. We have a customer that three years ago was trying to get all of the data assets across all their domains, from the customer, from the loyalty app, from the e-commerce engine. They had to go and map out all those data assets. AI is now doing a lot of that work for them. The human in the loop is just checking things. We’ve made this much easier with AI.
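For the clinical-trial scenario, that kind of scoping can be expressed as a row filter in the catalog itself. Below is a hedged sketch using the Unity Catalog row-filter pattern; the function, table, and group names are hypothetical.

```python
# A hedged sketch of row-level access control for the clinical-trial example:
# researchers see only rows for trials whose access group they belong to.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The filter returns true only when the querying user is a member of the
# group named after the trial (e.g. a group called "trial_nct0042").
spark.sql("""
    CREATE OR REPLACE FUNCTION main.clinical.trial_filter(trial_group STRING)
    RETURN is_account_group_member(trial_group)
""")

# Attach the filter, so every query on the table is silently scoped per user.
spark.sql("""
    ALTER TABLE main.clinical.patient_results
    SET ROW FILTER main.clinical.trial_filter ON (trial_group)
""")
```

The appeal of this pattern is that the policy lives on the table, not in each dashboard, notebook, or agent that happens to query it.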
We always think about AI as a business use case and an outcome, which I think is going to be where the biggest value is. But at Databricks, we’re using AI inside of our platform to make it much easier to operate and to make it much easier to provide all the right things for your business. This is a super critical part of how we plan to innovate as AI comes to fruition in the market. Megan: And Rajan, Bavesh touched on this a little bit there, but does the integration of agentic AI add another layer of complexity here too? What new considerations around governance does that raise? Rajan: That’s a very, very valid question. I would like to use a metaphor to explain. We are getting into the world of self-driving cars, robotaxis, and other things. While that takes us to the autonomous world, there are still rules that you need to adhere to when you are driving on a road. The reason I’m bringing up this metaphor is that what is actually required, adhering to the rules across different topographies and different conditions depending on where you are driving, is going to be very, very critical. The complexity that agents are going to add is basically how you operate within those constraints. For example, as a UTO, I can do 10 things, but I cannot approve a discount of more than 70%, or I cannot give something as a bonus to someone, because that is part of the CFO’s remit, and an agent should be aware of that. That is one aspect: applying the constraints around it and making sure that the agents are adhering to the constraints. The second set of complexity is the tools an agent can access. As a business, in today’s world, when you define a process, certain processes need a certain set of tools to really action it. There are certain entitlements: only people entitled can do certain things, based on their identity and based on situational need, and you need to govern that. The third is information sharing. While MCP, UCP, and other aspects are great, one critical thing is what you need to share and what you don’t need to share. Those are the critical considerations. The last part is learning and relearning. Sometimes when you learn good things, you should keep them. Sometimes it is better for you to completely remove something and reevaluate it in a newer way, relearn it in a newer way. These are all the critical things that are required. On a similar line for agents, this is going to be paramount, because when you are operating agents for an enterprise, you need to know, learn, and adhere to certain compliance-related rules, business-related constraints, and then the entitlement identity and the sharing; whatever applies to a physical human will also start applying to an agent. That is where this is going to be very critical. This requires a new kind of operating system. That doesn’t really mean you need to go out and get a whole new thing. That is where I come back to how Bavesh touched upon Unity Catalog. The best part, which we see some of our clients implementing, is extending Unity Catalog and its capabilities: now you can catalog the tools, catalog the MCP servers, as well as catalog these agents, and then govern those agents based on the constraints, ground them based on the constraints. It’s going to be very, very critical. Doing it not later but starting it as part of your strategy, and enforcing this as one of the critical dimensions when you measure the value, is also going to be very critical for an organization.
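A minimal sketch of such a constraint layer: every tool call an agent attempts is checked against its entitlements before anything executes. The roles, tools, and limits here are illustrative assumptions, not a specific product feature.

```python
# A hedged sketch of agent guardrails: entitlement and constraint checks
# applied before any tool call runs, mirroring the discount example above.

ENTITLEMENTS = {
    "procurement_agent": {"tools": {"create_po", "classify_spend"},
                          "max_discount": 0.30},
    "sales_agent":       {"tools": {"quote", "apply_discount"},
                          "max_discount": 0.70},  # above this, escalate to CFO
}

class ConstraintViolation(Exception):
    pass

def dispatch(tool: str, **kwargs):
    # Stand-in for the real tool registry; just echoes the call here.
    return {"tool": tool, "args": kwargs, "status": "executed"}

def guarded_call(agent_role: str, tool: str, **kwargs):
    """Run a tool on the agent's behalf only if its entitlements allow it."""
    policy = ENTITLEMENTS[agent_role]
    if tool not in policy["tools"]:
        raise ConstraintViolation(f"{agent_role} is not entitled to use {tool}")
    if kwargs.get("discount", 0.0) > policy["max_discount"]:
        raise ConstraintViolation("discount exceeds delegated authority")
    return dispatch(tool, **kwargs)

print(guarded_call("sales_agent", "apply_discount", discount=0.25))
# guarded_call("sales_agent", "apply_discount", discount=0.80)  # -> raises
```

The same checks a human employee faces (role, limit, tool entitlement) are applied mechanically, which is the substance of Padmanabhan’s driving-rules metaphor.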
It is like making sure that you are not only building the autonomous car, but also making sure that the car drives as per the rules of the road, not going rogue. Megan: Lots to think about there. Fascinating stuff. Thank you. Just to close, with a quick look ahead: we all know the pace of development in AI and agentic AI is so rapid. For those organizations that can prioritize AI-ready data now, what are the most compelling use cases for the technology that you can see coming to the fore in the next few years, Bavesh? Bavesh: I think the excitement level is at its peak. We’ve seen so much investment in AI. I think the reason why there’s a lot of excitement is because you can look at the early adopters and you can see massive amounts of gains that these organizations are seeing. The one thing I will tell you is that there are really three categories. The companies that I think are doing well, a lot of them started out with just copilots and things that are just giving people quick answers. Think about it as making an individual productive. That is the first phase. And the ROI on that has been somewhat questionable. With something like Genie, it becomes a lot more effective because it’s actually on your data, and your data is contextualized in your organization. I think that’s one area where we’re going to see a lot of innovation. We’ll see most organizations just start to get the right information to the right person at the right time. And that has been a dream for a lot of organizations. The second one is around automating entire business processes. We see functions within marketing, like I described earlier, or whether you’re going through a process of rebates for a company. There’s a whole bunch of steps involved where you have to go into three different apps and export data from Excel and put it over here. There are thousands of people doing very laborious, monotonous, repeatable work. These agents are really going to deliver an immense amount of productivity for the business process, and they’re just going to make things faster. Processes that took weeks are now going to take days. Processes that took days are going to take hours and minutes. One trend we’ve seen is that the AI world is so dynamic. In a world where you’ve got lots of different players, you want to think about first principles: what are the foundations? You want to think about owning your data, making sure you have a handle on your structured and unstructured data. You want to put governance on that. But the other thing that you want to make sure you don’t do is lock yourself in. Today, if you think about it, Gemini is really good with multimodal. Anytime you have pictures or videos or things like that, Gemini is just super good. Whereas if you’re writing code, Claude is really good. If you’re doing certain types of questions around introspection, ChatGPT is really good. What you really want is an open data platform where you can build your AI in the open on multiple clouds, which is what we built at Databricks. I think that’ll help with the second piece, which is that you can pick and choose, because when you build these agents, you don’t have to be locked into just one. You should be picking the best quality and the best security and the best ROI and cost for a particular workload. One workload may use multiple of these models, and they might even be specific industry models. You need a system and a platform that can really handle this complexity.
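One way to read that “pick and choose” point is as a routing problem: each task type goes to a preferred model, with a fallback. The sketch below is a deliberately simple illustration; the routing table and model names are assumptions, not recommendations from the speakers.

```python
# A hedged sketch of per-workload model routing: map each task type to an
# ordered preference list and take the first model currently available.

ROUTES = {
    "multimodal": ["gemini", "gpt"],     # image- and video-heavy tasks
    "coding":     ["claude", "gemini"],  # code generation and review
    "qa":         ["gpt", "claude"],     # general question answering
}

def pick_model(task_type: str, available: set[str]) -> str:
    """Return the first preferred model that is currently available."""
    for model in ROUTES.get(task_type, []):
        if model in available:
            return model
    raise LookupError(f"no model available for task type {task_type!r}")

print(pick_model("coding", {"gemini", "gpt"}))  # claude unavailable -> "gemini"
```

A production router would also weigh cost, latency, and security posture per workload, which is exactly the lock-in risk the open-platform argument is about.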
I think the third category is business reimagination. A lot of people talk about this, where, yes, you’re going to go and take the data and make it available and give everybody access to the data. You’re going to make existing processes much more efficient. But the third thing is there are going to be brand new things that come out of it. We have a very large customer who’s a bank, and they have built a product that they didn’t have a year ago. Essentially, it’s machine learning and LLMs helping treasury departments forecast what their balances are going to be, because they have more data at their fingertips. Historically, it took a long time for the data to get to the bankers. They were not able to really predict what a balance would be for a treasury department. Think about this: a big enterprise company has now built a brand new data and AI solution that they’re monetizing, and it’s generated hundreds of millions of dollars in the first six months. We’re seeing brand new lines of business open up, and that is going to be really exciting, because that’s where a lot of the transformation is going to happen. There’s going to be productivity. There’s going to be automation at the business process level. Then there are going to be these big new things that we didn’t even imagine that people are going to come up with. We are actually seeing the early signals of this in every industry. We see retailers getting data at the hourly and minute level so that they can integrate much more closely with their supply chains. We’re seeing much more targeted customer-360 use cases where, as retailers or as consumers, we get annoyed by ads, but now it’s so contextualized, and you have so much information about what really matters to your target customer, that you’re giving them value-added information, and that’s engaging them more. There’s a whole bunch of innovation happening with agentic commerce and things like concierge and virtualized shopping. You look at any industry, there are definitely new ways of doing things. This is what’s really exciting about AI, but you really have to not get too far ahead without thinking about the foundational things. You mentioned this earlier: an open data platform, making sure you have governance set up correctly, making sure you think about your historical analytical data and your application data that’s going to be real time. Having a good foundation to build on is going to allow you to scale and move more quickly and compete in this new world. We’re very excited about what we’re seeing with our customers and what they’re building. And honestly, that’s the best part about being in my role at Databricks: our teams really go to customers and say, “What are the outcomes you’re driving?” The early signals have been super positive. We’re seeing that for companies that get serious about all the foundational elements and are really methodical about building outcome-based AI solutions, that 5% of projects that are successful, those are wildly successful. That’s why we’re growing as a company, because once you get a good project under your belt, that gets visibility with executives. The last thing is that historically, a lot of tech has been in the IT department. You get the business designing how they want to go to market and how they’re going to compete and what products and services they want to offer. IT was the enabler and in many cases became the cost center, and was relegated to rationalizing the portfolio of spend and tools.
But now we’re seeing the business take the lead with AI, where they want to understand, they want to know, “Hey, what can I be doing now that was not possible before?” We see this big opportunity just with AI literacy with business users, where they’re very eager to understand how they should be thinking about AI. What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint? We’re spending a lot of time with executives helping them along this journey. We definitely see a lot of amazing opportunities ahead. Megan: Yeah. So much innovation going on. And finally, how about yourself, Rajan? What on the horizon is exciting you the most? Rajan: I think Bavesh covered quite a bit, but the way I’m seeing it, today we are predominantly talking about a labor shift. That means unlocking the potential of humans, or shifting the current way of working to a new way of working; it is predominantly an efficiency game. I think that is what we are seeing now, and the majority of the successful use cases are around the labor shift. But what is pretty promising is the second kind of shift, the business shift. What we are seeing as a new way of thinking, or the new thing that is coming up, is moving from a system of execution or a system of engagement to a system of action. That is the new way we see the road ahead. That is where some of the points that I touched upon come in. The business wants to have access to it, but how does it really make a real difference? One classical example that I can clearly see, which we have implemented for one of our customers primarily in the manufacturing space, is around the lifecycle of creating a product and then publishing the content around the product in line with their different B2B marketplaces. In some of those cases, you are not just talking about recommending and creating, but actually reimagining the process: what used to involve five different departments can now be done much faster, while at the same time giving you that veracity in terms of the decisioning you are able to do and how you are able to action it. That is the second thing we are seeing. The third part, I think, is the way commerce has evolved. It goes beyond agentic commerce: what we are seeing is agent-to-agent commerce, agent-to-human commerce, agent-to-agent payments, agent-to-human payments, and then content monetization. These are the new set of business opportunities, like building new agentic business products. It could be for fintechs, it could be on the consumer side, or it could be on the industrial technology side. These are going to be what I’m calling the economy shift, alongside the labor shift and the business shift, because that is going to bring a new set of systems of action, moving them from the systems of execution, or the typical SaaS application with bolt-on agents, the so-called agentic application. That is going to be a major transformation, and we are underway. But on the technology side, what is very critical for enterprises is that in today’s world you have analytical data, operational data, and then there is intelligence; there are different facets of it. I think both this analytical core and operational core are going to really come into one.
That’s why we are so gung-ho about the releases of Lakebase and other things, because that is the way the future is going to go. When enterprises are really thinking about being ready for AI from a technology standpoint, they should really think about how to create this unified core for the newer world. The second part is that people have to reimagine integration. Today, if I take SAP as an example, you have hundreds of edge applications and business applications that need to integrate with one another. Typically, we create a sprawl of these integrations. As one technology use case, people can ask, “How do I really create a domain-based service mesh on top of this unified core, and how do I make it more agentic-integration ready?” That is one of the technology use cases that we are advising clients on. I think now, with a lot of the new areas that are coming, around SAP BDC with Databricks and this zero-based integration, that makes them rethink the way they need to integrate, the way they need to do things. The third part, from a technology investment standpoint, is: don’t just think about now. Today you have people, FTEs, in your organizations; agents are going to be your new FTEs. That means one of the new technology paradigms is that you will end up creating these co-intellects within your organization. That means you need to invest in what we call this agentic grid, where it becomes like a unified agentic fabric in which all the agents can really collaborate and integrate, building on top of the same unified operational and analytical core, with unified agentic integration on top of it, which is going to create a new set of experiences, agentic experiences, rather than the traditional experiences or conversational experiences. Then the new collaboration methods are going to be some of the critical aspects that people have to really think about from a technology standpoint. To start with, I would say you start looking at it from a data standpoint: building that unified core, building that unified integration, and building that collaboration layer for both sharing and collaborating with intelligence, as well as the agentic collaboration, all governed under a single umbrella. That is going to be the one critical use case which no one will feel bad about, and they are going to really get 100x of their investment out of it. Megan: Certainly no shortage of exciting developments on the horizon. Thank you both so much for that conversation. That was Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys, whom I spoke with from Brighton, England. That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts, and if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Three reasons why DeepSeek’s new model matters

On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that handles large amounts of text more efficiently. While the model remains open source, its performance matches leading closed-source rivals from Anthropic, OpenAI, and Google. It is also DeepSeek’s first release optimized for Huawei’s Ascend chips—a key test of China’s dependence on Nvidia. Here are three ways V4 could shake up AI.
—Caiwei Chen

The rise of world models

AI systems have already gained impressive mastery over the digital world, but the physical world remains humanity’s domain. As it turns out, building an AI that composes novels or codes apps is far easier than developing one that can fold laundry or navigate city streets. To bridge this gap, many researchers believe you need something called a world model.
Proponents like Stanford professor Fei-Fei Li and AMI Labs founder Yann LeCun argue these models can overcome the well-known limitations of LLMs—and realize AI’s promise for robotics. Find out why they’ve brought world models to the forefront of the field.

—Grace Huckins

World models are on our list of the 10 Things That Matter in AI Right Now, our essential guide to what’s really worth your attention in the field. Subscribers can watch an exclusive roundtable unveiling the technologies and trends on the list, with analysis from MIT Technology Review’s AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China has blocked Meta’s $2 billion acquisition of AI startup Manus
Regulators cited national security grounds. (WSJ $)
+ Beijing called the deal a “conspiratorial” attempt to hollow out its tech base. (FT $)
+ The country is tightening its grip on AI firms that try to leave. (TechCrunch)
+ The decision escalates China’s AI rivalry with the US. (Bloomberg $)
+ But there will be no winners in their competition. (MIT Technology Review)

2 Google is investing up to $40 billion in Anthropic
In a deal valuing the AI firm at $350 billion. (CNBC)
+ The funding will support the firm’s growing computing needs. (TechCrunch)
+ Anthropic and OpenAI are fighting for compute capacity. (Axios)

3 President Trump just fired the entire National Science Board
The NSF has played a crucial role in developing technology. (The Verge)
+ The move heightens fears over political interference in US science. (Nature)

4 Conspiracy theories about the Washington shooting are proliferating online
Over 300,000 posts appeared on X using the keyword “staged.” (NYT $)
+ The theories are also swirling on Bluesky and Instagram. (Wired)

5 The AI compute crunch is starting to hit the broader economy
It’s affecting jobs, gadgets, and electricity prices. (404 Media)
+ The AI compute explosion is the tech story of our time. (MIT Technology Review)

6 Elon Musk says a new banking tool brings X close to a “super app”
He’s pledged to launch the tool this month. (Bloomberg)

7 AI optimism is surging across Asia while US sentiment cools
The divide could shape where adoption happens fastest. (Rest of World)

8 Apple is tying its new CEO’s ascent to its first foldable iPhone
It wants to build the buzz around John Ternus. (Gizmodo)

9 Twelve firms are developing the Golden Dome’s space-based interceptors
They’ve won contracts worth up to $3.2 billion. (Ars Technica)

10 NASA has shared promising results from Artemis II
The spacecraft and rocket fared well. (Engadget)

Quote of the day
“Getting out the truth and establishing facts and reliable information takes time. But our audiences really don’t have that kind of patience.”

—Amanda Crawford, associate professor at the University of Connecticut, tells the NYT why conspiracy theories are gaining traction online.

One More Thing

Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change

Kenya’s Great Rift Valley is home to five geothermal power stations, which harness clouds of steam to generate about a quarter of the country’s electricity. But some of the energy escapes into the atmosphere, while even more remains underground for lack of demand. That’s what brought Octavia Carbon here. Last year, the startup began harnessing some of that excess energy to remove CO2 from the air. The company says the method is efficient, affordable, and—crucially—scalable. But the project also faces fierce opposition.
Announcing our partnership with the Republic of Korea
Bringing frontier AI models to Korea’s scientific communityKorea’s Ministry of Science and ICT (MSIT) has recently launched the K-Moonshot Missions, an initiative aimed at unlocking step-change improvements in research productivity and addressing national grand challenges.Helping make this vision a reality, Google will establish an AI Campus in the Republic of Korea — an AI-focused facility within its Seoul offices.The AI Campus will be a hub for Korean academia and research institutions to collaborate with Google’s world-leading AI experts to accelerate scientific breakthroughs through research and access to our most advanced AI for Science models, programs and events. We will begin by exploring collaborations with research-oriented institutions including Seoul National University (SNU), Korea Advanced Institute of Science and Technology (KAIST) and the Ministry’s three AI Bio Innovation Hubs, leveraging our models in fields such as life sciences, energy, weather and climate, for example:AlphaEvolve – a Gemini-powered coding agent for designing and optimising advanced algorithms. This has shown beneficial impact across many areas in computing and math, and we are seeing similar examples emerge in drug discovery and energy.AlphaGenome – an AI model to help scientists better understand how mutations in human DNA sequences impact a wide range of gene functions, speeding up research on genome biology and helping to improve disease understanding.AlphaFold – already used by more than 85,000 researchers in Korea, we will explore accelerating AI-enabled predictions for proteins, DNA and RNA.AI co-scientist – a multi-agent AI system that acts as a virtual scientific collaborator to help researchers brainstorm and verify hypotheses. This is showing promising benefits in a range of biomedical applications and we look forward to collaborating through joint research exploration and technical advisory to support the Ministry’s AI Scientist Project on ways to best integrate the system.WeatherNext – we will explore collaborations to support Korea’s energy and sustainability goals in predicting and analyzing the impacts of extreme weather events and optimizing renewable energy on grids.Cultivating AI talent and partnering on safetyRealizing the full potential of AI requires investing in people and building responsibly. To support the next generation of Korean AI talent, we are opening doors to forge connections with Google DeepMind, including exploring internship opportunities for Korean students. This builds on Google’s broader commitment to the region, including the recent milestone of providing 50,000 AI Essentials scholarships to help job seekers gain foundational skills.Finally, following our Frontier AI Safety Commitments made at the AI Seoul Summit, we will collaborate with the Korean AI Safety Institute (AISI) on research and best practices.Building on the AlphaGo legacyAs we look back on the legacy of AlphaGo, we are incredibly excited for what lies ahead. We look forward to collaborating with the government as they invest in important local AI infrastructure, such as a new National AI for Science Center (NAIS), due to open in May.By combining Google DeepMind’s frontier AI models with the brilliant scientific minds in Korea, we believe we can unlock scientific discoveries that will benefit society for generations to come.

Data Center World 2026: Innovation Spotlight
Belden + OptiCool: Modular Cooling for the AI Middle Market

At Data Center World 2026, company representatives from Belden and OptiCool described a joint push into integrated rack-level infrastructure—pairing connectivity, power, and modular cooling into a single deployable system aimed squarely at enterprise and mid-market colocation providers. The partnership reflects a shift already underway inside Belden itself. Long known as a manufacturer of wire, cable, and connectivity products, the company said it has spent the last several years evolving into a solutions provider—leveraging a broader portfolio that spans industrial networking, automation, and control systems. That repositioning is now extending into AI infrastructure.

From Components to Fully Integrated Systems

Rather than selling discrete products into bid cycles, Belden is now packaging racks, PDUs, cable management, and cooling into a unified offering—delivered as a manufacturer-backed system rather than a third-party integration. “We can bring a full solution to the table now,” a company representative said, emphasizing that the company is “standing behind the solution as a manufacturer, not as a system integrator.” The cooling layer comes via OptiCool, whose rear-door heat exchanger (RDHx) technology is designed to scale alongside uncertain AI workloads.

Two-Phase Rear Door Cooling at Rack Scale

OptiCool’s approach centers on two-phase cooling applied at the rear door, combining the non-invasive characteristics of RDHx with the efficiency gains typically associated with direct-to-chip liquid cooling. According to company representatives, the system:

Supports up to 120 kW per rack (with 60 kW demonstrated on the show floor)
Delivers up to 10x cooling capacity compared to traditional approaches
Operates at roughly one-third the energy consumption of comparable single-phase systems

Instead of injecting cold air, the system extracts heat using refrigerant as the heat sink, reducing demand on CRAC units and broader facility cooling infrastructure. (A rough back-of-the-envelope sketch of what these figures imply follows this article.)

Designing for Uncertainty: Modular, Swappable Capacity

The defining feature—and
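To get a rough feel for what those numbers imply at facility scale, here is a back-of-the-envelope sketch in Python. Only the 120 kW per-rack figure and the “one-third the energy” claim come from the article; the rack count and the 15% baseline cooling-overhead fraction are assumptions invented for illustration, not vendor figures.

```python
# Rough arithmetic on the rear-door cooling claims quoted above. The 15%
# baseline cooling-overhead fraction and the 100-rack count are assumed
# placeholders, not figures from Belden or OptiCool.

racks = 100
it_load_kw_per_rack = 120          # claimed per-rack capacity
baseline_cooling_fraction = 0.15   # assumed: cooling energy vs. IT load
two_phase_factor = 1 / 3           # "roughly one-third the energy"

it_load_kw = racks * it_load_kw_per_rack
baseline_cooling_kw = it_load_kw * baseline_cooling_fraction
two_phase_cooling_kw = baseline_cooling_kw * two_phase_factor

print(f"IT load:              {it_load_kw:,} kW")
print(f"single-phase cooling: {baseline_cooling_kw:,.0f} kW")
print(f"two-phase cooling:    {two_phase_cooling_kw:,.0f} kW "
      f"(saves {baseline_cooling_kw - two_phase_cooling_kw:,.0f} kW)")
```

Under these toy assumptions, a 100-rack hall carrying 12 MW of IT load would cut cooling draw from roughly 1.8 MW to 0.6 MW; the real savings depend entirely on the baseline fraction, which varies widely by facility.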

The Trillion-Dollar AIDC Boom Gets Real: Omdia Maps the Path From Megaclusters to Microgrids
The AI data center buildout is getting bigger, denser, and more electrically complex than even many bullish observers expected. That was the core message from Omdia’s Data Center World analyst summit, where Senior Director Vlad Galabov and Practice Lead Shen Wang laid out a view of the market that has grown more expansive in just the past year. What had been a large-scale infrastructure story is now, in Omdia’s telling, something closer to a full-stack industrial transition: hyperscalers are still leading, but enterprises, second-tier cloud providers, and new AI use cases are beginning to add demand on top of demand. Omdia’s updated forecast reflects that shift. Galabov said the firm has now raised its 2030 projection for data center investment beyond the $1.6 trillion figure it showed a year ago, arguing that surging AI usage, expanding buyer classes, and the emergence of new power infrastructure categories have all forced a rethink. “One of the reasons why we raised it is that people keep using more AI,” Galabov said. “And that just means more money, because we need to buy more GPUs to run the AI.” That is the simple version. The more consequential one is that AI is no longer behaving like a contained technology cycle. It is spilling outward into adjacent infrastructure markets, including batteries, gas-fired onsite generation, and high-voltage DC power architectures that until recently sat well outside the mainstream data center conversation. A Market Moving Faster Than the Forecasts Galabov opened by revisiting the predictions Omdia made last year for 2030. On several fronts, he said, the market is already validating them faster than expected. AI applications are becoming commonplace. AI has become the dominant driver of data center investment. Self-generation is no longer a fringe strategy. Even some of the rack-scale architecture concepts that once looked

AI’s Execution Era: Aligned and Netrality on Power, Speed, and the New Data Center Reality
At Data Center World 2026, the industry didn’t need convincing that something fundamental has shifted. “This feels different,” said Bill Kleyman as he opened a keynote fireside with Phill Lawson-Shanks and Amber Caramella. “In the past 24 months, we’ve seen more evolution… than in the two decades before.” What followed was less a forecast than a field report from the front lines of the AI infrastructure buildout—where demand is immediate, power is decisive, and execution is everything. A Different Kind of Growth Cycle For Caramella, the shift starts with scale—and speed. “What feels fundamentally different is just the sheer pace and breadth of the demand combined with a real shift in architecture,” she said. Vacancy rates have collapsed even as capacity expands. AI workloads are not just additive—they are redefining absorption curves across the market. But the deeper change is behavioral. “Over 75% of people are using AI in their day-to-day business… and now the conversation is shifting to agentic AI,” Caramella noted. That shift—from tools to delegated workflows—points to a second wave of infrastructure demand that has not yet fully materialized. Lawson-Shanks framed the transformation in more structural terms. The industry, he said, has always followed a predictable chain: workload → software → hardware → facility → location. That chain has broken. “We had a very predictable industry… prior to Covid. And Covid changed everything,” he said, describing how hyperscale demand compressed deployment cycles overnight. What followed was a surge that utilities—and supply chains—were not prepared to meet. From Capacity to Constraint: Power Becomes Strategy If AI has a gating factor, it is no longer compute. It is power. “Before it used to be an operational convenience,” Caramella said. “Now it’s a strategic advantage—or constraint if you don’t have it.” That shift is reshaping executive decision-making. Power is no

Oil prices plunge following full reopening of the Strait of Hormuz to commercial vessels
Oil prices plunged on Apr. 17, as geopolitical tensions in the Middle East showed signs of easing, following the full reopening of the Strait of Hormuz to commercial vessels. Global crude markets reacted sharply after Iran confirmed that the Strait of Hormuz is now “completely open” to commercial shipping during an ongoing ceasefire tied to regional conflict negotiations. The announcement marked a major turning point after weeks of disruption that had severely constrained global oil flows. Brent crude fell by more than 10%, dropping to around $88–89/bbl, while US West Texas Intermediate (WTI) declined to the low $80s—both benchmarks hitting their lowest levels in over a month. The sell-off reflects a rapid unwinding of the geopolitical risk premium that had built up during the conflict. The reopening follows a fragile, 10-day ceasefire involving Israel and Lebanon, alongside tentative progress in US–Iran negotiations. While the waterway is now open, the US has maintained a naval blockade on Iranian ports, signaling that broader geopolitical risks have not fully dissipated. The return of tanker traffic through the Gulf could gradually restore millions of barrels per day to global markets, easing the tight conditions that had driven recent price volatility. However, some uncertainty remains over how quickly shipping activity will normalize and whether the ceasefire will hold. Despite the sharp price decline, the oil market remains structurally fragile. Weeks of disruption have depleted inventories and altered trade flows, and it may take time for supply chains to fully recover. Additionally, any breakdown in ceasefire talks could quickly reverse the current trend. Beyond energy markets, the development rippled across global financial systems. Equity markets surged, with major US indices posting strong gains as lower oil

EIA: US crude inventories up 1.9 million bbl
US crude oil inventories for the week ended Apr. 17, excluding the Strategic Petroleum Reserve, increased by 1.9 million bbl from the previous week, according to data from the US Energy Information Administration (EIA). At 465.7 million bbl, US crude oil inventories are about 3% above the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories decreased by 4.6 million bbl from last week and are about 0.5% below the 5-year average for this time of year. Finished gasoline inventories increased while blending components inventories decreased last week. Distillate fuel inventories decreased by 3.4 million bbl last week and are about 8% below the 5-year average for this time of year. Propane-propylene inventories increased by 2.1 million bbl from last week and are 69% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 16.0 million b/d for the week, which was 55,000 b/d less than the previous week’s average. Refineries operated at 89.1% of capacity. Gasoline production increased, averaging 10.1 million b/d. Distillate fuel production increased, averaging 5.0 million b/d. US crude oil imports averaged 6.1 million b/d, up 787,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 6.0 million b/d, 0.4% less than the same 4-week period last year. Total motor gasoline imports averaged 587,000 b/d. Distillate fuel imports averaged 190,000 b/d.
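For readers who like to check the arithmetic, the short sketch below recovers the prior week’s crude level and the 5-year average implied by the figures quoted above. It is a reader’s back-of-the-envelope calculation, not EIA methodology.

```python
# Back-of-the-envelope checks on the EIA weekly figures quoted above.

crude_now = 465.7        # million bbl, week ended Apr. 17 (excluding SPR)
weekly_build = 1.9       # million bbl, reported build

crude_prior_week = crude_now - weekly_build
print(f"prior week: {crude_prior_week:.1f} million bbl")            # 463.8

# "About 3% above the 5-year average" implies an average near:
implied_5yr_avg = crude_now / 1.03
print(f"implied 5-year average: {implied_5yr_avg:.0f} million bbl")  # ~452
```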

Strike Energy begins drilling Walyering West-1 in Western Australia
Strike Energy Ltd. has started drilling operations at Walyering West-1 (WAW-1) in the Perth basin of Western Australia in production license (PL) L23. Evaluation, including wireline logging and potential production testing, will be undertaken as part of the program, the company said in a release last week. WAW-1, which was spudded Apr. 15, lies about 3 km from existing production infrastructure at the Walyering gas plant. Drilling, with the Ensign 970 rig, is targeting sandstone reservoirs within the Cattamarra Coal Measures (CCM) formation to evaluate gas and condensate potential within the proven Walyering hydrocarbon system, the operator said. Drilling operations are expected to take about 20 days, with preliminary results expected in early May. WAW-1 is targeting Jurassic-aged conventional sandstone reservoirs within a fault-bound, 4-way dip closure in the license. Strike Energy is operator with 100% equity interest.

Sable Offshore brings Platform Heritage online amid ongoing pipeline dispute
Sable Offshore Corp., Houston, has started production from Platform Heritage in federal waters offshore California. Together with output from Platform Harmony, the company is producing an average 750 gross b/d of oil per well from the 40 wells currently online across the two platforms. Once all 74 production wells are brought online, Sable expects average per‑well production to be about 700 gross b/d. The company’s third platform, Hondo, is expected to come online in June and, once fully ramped up, is forecast to produce about 10,000 gross b/d.

Court ruling constrains pipeline restart, legal challenges continue

Platforms Harmony and Heritage are part of Sable Offshore’s Santa Ynez Unit (SYU). Platform Heritage was recently cleared by the US Bureau of Safety and Environmental Enforcement (BSEE) to resume operations following completion of a final pre‑restart inspection and a directive from the US Secretary of Energy requiring Sable Offshore to restore SYU operations under authorities delegated through the Defense Production Act and certain executive orders. A California state court, however, has refused to lift an injunction blocking restart of Sable Offshore’s onshore pipeline system, ruling that the federal directive does not override existing court orders or state permitting requirements. The decision comes amid broader litigation between Sable and California regulators. The operator is pursuing legal action related to what it characterizes as state and county regulatory overreach and is seeking damages of at least $347 million from the California Coastal Commission and more than $100 million from Santa Barbara County over alleged unlawful withholding of permit transfers. Separately, the company detailed its capital spending and financing plans for the remainder of the year. From April 2026 through yearend, the operator expects to spend about $180 million on facility upgrades, maintenance capital, and production optimization, and plans to refinance its debt during the second

2026 US Gulf Coast Oil & Gas Infrastructure map
EndeavorB2B’s MAPSearch and the Offshore editorial team produced the 2026 US Gulf Coast Oil & Gas Infrastructure map for readers across the Offshore and Oil & Gas Journal brands. It serves as a useful resource for industry analysis, planning, and understanding of the Gulf Coast’s role in energy production and transportation. All of the following are included and can be identified in the legends in the bottom margin:

Active, under-construction, and idle Gulf Coast natural gas and crude oil pipelines, labeled by operator and diameter.
Active offshore oil and gas platform locations.
Active, under-construction, and proposed LNG plants, numbered by name and operator.
Active and proposed CCS project injection sites, numbered by name and location and labeled by CO2 capacity in metric tons/year.
Active CO2 pipelines, labeled by operator and diameter.
Major fabrication yards, numbered by name and location.
Major port names and locations.
Major heliports, numbered by name and location.

In addition, there is an inset for the LNG terminals on the East Coast, as well as five for major hubs along the Gulf in Corpus Christi, Houston, Lake Charles, Morgan City/Amelia, Houma, and Donaldsonville/St. James.

Dangote advances petrochemical expansion at Lekki refinery
Dangote Petroleum Refinery and Petrochemicals FZE has let a contract to Honeywell International Inc. to supply process technologies and catalysts for a petrochemical expansion at the operator’s 650,000-b/d integrated refining complex in the Lekki Free Trade Zone near Lagos, Nigeria, supporting increased production of propylene and linear alkylbenzene (LAB). Under the contract, Honeywell UOP LLC will license its proprietary Oleflex technology to enable the refinery’s production of an additional 750,000 tonnes/year (tpy) of propylene, alongside a suite of process technologies and catalysts to support output of 400,000 tpy of LAB, Honeywell said on Apr. 20. The capacity additions support Dangote’s goal of meeting growing regional demand for plastics, packaging materials, and detergent feedstocks while reducing reliance on imports, the service provider said. According to Honeywell, the integration of additional petrochemical capacity aims to improve overall refinery economics at the site by increasing product flexibility and value capture from crude processing. The petrochemical expansion forms part of a broader scale-up of Dangote’s refining and downstream operations, with the operator also progressing plans to increase crude processing capacity at the Lekki refinery from 650,000 b/d to 1.4 million b/d by 2028, a move that would position the complex as the world’s largest single-site refinery. This latest contract for the Lekki refinery follows Dangote’s separate November 2025 award to Honeywell under which the service provider will deliver advanced process controls, catalysts, and digital optimization technologies to support higher throughput and reliability across both existing and new units as part of the complex’s crude processing expansion. Dangote said the expansion supports Nigeria’s broader objective of strengthening domestic refining and petrochemical supply chains. Increased local production of petrochemicals and refined products is expected to reduce import dependence and improve foreign exchange balances, while supporting downstream manufacturing sectors. The announcement builds on a long-term collaboration between the

National Grid, Con Edison urge FERC to adopt gas pipeline reliability requirements
The Federal Energy Regulatory Commission should adopt reliability-related requirements for gas pipeline operators to ensure fuel supplies during cold weather, according to filings from National Grid USA and from Consolidated Edison Co. of New York and its affiliate Orange and Rockland Utilities. In the wake of power outages in the Southeast and the near collapse of New York City’s gas system during Winter Storm Elliott in December 2022, voluntary efforts to bolster gas pipeline reliability are inadequate, the utilities said in two separate filings on Friday at FERC. The filings were in response to a gas-electric coordination meeting held in November by the Federal-State Current Issues Collaborative between FERC and the National Association of Regulatory Utility Commissioners. National Grid called for FERC to use its authority under the Natural Gas Act to require pipeline reliability reporting, coupled with enforcement mechanisms, and pipeline tariff reforms. “Such data reporting would enable the commission to gain a clearer picture into pipeline reliability and identify any problematic trends in the quality of pipeline service,” National Grid said. “At that point, the commission could consider using its ratemaking, audit, and civil penalty authority preemptively to address such identified concerns before they result in service curtailments.” On pipeline tariff reforms, FERC should develop tougher provisions for force majeure events — an unforeseen occurrence that prevents a contract from being fulfilled — reservation charge crediting, operational flow orders, scheduling and confirmation enhancements, improved real-time coordination, and limits on changes to nomination rankings, National Grid said. FERC should support efforts in New England and New York to create financial incentives for gas-fired generators to enter into winter contracts for imported liquefied natural gas supplies, or other long-term firm contracts with suppliers and pipelines, National Grid said. Con Edison and O&R said they were encouraged by recent efforts such as North American Energy Standard

US BOEM Seeks Feedback on Potential Wind Leasing Offshore Guam
The United States Bureau of Ocean Energy Management (BOEM) on Monday issued a Call for Information and Nominations to help it decide on potential leasing areas for wind energy development offshore Guam. The call concerns a contiguous area around the island that comprises about 2.1 million acres. The area’s water depths range from 350 meters (1,148.29 feet) to 2,200 meters (7,217.85 feet), according to a statement on BOEM’s website. Closing April 7, the comment period seeks “relevant information on site conditions, marine resources, and ocean uses near or within the call area”, the BOEM said. “Concurrently, wind energy companies can nominate specific areas they would like to see offered for leasing. “During the call comment period, BOEM will engage with Indigenous Peoples, stakeholder organizations, ocean users, federal agencies, the government of Guam, and other parties to identify conflicts early in the process as BOEM seeks to identify areas where offshore wind development would have the least impact”. The next step would be the identification of specific WEAs, or wind energy areas, in the larger call area. BOEM would then conduct environmental reviews of the WEAs in consultation with different stakeholders. “After completing its environmental reviews and consultations, BOEM may propose one or more competitive lease sales for areas within the WEAs”, the Department of the Interior (DOI) sub-agency said. BOEM Director Elizabeth Klein said, “Responsible offshore wind development off Guam’s coast offers a vital opportunity to expand clean energy, cut carbon emissions, and reduce energy costs for Guam residents”. Late last year the DOI announced the approval of the 2.4-gigawatt (GW) SouthCoast Wind Project, raising the total capacity of federally approved offshore wind power projects to over 19 GW. The project, owned by a joint venture between EDP Renewables and ENGIE, received a positive Record of Decision, the DOI said in

Biden Bars Offshore Oil Drilling in USA Atlantic and Pacific
President Joe Biden is indefinitely blocking offshore oil and gas development in more than 625 million acres of US coastal waters, warning that drilling there is simply “not worth the risks” and “unnecessary” to meet the nation’s energy needs. Biden’s move is enshrined in a pair of presidential memoranda being issued Monday, burnishing his legacy on conservation and fighting climate change just two weeks before President-elect Donald Trump takes office. Yet unlike other actions Biden has taken to constrain fossil fuel development, this one could be harder for Trump to unwind, since it’s rooted in a 72-year-old provision of federal law that empowers presidents to withdraw US waters from oil and gas leasing without explicitly authorizing revocations. Biden is ruling out future oil and gas leasing along the US East and West Coasts, the eastern Gulf of Mexico and a sliver of the Northern Bering Sea, an area teeming with seabirds, marine mammals, fish and other wildlife that indigenous people have depended on for millennia. The action doesn’t affect energy development under existing offshore leases, and it won’t prevent the sale of more drilling rights in Alaska’s gas-rich Cook Inlet or the central and western Gulf of Mexico, which together provide about 14% of US oil and gas production. The president cast the move as achieving a careful balance between conservation and energy security. “It is clear to me that the relatively minimal fossil fuel potential in the areas I am withdrawing do not justify the environmental, public health and economic risks that would come from new leasing and drilling,” Biden said. “We do not need to choose between protecting the environment and growing our economy, or between keeping our ocean healthy, our coastlines resilient and the food they produce secure — and keeping energy prices low.” Some of the areas Biden is protecting

Biden Admin Finalizes Hydrogen Tax Credit Favoring Cleaner Production
The Biden administration has finalized rules for a tax incentive promoting hydrogen production using renewable power, with lower credits for processes using abated natural gas. The Clean Hydrogen Production Credit is based on carbon intensity, which must not exceed four kilograms of carbon dioxide equivalent per kilogram of hydrogen produced. Qualified facilities are those whose start of construction falls before 2033. These facilities can claim credits for 10 years of production starting on the date the facility is placed in service, according to the draft text on the Federal Register’s portal. The final text is scheduled for publication Friday. Established by the 2022 Inflation Reduction Act, the four-tier scheme gives producers that meet wage and apprenticeship requirements a credit of up to $3 per kilogram of “qualified clean hydrogen”, to be adjusted for inflation. Hydrogen produced with higher lifecycle emissions earns less. The scheme will use the Energy Department’s Greenhouse Gases, Regulated Emissions and Energy Use in Transportation (GREET) model in tiering production processes for credit computation. “In the coming weeks, the Department of Energy will release an updated version of the 45VH2-GREET model that producers will use to calculate the section 45V tax credit”, the Treasury Department said in a statement announcing the finalization of rules, a process that it said had considered roughly 30,000 public comments. However, producers may use the GREET model that was the most recent when their facility began construction. “This is in consideration of comments that the prospect of potential changes to the model over time reduces investment certainty”, explained the statement on the Treasury’s website. “Calculation of the lifecycle GHG analysis for the tax credit requires consideration of direct and significant indirect emissions”, the statement said. For electrolytic hydrogen, electrolyzers covered by the scheme include not only those using renewables-derived electricity (green hydrogen) but
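To make the tiering concrete, here is a minimal sketch of how the four-tier structure maps carbon intensity to a credit value. The tier boundaries and percentages below are the ones set out in the Inflation Reduction Act for Section 45V; the sketch ignores the inflation adjustment and assumes the wage and apprenticeship requirements are met, so treat it as illustrative rather than as tax guidance.

```python
# Illustrative sketch of the 45V four-tier credit structure (not tax advice).
# Tier boundaries (kg CO2e per kg H2) and applicable percentages follow the
# Inflation Reduction Act; the $3.00/kg maximum assumes the wage and
# apprenticeship requirements described in the article are met.

MAX_CREDIT_USD_PER_KG = 3.00

# (upper bound on lifecycle carbon intensity, fraction of the maximum credit)
TIERS = [
    (0.45, 1.000),   # near-zero-emissions hydrogen earns the full credit
    (1.50, 0.334),
    (2.50, 0.250),
    (4.00, 0.200),   # above 4 kg CO2e/kg, hydrogen does not qualify at all
]

def credit_per_kg(carbon_intensity_kg_co2e: float) -> float:
    """Return the credit in USD per kg of hydrogen for a given lifecycle
    carbon intensity (as computed under a GREET-based model)."""
    for upper_bound, fraction in TIERS:
        if carbon_intensity_kg_co2e <= upper_bound:
            return MAX_CREDIT_USD_PER_KG * fraction
    return 0.0  # exceeds the 4 kg CO2e/kg ceiling: no credit

if __name__ == "__main__":
    for ci in (0.3, 1.0, 2.0, 3.5, 5.0):
        print(f"{ci} kg CO2e/kg H2 -> ${credit_per_kg(ci):.2f}/kg")
```

The steep drop between the first two tiers (100% versus 33.4% of the maximum) is why producers care so much about which GREET model version applies to their facility.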

Xthings unveils Ulticam home security cameras powered by edge AI
Xthings announced that its Ulticam security camera brand has a new model out today: the Ulticam IQ Floodlight, an edge AI-powered home security camera. The company also plans to showcase two additional cameras, Ulticam IQ, an outdoor spotlight camera, and Ulticam Dot, a portable, wireless security camera. All three cameras offer free cloud storage (seven days rolling) and subscription-free edge AI-powered person detection and alerts. The AI at the edge means the camera doesn’t have to go out to an internet-connected data center to tap AI computing to figure out what is in front of it. Rather, the processing for the AI is built into the camera itself, which the company says sets a new standard for value and performance in home security cameras. It can identify people, faces, and vehicles. CES 2025 attendees can experience Ulticam’s entire lineup at Pepcom’s Digital Experience event on January 6, 2025, and at the Venetian Expo, Halls A-D, booth #51732, from January 7 to January 10, 2025. These new security cameras will be available for purchase online in the U.S. in Q1 and Q2 2025 at U-tec.com, Amazon, and Best Buy.

The Ulticam IQ Series: smart edge AI-powered home security cameras

The Ulticam IQ Series, which includes IQ and IQ Floodlight, takes home security to the next level with the most advanced AI-powered recognition. Among the very first consumer cameras to use edge AI, the IQ Series can quickly and accurately identify people, faces and vehicles, without uploading video for server-side processing, which improves speed, accuracy, security and privacy. Additionally, the Ulticam IQ Series is designed to improve over time with over-the-air updates that enable new AI features. Both cameras
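The edge-versus-cloud distinction the article draws comes down to where inference runs. The sketch below is purely illustrative (DetectorStub stands in for whatever on-camera model Xthings actually ships, and the latency figure is invented), but it shows why keeping the model on the device removes the network round trip from the alert path.

```python
# Illustrative contrast between on-device and cloud inference for a camera
# alert path. DetectorStub is a stand-in for a real on-camera model; the
# cloud latency number is invented for illustration.

import time

class DetectorStub:
    """Pretend person/face/vehicle detector running on the camera's chip."""
    def detect(self, frame) -> list[str]:
        return ["person"]  # stand-in result

def edge_alert(frame, detector) -> str:
    """Edge path: inference happens on the camera itself, no network hop."""
    labels = detector.detect(frame)
    return "alert: person detected (on-device)" if "person" in labels else ""

def cloud_alert(frame, upload_latency_s: float = 0.4) -> str:
    """Cloud path: the frame must travel to a data center and back."""
    time.sleep(upload_latency_s)  # simulated upload + remote inference
    return "alert: person detected (cloud)"

frame = object()  # stand-in for a captured video frame
t0 = time.perf_counter()
print(edge_alert(frame, DetectorStub()), f"in {time.perf_counter() - t0:.3f}s")
t0 = time.perf_counter()
print(cloud_alert(frame), f"in {time.perf_counter() - t0:.3f}s")
```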

Intel unveils new Core Ultra processors with 2X to 3X performance on AI apps
Intel unveiled new Intel Core Ultra 9 processors today at CES 2025 with as much as two to three times the edge performance on AI apps as before. The chips under the Intel Core Ultra 9 and Core i9 labels were previously codenamed Arrow Lake H, Meteor Lake H, Arrow Lake S and Raptor Lake S Refresh. Intel said it is pushing the boundaries of AI performance and power efficiency for businesses and consumers, ushering in the next era of AI computing. In other performance metrics, Intel said the Core Ultra 9 processors are up to 5.8 times faster in media performance, 3.4 times faster in video analytics end-to-end workloads with media and AI, and 8.2 times better in terms of performance per watt than prior chips. Intel hopes to kick off the year better than in 2024. CEO Pat Gelsinger resigned last month without a permanent successor after a variety of struggles, including mass layoffs, manufacturing delays, and poor execution on chips, including gaming bugs in chips launched during the summer.

Intel Core Ultra Series 2

Michael Masci, vice president of product management at the Edge Computing Group at Intel, said in a briefing that AI, once the domain of research labs, is integrating into every aspect of our lives, including AI PCs, where the AI processing is done in the computer itself, not the cloud. AI is also being processed in data centers in big enterprises, from retail stores to hospital rooms. “As CES kicks off, it’s clear we are witnessing a transformative moment,” he said. “Artificial intelligence is moving at an unprecedented pace.” The new processors include the Intel Core Ultra 200H/U/S models, with up to

Introducing workspace agents in ChatGPT
Today, we’re introducing workspace agents in ChatGPT. Teams can now create shared agents that handle complex tasks and long-running workflows, all while operating within the permissions and controls set by their organization.

Workspace agents are an evolution of GPTs. Powered by Codex, they can take on many of the tasks people already do at work—from preparing reports, to writing code, to responding to messages. They run in the cloud, so they can keep working even when you’re not. They’re also designed to be shared within an organization, so teams can build an agent once, use it together in ChatGPT or Slack, and improve it over time.

AI has already helped people work faster on their own, but many of the most important workflows inside an organization depend on shared context, handoffs, and decisions across teams. Workspace agents are designed for that kind of work: they can gather context from the right systems, follow team processes, ask for approval when needed, and keep work moving across tools. For example, our sales team at OpenAI uses an agent to pull together details from call notes and account research, qualify new leads, and draft follow-up emails right in a rep’s inbox. It helps account teams spend less time stitching together details and more time with customers.

To get started, click Agents in the ChatGPT sidebar and describe a workflow your team does often. ChatGPT will guide you step by step to turn it into an agent. Workspace agents are available in research preview in ChatGPT Business, Enterprise, Edu, and Teachers plans. Editor’s note: GPTs will remain available while teams test workspace agents with their workflows. Soon, we’ll make it easy to convert GPTs into workspace agents.

Build a powerful workspace agent in minutes

Describe the job you want done or just drop in a file. ChatGPT helps turn it into an agent: defining the steps, connecting the right tools, adding skills, and testing it until it works the way you expect.
Here are a few agents teams at OpenAI have built—and that your team can build, too:

Software Reviewer: Reviews employee software requests, checks them against approved tools and policies, recommends next steps, and files IT tickets when needed.
Product Feedback Router: Monitors Slack, support channels, and public forums, then turns feedback into prioritized tickets and weekly product summaries.
Weekly Metrics Reporter: Pulls data every Friday, creates charts, writes the summary, and shares a report with the team.
Lead Outreach Agent: Researches inbound leads, scores them against your qualification rubric, drafts personalized follow-up emails, and updates your CRM.
Third-Party Risk Manager: Researches vendors, assesses signals like sanctions exposure, financial health, and reputational risk, and produces a structured report.

You can also get started quickly with templates for finance, sales, marketing, and more. Each comes with built-in skills and suggested tools, so you can quickly set up an agent and customize from there.

Workspace agents can gather context and take action across dozens of tools. Agents are powered by Codex in the cloud, giving them access to a workspace for files, code, tools, and memory. Agents do more than answer a prompt: they can write or run code, use connected apps, remember what they’ve learned, and continue work across multiple steps.

Workspace agents can keep working even when you’re away. You can set them to run on a schedule, or deploy them in Slack so they can pick up requests as they come in. For example, our product team built an agent that proactively answers employee questions in Slack channels. The agent responds with a clear answer, links relevant documentation, and can file a ticket when it finds a new issue. This agent helps teams get unblocked faster while making sure important follow-ups don’t slip through the cracks.

Today, teams can interact with agents in ChatGPT and Slack, with more surfaces coming soon. Agents can join the conversations and workflows where work already happens, helping teams move work forward with less coordination.

Turn best practices into shared agents

Knowledge is often scattered across people and systems. Workspace agents give teams a way to turn that knowledge into a reusable workflow: one that follows the right process, uses the right tools, and can be shared across the organization. For example, our accounting team built an agent that prepares key parts of month-end close, from journal entries to balance sheet reconciliations to variance analysis. It completes the work in minutes, generates workpapers with the underlying inputs and control totals needed for review, and follows internal policies. The agent is available in ChatGPT for anyone on the team to use, or added to Slack channels so the team can ask it questions and collaborate around its outputs. Because agents have memory and can be guided and corrected in conversation, they get better as teams use them. Over time, agents become a practical way to keep team knowledge current: build once, improve through use, then share or duplicate for new workflows.

Stay in control, with the right safeguards

When you delegate work to an agent, you stay in control. You decide what tools and data it can use, what actions it can take, and when it needs approval.
For sensitive steps, like editing a spreadsheet, sending an email, or adding a calendar event, you can require the agent to ask for permission before moving forward. (A generic sketch of this approval-gate pattern appears at the end of this article.) After you share an agent, analytics help you see how it’s being used, including how many runs it has completed and how many people are using it.

Enterprise governance and visibility

Workspace agents come with enterprise-grade monitoring and controls, so admins can protect sensitive data while giving teams a safe way to move faster with AI. ChatGPT Enterprise and Edu admins can control which connected tools and actions user groups can access. Admins can also manage who has access to use, build, and share agents. Built-in safeguards help agents stay aligned with your instructions when they encounter misleading external content, including prompt injection attacks. The Compliance API gives admins visibility into every agent’s configuration, updates, and runs, so they can monitor and control how agents are being built and used. Admins can also suspend agents if needed. Soon, admins will also be able to view every agent built across their organization in the admin console, including usage patterns and connected data sources.

Early feedback from customers

Early testers of workspace agents are already seeing more consistent results and time for higher-value work.

“The hard part of building an agent is not the model. It’s the integrations, memory, the user experience. Workspace agents collapsed that work, so one of our Sales Consultants built, evaluated, and iterated a Sales Opportunity agent end to end without an engineering team. It researches accounts, summarizes Gong calls, and posts deal briefs directly into the team’s Slack room. What used to take reps 5-6 hours a week now runs automatically in the background on every deal.” — Ankur Bhatt, AI Engineering, Rippling

Availability and pricing

Workspace agents are available in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans. For Enterprise and Edu plans, admins can enable agents using role-based controls. Workspace agents will be free until May 6, 2026, with credit-based pricing starting on that date.

What’s next

We’ll keep adding more great things in the weeks ahead to help teams get more work done with less manual effort. This includes new triggers that can start work automatically, better dashboards to understand and improve performance, more ways for agents to take action across your business tools, and support for workspace agents in the Codex app. Teams do their best work when knowledge is easier to find, processes are easier to follow, and people can get help in the flow of work. Workspace agents are an early step toward that future: AI that works alongside people in the tools and conversations where work already happens, helping teams spend less time coordinating work and more time creating, building, and making decisions that move the business forward.
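The approval gating described in this announcement follows a pattern that is easy to make concrete. The sketch below is a generic, hypothetical illustration in plain Python, not OpenAI’s workspace-agent API; every name in it (gated_call, ask_human, and so on) is invented. The idea is simply that sensitive tool calls are routed through a human checkpoint before they execute.

```python
# Generic sketch of a human-approval gate for agent tool calls. This is a
# hypothetical pattern, not OpenAI's actual workspace-agent API; all names
# (send_email, requires_approval, ...) are invented for illustration.

SENSITIVE_ACTIONS = {"send_email", "edit_spreadsheet", "add_calendar_event"}

def requires_approval(action: str) -> bool:
    return action in SENSITIVE_ACTIONS

def ask_human(action: str, args: dict) -> bool:
    """Stand-in for the real approval surface (e.g., a prompt in chat)."""
    answer = input(f"Agent wants to run {action}({args}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_tool(action: str, args: dict) -> None:
    print(f"... executing {action} with {args}")

def gated_call(action: str, args: dict) -> None:
    """Execute a tool call, pausing for human approval on sensitive steps."""
    if requires_approval(action) and not ask_human(action, args):
        print(f"blocked: {action} was not approved")
        return
    run_tool(action, args)

gated_call("summarize_notes", {"doc": "q3-review"})        # runs directly
gated_call("send_email", {"to": "[email protected]"})  # needs a yes
```

The useful property of this shape is that the approval policy lives outside the agent’s reasoning loop, so an agent that is confused (or prompt-injected) still cannot execute a sensitive action without a human saying yes.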
Partnering with industry leaders to accelerate AI transformation
We’re joining forces with Accenture, Bain & Company, BCG, Deloitte, and McKinsey to bring the power of frontier AI to organizations around the world.

Artificial intelligence (AI) could contribute up to $15.7 trillion to the global economy by 2030, yet many businesses face a significant adoption gap. To date, only 25% of organizations have successfully moved AI into production at scale. At Google DeepMind, we believe AI is one of the most transformative technologies of our time, capable of delivering new products and services, and scientific breakthroughs that improve the lives of billions of people. To help businesses harness this potential responsibly, we’re partnering with Accenture, Bain & Company, BCG, Deloitte, and McKinsey to accelerate AI-driven transformation and help industries adopt frontier technology at scale. By combining our advanced research with their strategic expertise, we aim to solve complex challenges across sectors and drive global economic growth.

A new initiative for enterprise transformation

We’re partnering with global enterprise consultancies to help them deliver world-leading agentic transformation for customers at speed and scale. Together we’ll use AI to drive meaningful human impact, empowering workforces with AI tools that provide real-time data for better decision-making and management of complex tasks.

From research labs to real-world impact

This collaboration allows our partners to work directly with Google DeepMind’s world-leading technical talent. Together, we will focus on critical enterprise needs in sectors like finance, manufacturing, retail, media and entertainment. These partnerships include three key pillars:

Enabling scaled, industry-specific AI capabilities: We will collaborate on challenging customer use cases, supporting the development of scaled, industry-specific AI solutions.
Early access to frontier models: Partners will receive early access to our frontier models, including the Gemini family. Their feedback on these will help us further refine these systems to ensure they’re equipped to deliver benefits for customers.
Access to AI leadership: We will connect our leadership with customer CEOs and boards, helping them navigate the future of frontier AI research and development.

Looking ahead

These efforts build upon Google Cloud’s work supporting global consulting partners, systems integrators, software partners, and specialized services providers as they implement and scale agentic AI. In the coming years, AI has the potential to solve critical global challenges and amplify human potential. To secure these benefits, AI must be diffused responsibly across industries, remaining guided by human expertise. We are excited to see what we can build and scale together with the world’s leading strategic partners in pursuit of these goals.

AI needs a strong data fabric to deliver business value
In partnership with SAP

Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey. But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and the context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data — they must understand the business context behind it. Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics. “AI is incredibly good at producing results,” he says. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”
In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation. Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.
Losing context is a critical AI problem

Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost. Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come to different conclusions. Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says. “Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.” In the past, companies implicitly managed around a lack of context because human experts supplied the missing information; with AI, that shortfall creates serious limitations. AI systems do not just display information; they act on it. If a system does not explain why data matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not necessarily reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important. As a result, the system can produce answers that are technically correct but operationally flawed. This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.

Don’t consolidate, integrate

The emerging solution is a data fabric: an abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic. The value of the data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to provide autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan.
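The “context premium” Khan describes is easy to see in miniature. The toy sketch below is not SAP’s product; every customer, field, and policy in it is hypothetical. It runs the supply-chain example above: two allocation routines see the same inventory signal, but only the one consulting a small knowledge layer of business context makes the right call regardless of the order in which requests arrive.

```python
# Toy illustration of the "context premium": the same raw signal, with and
# without a business-context layer. All data and names are hypothetical.

inventory_units = 800  # available stock during a shortage

orders = [
    {"customer": "Zenith Ltd", "qty": 500},
    {"customer": "Acme Corp",  "qty": 500},
]

# A minimal knowledge layer: semantics the raw order feed does not carry.
context = {
    "Acme Corp":  {"strategic_account": True,  "penalty_clause": True},
    "Zenith Ltd": {"strategic_account": False, "penalty_clause": False},
}

def allocate_without_context(orders, stock):
    """Context-free system: first come, first served."""
    plan = {}
    for o in orders:
        take = min(o["qty"], stock)
        plan[o["customer"]], stock = take, stock - take
    return plan

def allocate_with_context(orders, stock, context):
    """Context-aware system: strategic, penalty-bearing orders come first."""
    def priority(o):
        c = context[o["customer"]]
        return (c["strategic_account"], c["penalty_clause"])
    plan = {}
    for o in sorted(orders, key=priority, reverse=True):
        take = min(o["qty"], stock)
        plan[o["customer"]], stock = take, stock - take
    return plan

print(allocate_without_context(orders, inventory_units))
# {'Zenith Ltd': 500, 'Acme Corp': 300} -> strategic account shorted
print(allocate_with_context(orders, inventory_units, context))
# {'Acme Corp': 500, 'Zenith Ltd': 300} -> right by design, regardless of
# the sequence in which the order feed happens to arrive
```

Both routines are fast; only one encodes which outcomes the business actually cares about. That, in one screenful, is the gap a knowledge layer is meant to close.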
The technology provides the architecture — a foundation that makes agent-to-agent communication and coordination possible. The process defines how business and IT share ownership and establish governance, and the culture ensures people trust the system enough to adopt it. All three must work together for a business data fabric to truly succeed. “It empowers confident, consistent decisions, and when these elements all come together, AI doesn’t just analyze and interpret the data — it drives smarter, faster decisions that really create business impact,” he says. “This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity.” Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently. Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.

Beyond data isolation and dashboards

In the emerging era of agentic AI, the responsibility for monitoring, analyzing, and making decisions based on data increasingly shifts to software. AI agents can monitor events, trigger workflows, and make decisions in real time, often without direct human intervention. That speed creates new opportunities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer operations, they must be guided by the same understanding of business priorities. Without a common knowledge layer connecting disparate data together, coordination between systems quickly breaks down. One system might optimize for margin, another for liquidity, and another for compliance, each working from a different slice of data. Importantly, most enterprises already possess much of the knowledge needed to make this work, says Khan. Years of operational data, master data, workflows, and policy logic already exist across business applications — companies just need to make it accessible. Companies that deploy data fabrics gain greater trust in their data, with more than two-thirds of enterprises reporting improved data accessibility and visibility and greater control over their data. “The opportunity isn’t just inventing context from scratch, it’s activating and connecting the context across your business that already exists,” he continues, adding that a data fabric is the “architecture that ensures data semantics, business processes and policies are connected as a unified system across all the clouds.” This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Los Angeles is finally going underground
Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride. The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town. These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day.

[Photos: A Metro train pulls into La Cienega station; art by Susan Silton at the Fairfax station; art by Eamon Ore-Giron at the La Brea station]

Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above. Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.

3 things Michelle Kim is into right now
Isegye Idol

If you thought K-pop was weird, virtual idols—humans who perform as anime-style digital characters via motion capture—will blow your mind. My favorite is a girl group called Isegye Idol, created by Woowakgood, a Korean VTuber (a streamer who likewise performs as a digital persona). Isegye Idol’s six members are anonymous, which seems to let them deploy a rare breed of honesty and humor. They play games (League of Legends, Go, Minecraft), chitchat, and perform kitschy music that’s somewhere between anime soundtrack and video-game score. It’s very DIY—and very intimate. And the group’s wild popularity speaks to the mood of Gen Z South Koreans, famously lonely and culturally adrift—struggling to find work, giving up on dating, trying to find friendships online. Isegye Idol shows what a magical online universe people can build when reality stops working for them.

Mr. Nobody Against Putin

Pavel Talankin didn’t have the easiest life as a schoolteacher in the copper-smelting town of Karabash, Russia; UNESCO once called it the most toxic place on Earth. But video he shot, partially in secret, makes it clear he loved it—the smokestacks, the cold, the ice mustache he’d get walking around outside, and, most of all, his bright-eyed students. That makes it all the more painful when a distant, grinding war and state propaganda change the town. An antiwar progressive with a democracy flag in his classroom, Talankin had to deal with a new patriotic curriculum, mandatory parades, visits from mercenaries—and the loss of the creative space he’d built with his students. Talankin’s footage tells his story in this Oscar-winning documentary from director David Borenstein, and what struck me most is how strange it is being an adult around kids. We shape them in profound ways we might not even recognize.

Repertoire by James Acaster

I am the kind of person who will pay $150 to watch a comedian in a smelly theater in San Francisco that charges $20 for a can of water—because I am crazy enough to hope that standup will not die. In February, I saw the British comedian James Acaster perform live … and it was a mediocre show. But Repertoire, his 2018 miniseries on Netflix, is gold. Shot shortly after Acaster went through a breakup, the four-part show features him portraying, among other characters, a cop who goes undercover as a standup comedian, forgets who he is, and gets divorced. And then things get weird. “What if every relationship you’ve ever been in,” Acaster asks, “is somebody slowly figuring out they didn’t like you as much as they hoped they would?” If the best comedy comes from paying attention to the hellhole that you’re in, I wish Acaster many more pitfalls.

One town’s scheme to get rid of its geese
“Pull over!” I order my brother one sunny February afternoon. Our target is in sight: a gaggle of Canada geese, pecking at grass near the dog park. As I approach, tiptoeing over their grayish-white poop, I notice that one bird wears a white cuff around its slender black neck. It’s a GPS tracker—part of a new tech-centered campaign to drive the geese out of my hometown of Foster City, California.

The place: Foster City, CA, USA

About 300 geese live in this sleepy Bay Area suburb, equal to nearly 1% of our human population—and some say this town isn’t big enough for the both of us. Goose poop notoriously blanketed our middle school’s lawn, and the birds have hassled residents for generations. My own grandmother remembers when geese took over her garage for five whole minutes before waddling out. She says, “I wanted to kill them, but I thought I’d get in trouble.” Indeed, that idea doesn’t fly here. City officials backed out of a previous plan to kill 100 geese following uproar from local environmentalists. Still, the poop creates a public health hazard; the birds need to go. So the city paid nearly $400,000—roughly $1,300 per goose—to Wildlife Innovations, a company that resolves conflicts between humans and wildlife, to haze the geese with gadgets. The company’s approach is “basically, making the geese less comfortable,” Dan Biteman, head of the goose management plan and senior wildlife biologist at Wildlife Innovations, tells me.
The need for such conflict resolution is on the rise as land development collides with changes in animal behavior. Though overpopulation of Canada geese is a national nuisance in the US, such tensions also surface with other species in this country and elsewhere, including grizzlies on the Montana prairies, coyotes on San Francisco streets, and savanna elephants in Tanzanian parks. So the people whose job it is to deal with recalcitrant critters are bringing on the gadgets.
Back in Foster City, I spot a black camera mounted to a tree trunk at Gull Park by the lagoon. Cameras like it hang in seven parks around town, programmed to snap photos every 15 minutes and transmit them back to Wildlife Innovations HQ. If they detect geese, a biologist immediately drives over to disperse the birds. One team member uses devices like lasers or drones; another brings along a goose-hating border collie named Rocky. As a special measure, staff deploy the “Goosinator,” a small, remote-controlled neon-orange pontoon boat with a fearsome dog-like mouth painted on its bow, meant to evoke geese’s fear of coyotes and bright colors. It comes with attachable wheels and can zoom around on land or water to chase birds away. Biteman tells me the company is thinking about mounting speakers on trees and flying drones that will screech the calls of goose predators like red-tailed hawks or golden eagles. The company also received the federal permits required by the Migratory Bird Treaty Act to stick GPS trackers on 10 geese. This way, staff can surveil the geese and research their behavior and movements. At local goose hangouts, signs that look like “Wanted” posters alert the public to the new plan. As I watch some culprits graze (and defecate) on a church lawn, I think to myself: Enjoy it while it lasts. Annika Hom is an award-winning independent journalist. She’s written for National Geographic, Wired, and more.

Rebuilding the data stack for AI
In partnership with Infosys Topaz Artificial intelligence may be dominating boardroom agendas, but many enterprises are discovering that the biggest obstacle to meaningful adoption is the state of their data. While consumer-facing AI tools have dazzled users with speed and ease, enterprise leaders are discovering that deploying AI at scale requires something far less glamorous but far more consequential: data infrastructure that is unified, governed, and fit for purpose. That gap between AI ambition and enterprise readiness is becoming one of the defining challenges of this next phase of digital transformation. As Bavesh Patel, senior vice president of Databricks, puts it, “the quality of that AI and how effective that AI is, is really dependent on information in your organization.” Yet in many companies, that information remains fragmented across legacy systems, siloed applications, and disconnected formats, making it nearly impossible for AI systems to generate trustworthy, context-rich outputs. “Really, the big competitive differentiator for most organizations is their own data and then their third-party data that they can add to it,” says Patel. For enterprise AI to deliver value, data must be consolidated into open formats, governed with precision, and made accessible across functions. Without that foundation, businesses risk “terrible AI,” as Patel bluntly describes it. That means moving beyond siloed SaaS platforms and disconnected dashboards toward a unified, open data architecture capable of combining structured and unstructured data, preserving real-time context, and enforcing rigorous access controls. When the groundwork is laid correctly, organizations can move toward measurable outcomes, unlocking efficiencies, automating complex workflows, and even launching entirely new lines of business.
That value focus is critical, says Rajan Padmanabhan, unit technology officer at Infosys, especially as enterprises seek precision in the outputs driving business decisions. Rather than treating AI initiatives as isolated innovation projects, leading companies are tying AI deployment directly to business metrics, using governance frameworks to determine what delivers results and what should be abandoned quickly. “We see this big opportunity just with AI literacy with business users, where they’re very eager to understand how they should be thinking about AI,” adds Patel. “What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint?”
The possibilities ahead are substantial. As AI agents evolve from copilots into autonomous operators capable of managing workflows and transactions, the organizations that win will be those that build the right foundation now. “What we are seeing as a new way of thinking is moving from a system of execution or a system of engagement to a system of action,” notes Padmanabhan. “That is the new way we see the road ahead.” The future of AI in the enterprise will be determined by whether businesses can turn fragmented information into a strategic asset capable of powering both smarter decisions and entirely new ways of operating. This episode of Business Lab is produced in partnership with Infosys Topaz. Full Transcript: Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. This episode is produced in partnership with Infosys Topaz. Now, recent advancements in AI may have unlocked some compelling new industrial applications, but a reliance on inadequate data models means that many enterprises are hitting a brick wall. AI and agentic AI in particular place a whole new set of demands on data. The technology requires greater access, context, and guardrails to operate effectively. Existing data models often fall short. They’re too fragmented or siloed. Data itself often lacks quality. To bridge the gap, they require an AI-ready upgrade. Two words for you: data reconfigured. My guests today are Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys.
Welcome, Bavesh and Rajan. Rajan Padmanabhan: Thank you. Thanks for having us. Bavesh Patel: Thanks for having us. Megan: Fantastic. Thank you both so much for joining us today. Bavesh, if I could come to you first, when we talk about AI-ready data, what exactly do we mean? What new demands does AI place on data, and how does this impact the way it needs to be structured and used? Bavesh: Yeah. Great question. Appreciate you hosting us today. I think that obviously the whole world is enamored with AI because of all of the power that we can all see as users. AI is now democratized across hundreds of millions of users. And when we think about enterprises and businesses using AI, the quality of that AI and how effective that AI is, is really dependent on information in your organization, and that’s data. And what we found is that most enterprises, their data is kind of locked away in these different applications and different systems. And it’s very difficult to get a good view of, what is all my data? How trustworthy is it? How recent and fresh is it? And all of that is being injected into the AI. Unless you have a proper understanding of your data, and the ability to ensure that it’s accurate and usable so that the AI can take advantage of it, you’re actually going to end up having terrible AI. We see a lot of customers spend time on cleansing their data, organizing their data, making sure it’s access controlled correctly, and that tends to be the fuel of good AI. Megan: Yeah. It’s such a foundational thing, isn’t it? But it can be missed, I think, quite easily. Rajan, what difference can having AI-ready data really make for enterprises as they unlock that full potential of AI and its applications? Rajan: First and foremost, thanks for having us. It’s a pleasure. In continuation of what Bavesh talked about, data and AI are pretty synonymous. And consumer AI and enterprise AI, enterprise agentic AI in particular, are different because, first and foremost, the business needs to have the context. That context comes from your enterprise information, not only structured data but both structured and unstructured data, user-generated content, all forms of data, and it is going to be very, very critical to really get the context right for any model that you pick. That’s where platforms like Databricks really help, whether you want to pick from a plethora of models, build your own models, or ground a model on your data. That is where getting the data ready for AI is going to be very, very critical.
The third critical part, and this will actually be one of the roadblocks for adoption of AI, is the precision of the output. That’s why AI adoption on the consumer side is skyrocketing while enterprises are struggling: you are taking business decisions, whether that’s a buy decision, a sell decision, or a recommendation of content. It could be 20 different use cases. For that, the precision is going to be very critical. We are seeing that for our customers, the successful customers, precision of more than 92% is not an aspiration; that is a must-have. If you have that, AI-ready data is going to be the underpinning for it.
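Padmanabhan’s 92% bar translates directly into a deployment gate. Here is a minimal sketch of that idea; the threshold comes from the interview, while the function names and example counts are invented purely for illustration:

```python
# Hypothetical illustration of the "precision as a must-have" gate described
# above: measure a use case's output precision and only promote it past pilot
# if it clears the bar. The 0.92 threshold mirrors the interview; everything
# else is an assumption made for this sketch.

def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of the system's positive outputs that were actually correct."""
    predicted_positives = true_positives + false_positives
    return true_positives / predicted_positives if predicted_positives else 0.0

def ready_for_production(tp: int, fp: int, threshold: float = 0.92) -> bool:
    """Gate a use case on measured precision, per the 92% 'must-have' bar."""
    return precision(tp, fp) >= threshold

# Example: 460 correct recommendations out of 500 -> exactly 0.92, so it passes.
assert ready_for_production(tp=460, fp=40)
```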
Megan: And I suppose, if we’ve outlined there how critical this is, where should enterprises start then? At a practical level, what are the foundations when it comes to building an AI-ready data model? Bavesh: Yeah. And I think Rajan hit the nail on the head. I mean, enterprises are grappling with a different set of problems than consumer AI. The first thing is that you’ve got to get a handle on your data. As I mentioned, a lot of the data is locked in. Ensuring that you have the ability to put your data in a place where you can get a holistic view of as much of it as possible. That kind of starts with putting your data in open formats. A lot of the valuable data today in an organization is locked away in some proprietary SaaS app or some system, and all the datasets aren’t connected together to form that context. The first step is to really do an analysis: What is your data estate? What are the critical pieces of data that need to be put into a place where you can start to understand them and how they’re connected to one another? Thinking about how you set up your data catalog, thinking about how the relationships between the data assets work, putting data governance around it, that seems to be the first step. And if you think about how ChatGPT was built, it took all the data on the internet and then aggregated it, synthesized it, and then built these transformer models, while enterprises don’t really have a handle on all their data within the organization. That’s the first foundation that you really want to think about. The second thing is that you don’t want to just go ad hoc and do random AI projects. You really need to be thinking about business value. A lot of our customers are looking at AI much more strategically in that they want to be able to get projects on the board with wins and then generate business value. Building an AI value roadmap, which is connected to how well your data is organized, those two things seem to be foundational to how you launch AI successfully in your organization. Megan: That value piece is so important, isn’t it? And as I understand it, Infosys and Databricks have worked closely together to guide organizations through this transformation. I wondered, can you share some examples of the impact you’ve seen at enterprises you’ve worked with, Rajan? What difference has it made to the ways in which they can integrate more sophisticated AI and agentic AI applications? Rajan: Well, that’s a very, very good question. What both Databricks and Infosys have done is come up with a kind of framework first. First and foremost, it all needs to start with the value. At one of the largest food products companies, where we collaborated together, we applied this framework. The framework consists of six different things. First and foremost, and very critical, is value management, which Bavesh touched upon. We have worked together to come up with a 3M measurement framework: what we call adaptability, business value, and responsibility. You can’t just go and do a garage project. It has to be measurable. It should be responsible, follow all those things. That is going to be very critical. And we helped this client prioritize what will give them the most value for the investments they are making. The second critical part here is that most enterprises today are not AI-born companies. Some were born in the analog days; some were born in the digital days. There are companies which are applying AI for modernization, because a lot of your historical information is actually what helps you build that long-term context.
And that is where we have worked closely with some of the native tools of Databricks, like Lakebridge or the AI assistants that are there, and then created composable services on top of them to help clients unlock value by bringing that information into Databricks. The second part where we help the client is exactly to that point: the readying of data. Once you have brought in the data, you have to bring together the structured, the unstructured, the analytical, all these aspects.
And that is where, in the third layer, we work closely with Databricks, leveraging all the great capabilities within the platform, be it Unity Catalog, be it the open formats, or be it the gateways and other aspects. We were able to make the data available for this client. What has really helped our client, the third part, is Agent Bricks, which is one of the differentiators. It gives you that flavor for the enterprise. That is where we have closely worked, and we built some of our industry-specific agents, be it CPG, be it energy, be it FS. And for this client, what we have done is take some of those CPG-specific use cases, whether in the HR space, the procurement space, or the marketing space. And this has really helped our client build a business capability surrounding this and unlock eight to nine use cases, what we call products, agentic AI products, which can really drive more value for them, solving real business problems. And this kind of comprehensive set of frameworks, plus the suite of services, plus our Infosys solution assets, as well as unlocking the value from Databricks, has really helped these clients. And we see similar patterns across a lot of these successful engagements, where we were able to continuously drive value by applying this framework. Megan: Right. Sounds like it made a real material difference. Rajan mentioned a few of the tools in the Databricks catalog there, Bavesh. I know you’ve recently worked to launch an operational database for AI agents and apps. I wonder, how does a platform like that help organizations in this journey? What makes it different from some of the other platforms out there right now? Bavesh: Databricks has come to market with a new offering called Lakebase, which is really an OLTP database where you can build your AI apps. And if you think about it, there’s really two main types of data in an enterprise. There’s all the historical data, which is all the things that have happened, and that’s really what your analytics is based on. You have an OLAP system where you have put all your historical data, and Databricks has come to market with what we call the Lakehouse, which is essentially a data warehouse with all of your data that is not operational in nature. It’s historical data. And I think that Lakehouse concept is really pushing forward with AI because a lot of our customers have thousands of users within their business and they need to get data. And what they’ve done is they’ve actually gone down the BI route, which is really building a dashboard or a report.
Most organizations have had thousands of these dashboards and reports proliferate across the organization, and then they need to be customized. It just takes a long time for users inside of the business to actually get access to the data. AI now is really making that a lot easier from just the analytics perspective, where we can now democratize access to the data, which has really been the holy grail for most data teams. They really want to get out of the way and just give the right data to the right people inside of the business with the right access. With a product like Genie at Databricks, you can just use English language, or whatever your language is, to ask questions of the data. And it’ll give you back data that answers your questions in context. It’ll give you not just what ChatGPT will give you, which is information about a topic that’s on the internet, but it can actually answer, “Well, why did my sales numbers not reflect what I expected in the month of April?” It’ll give you some root cause analysis based on your enterprise data. Genie is going to be one of these things that’s really important, where it’s going to truly democratize data inside of the business. That’s kind of this OLAP world, which is what the Lakehouse is. More recently, we’ve come to market with what we call the Lakebase, which is the OLTP world. What we’re finding is that agents are now being deployed in these organizations, and those agents need a place to keep all of their orchestration, all of the context of what’s happening in that particular workflow. On the one hand, you’ve got users just asking questions. On the other hand, the next chapter is going to be around automating an entire business process. If you’re taking a function like generating a campaign in marketing, right? There are a lot of tools you use and a lot of steps you use. An agent can come in and really automate a lot of that. But on the back end of that agent, you’re going to need to stand up a real-time database to keep track of all the things that the agent is doing. That’s what Databricks has brought to market, which is this OLTP Lakebase solution. The innovation that we have brought to market is that it’s a modern kind of Postgres database where we have separated the compute and storage, very much like what we did with the data warehouse in the Lakehouse. But with Lakebase, the data is one copy inside of your cloud storage, and then the compute is separated and it’s serverless. You can do things like branching, and you can start up the OLTP database really quickly. What we found is that agents are actually starting these Lakebases themselves, because they can very quickly spin one up, keep it running, shut it down when they need to, make a copy of it. Agents doing this need velocity, and they need a cost-effective solution. And the beauty of all this is when you take the OLTP, which is all around the Lakebase and the real time, and you take the OLAP, you now have one system for all your data. You don’t have to copy the data around, you don’t have to manage all the permissions, you can set the context against it. We see these AI apps being really the future of how businesses run, where they’re going to take away all of the bottlenecks where humans are having to do repetitive work, and automate these using LLMs and all these new technologies. We want to be the default for powering all that, because we believe that our Lakebase technology is going to be faster, cheaper, and more secure for an AI database.
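As a rough illustration of the pattern Patel describes: because Lakebase is Postgres-compatible, an agent can journal its workflow state through a standard Postgres driver. Everything below, the endpoint, database, schema, and table, is a hypothetical sketch, not Databricks’ actual implementation:

```python
# Sketch of an agent keeping orchestration state in a Postgres-compatible
# OLTP store such as Lakebase. The host, database, user, and table are
# placeholders invented for this example.
import psycopg2

conn = psycopg2.connect(
    host="my-lakebase.example.cloud",  # placeholder endpoint
    dbname="agents",
    user="campaign_agent",
    password="...",                    # in practice, fetch from a secret manager
)

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS agent_steps (
            run_id  TEXT,
            step    INT,
            tool    TEXT,
            payload JSONB,
            ts      TIMESTAMPTZ DEFAULT now()
        )
    """)
    # Journal each tool call so the workflow can be resumed, retried, or audited.
    cur.execute(
        "INSERT INTO agent_steps (run_id, step, tool, payload) VALUES (%s, %s, %s, %s)",
        ("run-001", 1, "draft_campaign_copy", '{"status": "ok"}'),
    )
```

The design point is simply that agent state lives in a transactional store with a ubiquitous wire protocol, so any orchestration framework can read and write it without a bespoke client.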
Megan: Sounds like a real game changer. And we’ve touched on this a couple of times already, I mean, this idea of value. We know that gauging the commercial value of investments into AI is really high on the priorities right now for senior leaders. How important is this value measurement piece when it comes to creating AI-ready data systems, Rajan? How can organizations ensure they’re monitoring what is delivering and what isn’t? Rajan: This is of paramount importance, and most successful AI implementations or agentic AI implementations really require this value measurement. I’ll just extend the client example that I talked about, the large food products company, the global products company, to explain this question. I just want to create a metaphor. When the initial digital world came, we had a lot of analytics, primarily around defining performance management KPIs, and fact-based decisioning and other things were evolving over a period of time. Typically, a lot of these metrics are going to be very critical for them to measure how a function, how a business is doing. On a similar line for the value measurement, if I take the same example of the client, what is very critical for an organization is actually to map the outcome that you are expecting. In this case, how do I optimize my spend on direct and indirect purchases? By applying AI, I would like to identify the areas where I can optimize the spend. That means one of the critical measures that you have is, what is your indirect expense classification, what spend has been classified, and how much you are able to reduce by bringing this in. Establishing these measures and the metrics is going to be very, very critical. And once you establish these base metrics and the measurement, the beauty of it is that, to just extend what Bavesh was talking about, the capabilities that Databricks gives you, like metrics views, features, tools, and other things, would actually help you to translate those AI telemetries and business telemetries that are coming from your applications into measurable metrics in terms of an outcome, which you can actually measure using the Genie room for value management measurement. Then there are two things you can do with the use case, the products that, as I said, we built for this client either on the procurement side or on the marketing research side. If you find there is value, whether because they identify that they are able to optimize spend or because of the reach it is able to deliver, you can either accelerate that use case and further fine-tune that product to expand it. Or, if you find it is not really driving the value, or you are not able to see the value that it is going to deliver, you can very well take a fast-failure method: rather than trying to make it work, you can understand and then take a call to pivot it to something else. There are three aspects here.
What we see from our experience, not only with this client but across some of our other clients in industrial manufacturing, FS, or energy, is that by setting up this metrics-driven valuation method upfront and then leveraging the capabilities to transform these telemetries and signals into a measurement, what we call an AI compass room, you really give the business stakeholders, whether from the marketing office, the supply chain office, or the CFO’s office, a place where they can say, “Hey, this is what it is intended to do, this is the current measurement, and this is where it’s failing,” which can help them pivot. And this will actually drive and democratize AI and agentic AI across the enterprise, and that really drives the value. This is going to be one of the critical parts that enterprises need to do. And that is where the six-part framework that I talked about comes in: applying the value office, applying ready-for-AI, applying the transformation fabric. Then the third part is the governance, which is going to be the underpinning of this. Then running your operations based not on SLAs but on experience-level agreements and business metrics that you continually measure. Bringing in all these six layers is going to be very critical. That’s when we see organizations be very successful, and some of our proven examples do exactly the same. This is going to be very critical for organizations from a measurement standpoint. Megan: Lots of tangible ways there that you can actually gauge value here. And you touched on governance, and the impact of AI on governance is another huge talking point among senior leaders, and interactions with data are a core part of that. To what extent is having the right governance and security protocols an integral part of having AI-ready data? And Bavesh, what scenarios do these systems need to handle? What does that mean for data models? Bavesh: This is becoming kind of the prerequisite to deploying a successful AI project. I think MIT produced a report that said 95% of these new AI projects fail to actually generate business value. A big reason for that is you can go and prototype and stand up and vibe code a pilot, but when you’re actually moving a workload into production, you realize that governance becomes so critical. So what do we really mean by governance? I think the first thing is getting your data in order, like I said, in open formats. Most companies realize now that the way they engage with their customers, the way they develop a drug, the way they approve a person for a credit limit increase, all of that enterprise information is actually their competitive advantage. Because you can go and use a frontier model like ChatGPT or Claude that everybody has access to. Really the big competitive differentiator for most organizations is their own data and then their third-party data that they can add to it. Getting your data into an open format lets you understand your data, and understanding your data is where governance comes in. Because when you think about governance, you really want to be able to find the data. If I’m an end user or if I’m building an AI product, I want to know what data’s available to me. Can I trust the data? How fresh is the data? Is it coming from my analytics world, or do I need a real-time system like an OLTP system? I need to find the data.
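To make that first step concrete, here is a minimal sketch of landing a SaaS export as an open-format Delta table and making it findable in the catalog. It assumes a Spark session with Delta Lake available, as on Databricks; the paths, catalog, schema, and table names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Raw export from a proprietary SaaS app, landed as JSON files.
orders = spark.read.json("/landing/erp/orders/")

# Rewrite it as a Delta table registered in the catalog, so any engine that
# speaks the open format can find and query it.
(orders.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("main.sales.orders"))

# A table comment makes the asset discoverable by humans and AI alike.
spark.sql("COMMENT ON TABLE main.sales.orders IS 'Daily ERP order extract'")
```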
I also need to make sure that access is controlled in a way that doesn’t cause any huge headaches for my organization. This becomes critical. If I have a whole bunch of PDFs that have purchase orders in them, who actually has access to all that data? In a clinical trial, for example, in healthcare, you really want to ensure that people across trials don’t have visibility into patient data. Maybe the model that was used to build that was running across trials. Who has access to all the data? Who has access to only parts of the data? You really have to think about this. We also look at the semantics of the data. Rajan brought this up right at the beginning of this, which is, what is the context? How do we think about the metrics and all the things that the business users know in their head? We need to start codifying that somewhere. We have a product at Databricks called Unity Catalog where you can do the discovery, the access, and the business semantics. You also want to share the data. And in the world of agents, what we see is something called agent sprawl. In very short order, we will see the same thing that happened when SaaS applications became very prevalent within organizations because they really solved a business problem. You go to a line of business and you say, “I need to be able to do credit underwriting” or “I am doing a prior authorization use case,” or pick thousands of use cases. There’s a SaaS app for that. Much like that, there’s going to be this world in which agents are going to come into play, and most organizations are going to have lots of agents running all the time. But the reality of it is: how did that agent perform? What was the feedback loop from the user? What was the cost of running that workload, and is it going up dramatically? And if you don’t have a way to monitor, to understand, and trace all the questions and answers and responses at scale, you’re going to find yourself in a big pickle. This actually could hurt your organization because users will be very confused about what to do. When you look at governance, most organizations are recognizing that they have to start to understand what it is that they have put in place from a systems, a process, and a tooling standpoint; focus on one use case; build out the governance for that, but build it in a way that’s going to allow you to become repeatable. AI is not going to be about one use case or two use cases. It’s whoever builds the flywheel of building many use cases in a safe, secure way, in a cost-effective way that’s driving a business outcome. If you don’t apply governance, it’s going to be very hard. At Databricks, we made a big bet on governance four or five years ago. This is one of the main reasons our company’s growing right now, because we can ensure that there’s quality data that’s going into all of your AI. You can use things like Genie, and you can use things like Agent Bricks, and you can build apps using Lakebase. None of that really works without governance. It’s really what we call the brain inside of Databricks. Most of our customers spend a lot of time inside of Unity Catalog. And the great news is that AI is helping governance get set up much more quickly. We have a customer that three years ago was trying to get all of the data assets across all their domains, from the customer, from the loyalty app, from the e-commerce engine. They had to go and map out all these data assets. AI is now doing a lot of that work for them. The human in the loop is just checking things. We’ve made this much easier with AI.
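A minimal sketch of the access-control side, using standard Databricks SQL GRANT and REVOKE statements of the kind Unity Catalog supports. The group, schema, and table names are hypothetical placeholders for the clinical-trial-style isolation Patel describes:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Analysts can find and query the sales schema, and nothing else.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")

# Clinical-trial-style isolation: one trial's team must not see another
# trial's schema at all.
spark.sql("REVOKE ALL PRIVILEGES ON SCHEMA main.trial_b FROM `trial_a_team`")
```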
We always think about AI as a business use case and an outcome, which I think is going to be where the biggest value is. But at Databricks, we’re using AI inside of our platform to make it much easier to operate and to make it much easier to provide all the right things for your business. This is a super critical part of how we plan to innovate as AI comes to fruition in the market. Megan: And Rajan, Bavesh touched on this a little bit there, but does the integration of agentic AI add another layer of complexity here too? What new considerations around governance does that raise? Rajan: That’s a very, very valid question. I would like to use a metaphor to explain. We are getting into the world of self-driving cars, robotaxis, and other things. While that takes us toward an autonomous world, there are still rules that you need to adhere to when you are driving on a road. The reason I bring up this metaphor is that what is actually required is adhering to the rules, and the different topographies and conditions, depending on where you are driving, are going to be very, very critical. The complexity that agents are going to add is basically how you operate within those constraints. For example, as a UTO, I can do 10 things, but maybe I cannot approve a discount of more than 70%, or I cannot give someone a bonus, because that sits with the CFO, which an agent should be aware of. That is one aspect: applying the constraints and making sure that the agents are adhering to them. The second set of complexity is tool access. As a business, in today’s world, when you define a process, certain processes need a certain set of tools to really action it. There are certain entitlements; only people entitled to do certain things, based on their identity and the need of the situation, should do them, and you need to govern that. The third is information sharing. While MCP, UCP, and other protocols are great, one critical thing is what you need to share and what you don’t need to share. Those are the critical considerations. The last part is learning and relearning. Sometimes when you learn good things, you should keep them. Sometimes it is better to completely remove something, reevaluate it in a newer way, relearn it in a newer way. These are all the critical things that are required. Along the same lines for agents, this is going to be paramount, because when you are operating agents for an enterprise, they need to know, learn, and adhere to certain compliance-related rules and business-related constraints, and then the entitlements, identity, and sharing rules that apply to a physical human will also start applying to an agent. That is where this is going to be very critical. This requires a new kind of operating system, though that doesn’t mean you have to go out and buy something entirely new. That is where I would extend what Bavesh touched upon with Unity Catalog. The best part, which we see some of our clients implementing, is extending Unity Catalog and its capabilities: now you can catalog the tools, catalog the MCP servers, as well as catalog these agents, and then govern those agents based on the constraints, ground them based on the constraints. It’s going to be very, very critical. Doing it not later but now, starting it as part of your strategy and enforcing it as one of the critical dimensions when you measure the value, is also going to be very critical for an organization. It is like making sure not only that you build the autonomous car, but also that the car drives by the rules of the road, not going rogue.
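The discount-cap constraint Padmanabhan describes is easy to sketch as a guardrail that sits between an agent and a tool. The limit, names, and escalation path below are all hypothetical; a real system would load such policies from a governed store rather than hard-coding them:

```python
# Hypothetical guardrail: an agent may act only within a business rule, and
# anything beyond the rule is escalated rather than executed. The 70% cap
# mirrors the example in the interview; everything else is invented.

MAX_AGENT_DISCOUNT = 0.70  # above this, only the CFO's office can approve

def approve_discount(agent_id: str, discount: float) -> str:
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be a fraction between 0 and 1")
    if discount > MAX_AGENT_DISCOUNT:
        # The agent is not entitled to act; escalate instead of executing.
        return f"escalated: {discount:.0%} exceeds agent limit for {agent_id}"
    return f"approved: {discount:.0%} discount by {agent_id}"

print(approve_discount("pricing-agent-7", 0.15))  # approved
print(approve_discount("pricing-agent-7", 0.80))  # escalated
```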
Megan: Lots to think about there. Fascinating stuff. Thank you. Just to close, with a quick look ahead: we all know the pace of development in AI and agentic AI is so rapid. For those organizations that can prioritize AI-ready data now, what are the most compelling use cases for the technology that you can see coming to the fore in the next few years, Bavesh? Bavesh: I think the excitement level is at its peak. We’ve seen so much investment in AI. I think the reason why there’s a lot of excitement is because you can look at the early adopters and you can see massive amounts of gains that these organizations are seeing. The one thing I will tell you is that there are really three categories. The companies that I think are doing well, a lot of them started out with just copilots and things that are just giving people quick answers. Think about it as making an individual productive. That is the first phase. And the ROI on that has been somewhat questionable. With something like Genie, it becomes a lot more effective because it’s actually on your data, and your data is contextualized in your organization. I think that’s one area where we’re going to see a lot of innovation. We’ll see most organizations just start to get the right information to the right person at the right time. And that has been a dream for a lot of organizations. The second one is around automating entire business processes. We see functions within marketing, like I described earlier, or whether you’re going through a process of rebates for a company. There’s a whole bunch of steps involved where you have to go into three different apps and export data from Excel and put it over here. There’s thousands of people doing very laborious, monotonous, repeatable work. These agents are really going to help get an immense amount of not only productivity for the business process, but it’s just going to make things faster. Processes that took weeks are now going to take days. Processes that took days are going to take hours and minutes now. One trend we’ve seen is that the AI world is so dynamic. In a world where you’ve got lots of different players, you want to think about first principles: what are the foundations? You want to think about owning your data, making sure you have a handle on your structured and unstructured data. You want to put governance on that. But the other thing that you want to make sure you don’t do is lock yourself in. Today, if you think about it, Gemini is really good with multimodal. Anytime you have pictures or videos or things like that, Gemini is just super good. Whereas if you’re writing code, Claude is really good. If you’re just doing certain types of questions around introspection, ChatGPT is really good. What you really want is an open data platform where you can build your own AI on multiple clouds, which is what we built at Databricks. I think that’ll help with the second piece, which is that you can pick and choose, because when you build these agents, you don’t have to be locked into just one. You should be picking the best quality and the best security and the best ROI and cost for a particular workload. One workload may use multiple of these models, and they might even be specific industry models. You need a system and a platform that can really handle this complexity.
I think the third category is business reimagination. A lot of people talk about this where, yes, you’re going to go and take the data and make it available and give everybody access to the data. You’re going to make existing processes much more efficient. But the third thing is there’s going to be brand new things that come out of it. We have a very large customer who’s a bank, and they have built a product that they didn’t have a year ago. Essentially, it’s machine learning and LLMs helping treasury departments forecast what their balances are going to be, because they have more data at their fingertips. Historically, it took a long time for the data to get to the bankers. They were not able to really predict what a balance would be for a treasury department. Think about this: a big enterprise company has now built a brand new data and AI solution that they’re monetizing, and it’s generated hundreds of millions of dollars in the first six months. We’re seeing brand new lines of business open up, and that is going to be really exciting because that’s where a lot of the transformation is going to happen. There’s going to be productivity. There’s going to be kind of automation at the business process level. Then there’s going to be these big new things that we didn’t even imagine that people are going to come up with. We are actually seeing the early signals of this in every industry. We see retailers getting data at the hourly and the minute level so that they can integrate much more closely with their supply chains. We’re seeing much more targeted customer 360-degree use cases where, as retailers or as consumers, we get annoyed by ads, but now it’s so contextualized, and you have so much information about what really matters to your target customer, that you’re giving them value-added information, and that’s engaging them more. There’s a whole bunch of innovation happening with agentic commerce and things like concierge and virtualized shopping. You look at any industry, there’s definitely new ways of doing things. This is what’s really exciting about AI, but you really have to not get too far ahead without thinking about the foundational things. You mentioned this earlier: an open data platform, making sure you have governance set up correctly, making sure you think about your historical analytical data and your application data that’s going to be real time, having a good foundation to build on. That’s going to allow you to scale and move more quickly and compete in this new world. We’re very excited about what we’re seeing with our customers and what they’re building. And honestly, that’s the best part about being in my role at Databricks, which is that our teams really go to customers and say, “What are the outcomes you’re driving?” The early signals have been super positive. We’re seeing that for companies that get serious about all the foundational elements and are really methodical about building outcome-based AI solutions, the 5% of projects that are successful are wildly successful. That’s why we’re growing as a company, because once you get a good project under your belt, that gets visibility with executives. The last thing is that historically, a lot of tech has been in the IT department. You get the business designing how they want to go to market and how they’re going to compete and what products and services they want to offer. IT was the enabler and in many cases became the cost center, relegated to rationalizing the portfolio of spend and tools.
But now we’re seeing the business kind of take the lead with AI, where they want to understand, they want to know, “Hey, what can I be doing now that was not possible before?” We see this big opportunity just with AI literacy with business users, where they’re very eager to understand how they should be thinking about AI. What does AI mean when you peel the covers? What are the pieces and the building blocks that you need to put in place, both from a technology and a training and an enablement standpoint? We’re spending a lot of time with executives helping them along this journey. We definitely see a lot of amazing opportunities ahead. Megan: Yeah. So much innovation going on. And finally, how about yourself, Rajan? What on the horizon is exciting you the most? Rajan: I think Bavesh covered quite a bit, but the way I’m seeing it, today we are predominantly talking about the labor shift. That means unlocking human potential, or shifting the current way of working to a new way of working. It’s predominantly an efficiency game. I think that is what we are seeing now, and the majority of the successful use cases are around the labor shift. But what is pretty promising is the second kind of shift: the business shift. What we are seeing as a new way of thinking, or the new thing that is coming up, is moving from a system of execution or a system of engagement to a system of action. That is the new way we see the road ahead. That is where some of the points that I touched upon come in. The business wants to have access to it, but how does it really make the real difference? One classic example that I could clearly see, which we have implemented for one of our customers primarily in the manufacturing space, is around the lifecycle of creating a product and then publishing the content around the product in line with their different B2B marketplaces. There, you are not just talking about recommending or creating; you are able to reimagine the process. What used to involve five different departments can now be done much faster, but at the same time it gives you that veracity in terms of the decisioning that you are able to do and how you are able to action it. That is the second thing which we are seeing. The third part, I think, is going to be the way commerce has evolved. Beyond agentic commerce, what we are seeing is agent-to-agent commerce, agent-to-human commerce, agent-to-agent and agent-to-human payments, and then content monetization. These are a new set of business opportunities, like building new agentic business products. It could be for fintechs, it could be on the consumer side, or it could be on the industrial technology side. These are going to be what I’m calling the economy shift, after the labor shift and the business shift, because that is going to bring a new set of systems of action, moving them from the systems of execution or the typical SaaS application with bolt-on agents, the so-called agentic application. That is going to be a major transformation, and we are underway. But on the technology side, what is very critical as the underpinning is that in today’s world you have data, analytical data, operational data, and then there is intelligence; there are different facets of it. I think both this analytical core and operational core are going to really come into one.
That’s why we are so gung-ho about the releases of Lakebase and other things, because that is the way the future is going to go. When they are really thinking about being ready for AI technology use cases, organizations should really think: how do you create this unified core for the newer world? The second part is that people have to reimagine integration. Today, if I take SAP as an example, you have hundreds of edge applications and business applications that need to integrate with one another. Typically, we create a sprawl of these integrations. As one technology use case, people can say, “Hey, how do I really create a domain-based service mesh on top of this unified core, and how do I make it more ready for agentic integration?” That is one of the technology use cases that we are advising clients on. I think now, with a lot of the new areas that are coming around SAP BDC with Databricks, and this zero-copy integration, that makes them rethink the way they need to integrate, the way they need to do things. The third part, I think, from a technology investment standpoint, is: don’t just think about now. This is the time to rethink the way you staff your organizations, the people, the FTEs. Agents are going to be your new FTEs. That means that part of the new technology paradigm is going to be that you will end up creating these co-intellects within your organization. That means you need to invest in what we call this agentic grid, where it becomes like a unified agentic fabric where all the agents can really collaborate and integrate, building on top of the same unified operational and analytical core, with the unified agentic integration on top of it, which is going to create a new set of experiences, agentic experiences, rather than the traditional experiences or conversational experiences. Then the new collaboration methods are going to be some of the critical aspects that people have to really think about from a technology standpoint. To start with, I would say you start looking at it from a data standpoint: building that unified core, building that unified integration, and building that collaboration layer for both sharing and collaborating with intelligence as well as the agentic collaboration, all governed under a single umbrella. That is going to be the one critical use case which no one will feel bad about, and they are going to get really a 100x return on their investments out of it. Megan: Certainly no shortage of exciting developments on the horizon. Thank you both so much for that conversation. That was Bavesh Patel, senior vice president for Go-to-Market at Databricks, and Rajan Padmanabhan, unit technology officer for data analytics and AI at Infosys, whom I spoke with from Brighton, England. That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts, and if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Three reasons why DeepSeek’s new model matters
On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that handles large amounts of text more efficiently. While the model remains open source, its performance matches leading closed-source rivals from Anthropic, OpenAI, and Google. It is also DeepSeek’s first release optimized for Huawei’s Ascend chips—a key test of China’s effort to reduce its dependence on Nvidia. Here are three ways V4 could shake up AI.
—Caiwei Chen

The rise of world models
AI systems have already gained impressive mastery over the digital world, but the physical world remains humanity’s domain. As it turns out, building an AI that composes novels or codes apps is far easier than developing one that folds laundry or navigates city streets. To bridge this gap, many researchers believe you need something called a world model.
Proponents like Stanford professor Fei-Fei Li and AMI Labs founder Yann LeCun argue these models can overcome the well-known limitations of LLMs—and realize AI’s promise for robotics. Find out why they’ve brought world models to the forefront of the field.

—Grace Huckins

World models are on our list of the 10 Things That Matter in AI Right Now, our essential guide to what’s really worth your attention in the field. Subscribers can watch an exclusive roundtable unveiling the technologies and trends on the list, with analysis from MIT Technology Review’s AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China has blocked Meta’s $2 billion acquisition of AI startup Manus
Regulators cited national security grounds. (WSJ $)
+ Beijing called the deal a “conspiratorial” attempt to hollow out its tech base. (FT $)
+ The country is tightening its grip on AI firms that try to leave. (TechCrunch)
+ The decision escalates China’s AI rivalry with the US. (Bloomberg $)
+ But there will be no winners in their competition. (MIT Technology Review)

2 Google is investing up to $40 billion in Anthropic
In a deal valuing the AI firm at $350 billion. (CNBC)
+ The funding will support the firm’s growing computing needs. (TechCrunch)
+ Anthropic and OpenAI are fighting for compute capacity. (Axios)

3 President Trump just fired the entire National Science Board
The NSF has played a crucial role in developing technology. (The Verge)
+ The move heightens fears over political interference in US science. (Nature)

4 Conspiracy theories about the Washington shooting are proliferating online
Over 300,000 posts appeared on X using the keyword “staged.” (NYT $)
+ The theories are also swirling on Bluesky and Instagram. (Wired)

5 The AI compute crunch is starting to hit the broader economy
It’s affecting jobs, gadgets, and electricity prices. (404 Media)
+ The AI compute explosion is the tech story of our time. (MIT Technology Review)

6 Elon Musk says a new banking tool brings X close to a “super app”
He’s pledged to launch the tool this month. (Bloomberg)

7 AI optimism is surging across Asia while US sentiment cools
The divide could shape where adoption happens fastest. (Rest of World)

8 Apple is tying its new CEO’s ascent to its first foldable iPhone
It wants to build the buzz around John Ternus. (Gizmodo)

9 Twelve firms are developing the Golden Dome’s space-based interceptors
They’ve won contracts worth up to $3.2 billion. (Ars Technica)

10 NASA has shared promising results from Artemis II
The spacecraft and rocket fared well. (Engadget)

Quote of the day
“Getting out the truth and establishing facts and reliable information takes time. But our audiences really don’t have that kind of patience.”

—Amanda Crawford, associate professor at the University of Connecticut, tells the NYT why conspiracy theories are gaining traction online.

One More Thing

Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change
Kenya’s Great Rift Valley is home to five geothermal power stations, which harness clouds of steam to generate about a quarter of the country’s electricity. But some of the energy escapes into the atmosphere, while even more remains underground for lack of demand. That’s what brought Octavia Carbon here. Last year, the startup began harnessing some of that excess energy to remove CO2 from the air. The company says the method is efficient, affordable, and—crucially—scalable. But the project also faces fierce opposition.
Announcing our partnership with the Republic of Korea
Bringing frontier AI models to Korea’s scientific community
Korea’s Ministry of Science and ICT (MSIT) has recently launched the K-Moonshot Missions, an initiative aimed at unlocking step-change improvements in research productivity and addressing national grand challenges.

Helping make this vision a reality, Google will establish an AI Campus in the Republic of Korea — an AI-focused facility within its Seoul offices.

The AI Campus will be a hub for Korean academia and research institutions to collaborate with Google’s world-leading AI experts to accelerate scientific breakthroughs through research and access to our most advanced AI for Science models, programs and events. We will begin by exploring collaborations with research-oriented institutions including Seoul National University (SNU), Korea Advanced Institute of Science and Technology (KAIST) and the Ministry’s three AI Bio Innovation Hubs, leveraging our models in fields such as life sciences, energy, weather and climate, for example:

AlphaEvolve – a Gemini-powered coding agent for designing and optimising advanced algorithms. This has shown beneficial impact across many areas in computing and math, and we are seeing similar examples emerge in drug discovery and energy.

AlphaGenome – an AI model to help scientists better understand how mutations in human DNA sequences impact a wide range of gene functions, speeding up research on genome biology and helping to improve disease understanding.

AlphaFold – already used by more than 85,000 researchers in Korea, we will explore accelerating AI-enabled predictions for proteins, DNA and RNA.

AI co-scientist – a multi-agent AI system that acts as a virtual scientific collaborator to help researchers brainstorm and verify hypotheses. This is showing promising benefits in a range of biomedical applications and we look forward to collaborating through joint research exploration and technical advisory to support the Ministry’s AI Scientist Project on ways to best integrate the system.

WeatherNext – we will explore collaborations to support Korea’s energy and sustainability goals in predicting and analyzing the impacts of extreme weather events and optimizing renewable energy on grids.

Cultivating AI talent and partnering on safety
Realizing the full potential of AI requires investing in people and building responsibly. To support the next generation of Korean AI talent, we are opening doors to forge connections with Google DeepMind, including exploring internship opportunities for Korean students. This builds on Google’s broader commitment to the region, including the recent milestone of providing 50,000 AI Essentials scholarships to help job seekers gain foundational skills.

Finally, following our Frontier AI Safety Commitments made at the AI Seoul Summit, we will collaborate with the Korean AI Safety Institute (AISI) on research and best practices.

Building on the AlphaGo legacy
As we look back on the legacy of AlphaGo, we are incredibly excited for what lies ahead. We look forward to collaborating with the government as they invest in important local AI infrastructure, such as a new National AI for Science Center (NAIS), due to open in May.

By combining Google DeepMind’s frontier AI models with the brilliant scientific minds in Korea, we believe we can unlock scientific discoveries that will benefit society for generations to come.

Data Center World 2026: Innovation Spotlight
Belden + OptiCool: Modular Cooling for the AI Middle Market
At Data Center World 2026, company representatives from Belden and OptiCool described a joint push into integrated rack-level infrastructure—pairing connectivity, power, and modular cooling into a single deployable system aimed squarely at enterprise and mid-market colocation providers. The partnership reflects a shift already underway inside Belden itself. Long known as a manufacturer of wire, cable, and connectivity products, the company said it has spent the last several years evolving into a solutions provider—leveraging a broader portfolio that spans industrial networking, automation, and control systems. That repositioning is now extending into AI infrastructure.

From Components to Fully Integrated Systems
Rather than selling discrete products into bid cycles, Belden is now packaging racks, PDUs, cable management, and cooling into a unified offering—delivered as a manufacturer-backed system rather than a third-party integration. “We can bring a full solution to the table now,” a company representative said, emphasizing that the company is “standing behind the solution as a manufacturer, not as a system integrator.” The cooling layer comes via OptiCool, whose rear-door heat exchanger (RDHx) technology is designed to scale alongside uncertain AI workloads.

Two-Phase Rear Door Cooling at Rack Scale
OptiCool’s approach centers on two-phase cooling applied at the rear door, combining the non-invasive characteristics of RDHx with the efficiency gains typically associated with direct-to-chip liquid cooling. According to company representatives, the system:

Supports up to 120 kW per rack (with 60 kW demonstrated on the show floor)
Delivers up to 10x cooling capacity compared to traditional approaches
Operates at roughly one-third the energy consumption of comparable single-phase systems

Instead of injecting cold air, the system extracts heat using refrigerant as the heat sink, reducing demand on CRAC units and broader facility cooling infrastructure.

Designing for Uncertainty: Modular, Swappable Capacity
The defining feature—and

The Trillion-Dollar AIDC Boom Gets Real: Omdia Maps the Path From Megaclusters to Microgrids
The AI data center buildout is getting bigger, denser, and more electrically complex than even many bullish observers expected. That was the core message from Omdia's Data Center World analyst summit, where Senior Director Vlad Galabov and Practice Lead Shen Wang laid out a view of the market that has grown more expansive in just the past year. What had been a large-scale infrastructure story is now, in Omdia's telling, something closer to a full-stack industrial transition: hyperscalers are still leading, but enterprises, second-tier cloud providers, and new AI use cases are beginning to add demand on top of demand.

Omdia's updated forecast reflects that shift. Galabov said the firm has now raised its 2030 projection for data center investment beyond the $1.6 trillion figure it showed a year ago, arguing that surging AI usage, expanding buyer classes, and the emergence of new power infrastructure categories have all forced a rethink. "One of the reasons why we raised it is that people keep using more AI," Galabov said. "And that just means more money, because we need to buy more GPUs to run the AI."

That is the simple version. The more consequential one is that AI is no longer behaving like a contained technology cycle. It is spilling outward into adjacent infrastructure markets, including batteries, gas-fired onsite generation, and high-voltage DC power architectures that until recently sat well outside the mainstream data center conversation.

A Market Moving Faster Than the Forecasts

Galabov opened by revisiting the predictions Omdia made last year for 2030. On several fronts, he said, the market is already validating them faster than expected:

- AI applications are becoming commonplace.
- AI has become the dominant driver of data center investment.
- Self-generation is no longer a fringe strategy.
- Even some of the rack-scale architecture concepts that once looked

AI’s Execution Era: Aligned and Netrality on Power, Speed, and the New Data Center Reality
At Data Center World 2026, the industry didn't need convincing that something fundamental has shifted. "This feels different," said Bill Kleyman as he opened a keynote fireside with Phill Lawson-Shanks and Amber Caramella. "In the past 24 months, we've seen more evolution… than in the two decades before." What followed was less a forecast than a field report from the front lines of the AI infrastructure buildout, where demand is immediate, power is decisive, and execution is everything.

A Different Kind of Growth Cycle

For Caramella, the shift starts with scale and speed. "What feels fundamentally different is just the sheer pace and breadth of the demand combined with a real shift in architecture," she said. Vacancy rates have collapsed even as capacity expands. AI workloads are not just additive; they are redefining absorption curves across the market. But the deeper change is behavioral. "Over 75% of people are using AI in their day-to-day business… and now the conversation is shifting to agentic AI," Caramella noted. That shift, from tools to delegated workflows, points to a second wave of infrastructure demand that has not yet fully materialized.

Lawson-Shanks framed the transformation in more structural terms. The industry, he said, has always followed a predictable chain: workload → software → hardware → facility → location. That chain has broken. "We had a very predictable industry… prior to Covid. And Covid changed everything," he said, describing how hyperscale demand compressed deployment cycles overnight. What followed was a surge that utilities and supply chains were not prepared to meet.

From Capacity to Constraint: Power Becomes Strategy

If AI has a gating factor, it is no longer compute. It is power. "Before it used to be an operational convenience," Caramella said. "Now it's a strategic advantage—or constraint if you don't have it." That shift is reshaping executive decision-making. Power is no
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on a week of news.