AI Agents from Zero to Hero – Part 1


Intro

AI Agents are autonomous programs that perform tasks, make decisions, and communicate with others. Normally, they use a set of tools to help complete tasks. In GenAI applications, these Agents process sequential reasoning and can use external tools (like web searches or database queries) when the LLM's knowledge isn't enough. Unlike a basic chatbot, which simply generates its best guess when uncertain, an AI Agent activates tools to provide more accurate, specific responses.

We are moving closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond reactively to human inputs, tomorrow's Agentic AIs will proactively engage in problem-solving and adjust their behavior based on the situation.

Today, building Agents from scratch is becoming as easy as training a logistic regression model 10 years ago. Back then, Scikit-Learn provided a straightforward library to quickly train Machine Learning models with just a few lines of code, abstracting away much of the underlying complexity.

In this tutorial, I'm going to show how to build different types of AI Agents from scratch, from simple to more advanced systems. I will present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example.

Setup

As I said, anyone can have a custom Agent running locally for free without GPUs or API keys. The only necessary library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and performance.

First of all, you need to download Ollama from the website. 

Then, on the prompt shell of your laptop, use the command below to download the selected LLM. I'm going with Alibaba's Qwen, as it's both capable and lightweight.
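Assuming the tag matches the model name used in the Python code later on:

ollama pull qwen2.5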

After the download is completed, you can move on to Python and start writing code.

import ollama
llm = "qwen2.5"

Let’s test the LLM:

stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)

Obviously, the LLM on its own is very limited: it can't do much besides chatting. Therefore, we need to give it the ability to take action, or in other words, to activate Tools.

One of the most common tools is the ability to search the Internet. In Python, the easiest way to do it is with the privacy-focused search engine DuckDuckGo (pip install duckduckgo-search==6.3.5). You can use the original library directly or import the LangChain wrapper (pip install langchain-community==0.3.17). 

With Ollama, in order to use a Tool, the function must be described in a dictionary.

from langchain_community.tools import DuckDuckGoSearchResults
def search_web(query: str) -> str:
  return DuckDuckGoSearchResults(backend="news").run(query)

tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'string', 'description':'the topic or subject to search on the web'},
}}}}
## test
search_web(query="nvidia")

Internet searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I'm planning to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.

def search_yf(query: str) -> str:
  engine = DuckDuckGoSearchResults(backend="news")
  return engine.run(f"site:finance.yahoo.com {query}")

tool_search_yf = {'type':'function', 'function':{
  'name': 'search_yf',
  'description': 'Search for specific financial news',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'string', 'description':'the financial topic or subject to search'},
}}}}

## test
search_yf(query="nvidia")

Simple Agent (WebSearch)

In my opinion, the most basic Agent should at least be able to choose between one or two Tools and re-elaborate the output of the action to give the user a proper and concise answer. 

First, you need to write a prompt describing the Agent's purpose (the more detailed the better; mine is very generic), which will be the first message in the chat history with the LLM. 

prompt = '''You are an assistant with access to tools; you must decide when to use tools to answer the user's message.'''
messages = [{"role":"system", "content":prompt}]

In order to keep the chat with the AI alive, I will use a loop that starts with the user's input; the Agent is then invoked to respond, either with text from the LLM or with the activation of a Tool.

while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )
   
    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_search_web, tool_search_yf],
        messages=messages)

Up to this point, the chat history contains the system prompt and the user's message. If the model wants to use a Tool, its response object includes the name of the function to run and the input parameters suggested by the LLM. So our code needs to get that information and run the Tool function.
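For reference, when the model decides on a tool call, the message in the response returned by ollama.chat() looks roughly like this (the arguments are whatever the model generates, so treat the values as illustrative):

agent_res["message"]
## {'role': 'assistant', 'content': '',
##  'tool_calls': [{'function': {'name': 'search_web',
##                               'arguments': {'query': 'nvidia'}}}]}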

    ## response
    dic_tools = {'search_web':search_web, 'search_yf':search_yf}

    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                p = f'''Summarize this to answer user question, be as concise as possible: {t_output}'''
                res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]

    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

Now, if we run the full code, we can chat with our Agent.

Advanced Agent (Coding)

LLMs know how to code because they are exposed to a large corpus of both code and natural language text, where they learn the patterns, syntax, and semantics of programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can't execute it. Agents can.

I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily create a shell to run code as a string with the built-in function exec().

import io
import contextlib

def code_exec(code: str) -> str:
    ## capture everything the executed code prints to stdout
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

tool_code_exec = {'type':'function', 'function':{
  'name': 'code_exec',
  'description': 'execute python code',
  'parameters': {'type': 'object',
                'required': ['code'],
                'properties': {
                    'code': {'type':'string', 'description':'code to execute'},
}}}}

## test
code_exec("a=1+1; print(a)")

Just like before, I will write a prompt, but this time, at the beginning of the chat-loop, I will ask the user to provide a file path.

prompt = '''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    messages.append( {"role":"user", "content":q} )

Since coding tasks can be a little trickier for LLMs, I am also going to add memory reinforcement. By default, during one session, there isn't a true long-term memory. LLMs have access to the chat history, so they can remember information temporarily and track the context and instructions you've given earlier in the conversation. However, memory doesn't always work as expected, especially if the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.

prompt = '''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
memory = '''Use the dataframe 'df'.'''
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    ## memory
    if start is False:
        q = memory+"\n"+q
    messages.append( {"role":"user", "content":q} )

Please note that the default context window in Ollama is 2048 tokens. If your machine can handle it, you can increase it by changing num_ctx when the LLM is invoked (the example below keeps the default):

    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_code_exec],
        options={"num_ctx":2048},
        messages=messages)

In this use case, the output of the Agent is mostly code and data, so I don't want the LLM to re-elaborate the responses.

    ## response
    dic_tools = {'code_exec':code_exec}
   
    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"x1b[1;31m{t_name} -> Inputs: {t_inputs}x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                ### final res
                res = t_output
            else:
                print('🤬 >', f"x1b[1;31m{t_name} -> NotFoundx1b[0m")
 
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
     
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
    start = False

Now, if we run the full code, we can chat with our Agent.

Conclusion

This article has covered the foundational steps of creating Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases. 

Stay tuned for Part 2, where we will dive deeper into more advanced examples.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

👉 Let’s Connect 👈
