AI Agents from Zero to Hero – Part 1

Intro

AI Agents are autonomous programs that perform tasks, make decisions, and communicate with others. Normally, they use a set of tools to help complete tasks. In GenAI applications, these Agents process sequential reasoning and can use external tools (like web searches or database queries) when the LLM's knowledge isn't enough. Unlike a basic chatbot, which tends to make something up when uncertain, an AI Agent activates tools to provide more accurate, specific responses.

We are moving closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond reactively to human inputs, tomorrow's Agentic AIs will proactively engage in problem-solving and adjust their behavior based on the situation.

Today, building Agents from scratch is becoming as easy as training a logistic regression model 10 years ago. Back then, Scikit-Learn provided a straightforward library to quickly train Machine Learning models with just a few lines of code, abstracting away much of the underlying complexity.

In this tutorial, I’m going to show how to build from scratch different types of AI Agents, from simple to more advanced systems. I will present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example.

Setup

As I said, anyone can have a custom Agent running locally for free without GPUs or API keys. The only necessary library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and performance.

First of all, you need to download Ollama from the website. 

Then, in a terminal on your laptop, use the command below to download the selected LLM. I'm going with Alibaba's Qwen, as it's both smart and lightweight.
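
For example, this terminal command pulls the model I use throughout the article (qwen2.5 is the tag on Ollama's model registry; swap it for another tag if you prefer a different model):

ollama pull qwen2.5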

After the download is completed, you can move on to Python and start writing code.

import ollama
llm = "qwen2.5"

Let’s test the LLM:

stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)

Obviously, the LLM on its own is very limited: it can't do much besides chatting. Therefore, we need to give it the ability to take action or, in other words, to activate Tools.

One of the most common tools is the ability to search the Internet. In Python, the easiest way to do it is with the well-known privacy-focused search engine DuckDuckGo (pip install duckduckgo-search==6.3.5). You can use the original library directly or import the LangChain wrapper (pip install langchain-community==0.3.17).

With Ollama, in order to use a Tool, the function must be described in a dictionary.

from langchain_community.tools import DuckDuckGoSearchResults
def search_web(query: str) -> str:
  return DuckDuckGoSearchResults(backend="news").run(query)

tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'str', 'description':'the topic or subject to search on the web'},
}}}}
## test
search_web(query="nvidia")

Internet searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I'm planning to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.

def search_yf(query: str) -> str:
  engine = DuckDuckGoSearchResults(backend="news")
  return engine.run(f"site:finance.yahoo.com {query}")

tool_search_yf = {'type':'function', 'function':{
  'name': 'search_yf',
  'description': 'Search for specific financial news',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'str', 'description':'the financial topic or subject to search'},
}}}}

## test
search_yf(query="nvidia")

Simple Agent (WebSearch)

In my opinion, the most basic Agent should at least be able to choose between one or two Tools and rework the output of the action into a proper, concise answer for the user.

First, you need to write a prompt describing the Agent's purpose (the more detailed the better; mine is very generic), which will be the first message in the chat history with the LLM.

prompt = '''You are an assistant with access to tools, you must decide when to use tools to answer user message.''' 
messages = [{"role":"system", "content":prompt}]

In order to keep the chat with the AI alive, I will use a loop that starts with the user's input; then the Agent is invoked to respond (either with text from the LLM or by activating a Tool).

while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )
   
    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_search_web, tool_search_yf],
        messages=messages)

Up to this point, the chat history could look something like this:
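
For illustration, here is a minimal sketch of what the messages list might contain at this point (the user question is just a placeholder):

messages = [
  {"role":"system", "content":"You are an assistant with access to tools, you must decide when to use tools to answer user message."},
  {"role":"user", "content":"what is the latest news about nvidia?"}
]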

If the model wants to use a Tool, the appropriate function needs to be run with the input parameters suggested by the LLM in its response object:
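
As a rough sketch, a tool call in the response object has a shape like this (the exact arguments depend on what the model decides):

agent_res["message"]["tool_calls"]
## e.g. [ {'function': {'name':'search_web', 'arguments':{'query':'nvidia'}}} ]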

So our code needs to get that information and run the Tool function.

    ## response
    dic_tools = {'search_web':search_web, 'search_yf':search_yf}

    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                p = f'''Summarize this to answer user question, be as concise as possible: {t_output}'''
                res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
 
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
     
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

Now, if we run the full code, we can chat with our Agent.

Advanced Agent (Coding)

LLMs know how to code because they were exposed to a large corpus of both code and natural language text, where they learned the patterns, syntax, and semantics of programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can't execute it; Agents can.

I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily run code passed as a string with the built-in function exec().

import io
import contextlib

def code_exec(code: str) -> str:
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

tool_code_exec = {'type':'function', 'function':{
  'name': 'code_exec',
  'description': 'execute python code',
  'parameters': {'type': 'object',
                'required': ['code'],
                'properties': {
                    'code': {'type':'str', 'description':'code to execute'},
}}}}

## test
code_exec("a=1+1; print(a)")

Just like before, I will write a prompt, but this time, at the beginning of the chat-loop, I will ask the user to provide a file path.

prompt = '''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    messages.append( {"role":"user", "content":q} )

Since coding tasks can be a little trickier for LLMs, I am also going to add memory reinforcement. By default, there isn't a true long-term memory during a session. LLMs have access to the chat history, so they can remember information temporarily and track the context and instructions you've given earlier in the conversation. However, memory doesn't always work as expected, especially if the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.

prompt = '''You are an expert data scientist, and you have tools to execute python code.
First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
If you create a plot, ALWAYS add 'plt.show()' at the end.
'''
messages = [{"role":"system", "content":prompt}]
memory = '''Use the dataframe 'df'.'''
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    ## memory
    if start is False:
        q = memory+"\n"+q
    messages.append( {"role":"user", "content":q} )

Please note that the default context length in Ollama is 2048 tokens. If your machine can handle it, you can increase it by raising the num_ctx value (for example to 4096) when the LLM is invoked:

    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_code_exec],
        options={"num_ctx":2048},
        messages=messages)

In this use case, the output of the Agent is mostly code and data, so I don't want the LLM to rework the responses.

    ## response
    dic_tools = {'code_exec':code_exec}
   
    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
 
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
     
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
    start = False

Now, if we run the full code, we can chat with our Agent.

Conclusion

This article has covered the foundational steps of creating Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases. 

Stay tuned for Part 2, where we will dive deeper into more advanced examples.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

👉 Let’s Connect 👈
