
Retrieval Augmented Generation in SQLite


This is the second in a two-part series on using SQLite for Machine Learning. In my last article, I dove into how SQLite is rapidly becoming a production-ready database for web applications. In this article, I will discuss how to perform retrieval-augmented-generation using SQLite.

If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com

The code referenced in this article can be found here.


When I first learned how to perform retrieval-augmented generation (RAG) as a budding data scientist, I followed the traditional path. This usually looks something like:

  • Google “retrieval-augmented generation” and look for tutorials
  • Find the most popular framework, usually LangChain or LlamaIndex
  • Find the most popular cloud vector database, usually Pinecone or Weaviate
  • Read a bunch of docs, put all the pieces together, and success!

In fact, I wrote an article about my experience building a RAG system in LangChain with Pinecone.

There is nothing terribly wrong with using a RAG framework with a cloud vector database. However, I would argue that for first-time learners it overcomplicates the situation. Do we really need an entire framework to learn how to do RAG? Is it necessary to perform API calls to cloud vector databases? These databases act as black boxes, which is never good for learners (or frankly for anyone).

In this article, I will walk you through how to perform RAG on the simplest stack possible. In fact, this ‘stack’ is just SQLite with the sqlite-vec extension, plus the OpenAI API for its embedding and chat models. I recommend you read part 1 of this series for a deep dive on SQLite and how it is rapidly becoming production-ready for web applications. For our purposes here, it is enough to understand that SQLite is the simplest kind of database possible: a single file in your repository.

So ditch your cloud vector databases and your bloated frameworks, and let’s do some RAG.


SQLite-Vec

One of the powers of the SQLite database is the use of extensions. For those of us familiar with Python, extensions are a lot like libraries. They are modular pieces of code, written in C, that extend the functionality of SQLite, making things possible that were once impossible. One popular example is the Full-Text Search (FTS) extension, which allows SQLite to perform efficient searches across large volumes of textual data. Because the extension is written purely in C, it runs anywhere a SQLite database can run, including Raspberry Pis and browsers.
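As a quick, hedged illustration (not from the original article): a minimal FTS5 session from Python, assuming a SQLite build with FTS5 compiled in (the default in most standard distributions). The notes table and sample text are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
# Create a full-text-search virtual table backed by the fts5 module
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
conn.execute("INSERT INTO notes (body) VALUES ('SQLite extensions are modular C libraries.')")
# MATCH runs an indexed full-text search rather than a linear scan
print(conn.execute("SELECT body FROM notes WHERE notes MATCH 'extensions'").fetchall())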

In this article I will be going over the extension known as sqlite-vec. This gives SQLite the power to perform vector search. Vector search is similar to full-text search in that it allows for efficient search across textual data. However, rather than searching for an exact word or phrase, vector search has a semantic understanding of the query. In other words, searching for “horses” will also surface matches like “equestrian”, “pony”, and “Clydesdale”. Full-text search is incapable of this.

sqlite-vec makes use of virtual tables, as do most extensions in SQLite. A virtual table is similar to a regular table, but with additional powers:

  • Custom Data Sources: The data for a standard table in SQLite is housed in a single db file. For a virtual table, the data can be housed in external sources, for example a CSV file or an API call.
  • Flexible Functionality: Virtual tables can add specialized indexing or querying capabilities and support complex data types like JSON or XML.
  • Integration with SQLite Query Engine: Virtual tables integrate seamlessly with SQLite’s standard query syntax, e.g. SELECT, INSERT, UPDATE, and DELETE. Ultimately it is up to the writers of the extension to support these operations.
  • Use of Modules: The backend logic for how the virtual table will work is implemented by a module (written in C or another language).

The typical syntax for creating a virtual table looks like the following:

CREATE VIRTUAL TABLE my_table USING my_extension_module();

The important part of this statement is my_extension_module(). This specifies the module that will power the backend of the my_table virtual table. In sqlite-vec we will use the vec0 module.

Code Walkthrough

The code for this article can be found here. It is a simple directory with the majority of files being .txt files that we will be using as our dummy data. Because I am a physics nerd, the majority of the files pertain to physics, with just a few files relating to other random fields. I will not present the full code in this walkthrough, but instead will highlight the important pieces. Clone my repo and play around with it to investigate the full code. Below is a tree view of the repo. Note that my_docs.db is the single-file database used by SQLite to manage all of our data.

.
├── data
│   ├── cooking.txt
│   ├── gardening.txt
│   ├── general_relativity.txt
│   ├── newton.txt
│   ├── personal_finance.txt
│   ├── quantum.txt
│   ├── thermodynamics.txt
│   └── travel.txt
├── my_docs.db
├── requirements.txt
└── sqlite_rag_tutorial.py

Step 1 is to install the necessary libraries. Below is our requirements.txt file. As you can see it has only three libraries. I recommend creating a virtual environment with the latest Python version (3.13.1 was used for this article) and then running pip install -r requirements.txt to install the libraries.

# requirements.txt
sqlite-vec==0.1.6
openai==1.63.0
python-dotenv==1.0.1

Step 2 is to create an OpenAI API key if you don’t already have one. We will be using OpenAI to generate embeddings for the text files so that we can perform our vector search. 
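The key also needs to be visible to the script. Since the code below uses python-dotenv, one minimal approach (my assumption; the original repo may differ) is a .env file in the project root containing OPENAI_API_KEY=..., which you can sanity-check like so:

# Hypothetical sanity check that the API key is available to the script
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into the environment
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY not set; add it to .env"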

# sqlite_rag_tutorial.py
import os
import sqlite3

import sqlite_vec
from sqlite_vec import serialize_float32
from openai import OpenAI
from dotenv import load_dotenv

# Load environment variables (including OPENAI_API_KEY) from the .env file
load_dotenv()

# Set up OpenAI client
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

Step 3 is to load the sqlite-vec extension into SQLite. We will be using Python and SQL for our examples in this article. Disabling the ability to load extensions immediately after loading your extension is a good security practice.

# Path to the database file
db_path = 'my_docs.db'

# Delete the database file if it exists, so the script can be re-run from scratch
if os.path.exists(db_path):
    os.remove(db_path)

db = sqlite3.connect(db_path)
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

Next we will go ahead and create our virtual table:

db.execute('''
    CREATE VIRTUAL TABLE documents USING vec0(
        embedding float[1536],
        +file_name TEXT,
        +content TEXT
    )
''')

documents is a virtual table with three columns:

  • embedding : a 1536-dimensional float vector that will store the embedding of each sample document.
  • file_name : text that will house the name of each file we store in the database. Note that this column and the next have a + symbol in front of them, which marks them as auxiliary fields. Previously, sqlite-vec could only store embedding data in the virtual table, but a recent update lets us add fields we don’t want embedded. Here we keep each file’s name and content in the same table as its embedding, making it easy to see which embedding corresponds to which content while sparing us extra tables and JOIN statements.
  • content : text that will store the content of each file.

Now that we have our virtual table set up in our SQLite database, we can begin converting our text files into embeddings and storing them in our table:

# Function to get embeddings using the OpenAI API
def get_openai_embedding(text):
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

# Iterate over .txt files in the /data directory
for file_name in os.listdir("data"):
    file_path = os.path.join("data", file_name)
    with open(file_path, 'r', encoding='utf-8') as file:
        content = file.read()
        # Generate embedding for the content
        embedding = get_openai_embedding(content)
        if embedding:
            # Insert file content and embedding into the vec0 table
            db.execute(
                'INSERT INTO documents (embedding, file_name, content) VALUES (?, ?, ?)',
                (serialize_float32(embedding), file_name, content)
            )

# Commit changes
db.commit()

We essentially loop through each of our .txt files, embed the content of each file, and then use an INSERT INTO statement to insert the embedding, file_name, and content into the documents virtual table. A commit statement at the end ensures the changes are persisted. Note that we are using serialize_float32 from the sqlite-vec library here. SQLite itself has no built-in vector type, so vectors are stored as binary large objects (BLOBs) to save space and allow fast operations. Internally, serialize_float32 uses Python’s struct.pack() function, which converts Python data into a C-style binary representation.
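For intuition, here is a minimal sketch of that serialization, assuming standard 4-byte float32 packing; the helper name below is mine, not the library’s:

# Hypothetical re-implementation of float32 vector serialization
import struct

def serialize_f32_sketch(vector):
    # Pack each float into 4 bytes, yielding a BLOB of len(vector) * 4 bytes
    return struct.pack(f"{len(vector)}f", *vector)

# A 3-dimensional vector becomes a 12-byte BLOB
assert len(serialize_f32_sketch([0.1, 0.2, 0.3])) == 12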

Finally, to perform RAG, we use the following code to run a K-Nearest-Neighbors (KNN-style) query. This is the heart of vector search.

# Perform a sample KNN query
query_text = "What is general relativity?"
query_embedding = get_openai_embedding(query_text)
if query_embedding:
    rows = db.execute(
        """
        SELECT
            file_name,
            content,
            distance
        FROM documents
        WHERE embedding MATCH ?
        ORDER BY distance
        LIMIT 3
        """,
        [serialize_float32(query_embedding)]
    ).fetchall()

    print("Top 3 most similar documents:")
    top_contexts = []
    for row in rows:
        print(row)
        top_contexts.append(row[1])  # Append the 'content' column

We begin by taking in a query from the user, in this case “What is general relativity?”, and embedding that query using the same embedding model as before. We then perform a SQL operation. Let’s break this down:

  • The SELECT statement means the retrieved data will have three columns: file_name, content, and distance. The first two we have already mentioned; distance will be calculated during the SQL operation, more on this in a moment.
  • The FROM statement ensures you are pulling data from the documents table.
  • The WHERE embedding MATCH ? statement performs a similarity search between the query vector and every vector in the database. The returned data includes a distance column: a floating-point number measuring how far apart the query and database vectors are. The lower the number, the more similar the vectors. sqlite-vec provides a few options for how this distance is calculated, as the sketch after this list shows.
  • ORDER BY distance sorts the retrieved rows by ascending distance, i.e. from most to least similar.
  • LIMIT 3 ensures we only get the three documents nearest to our query embedding vector. You can tweak this number to see how retrieving more or fewer documents affects your results.
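As a sketch of those options: the distance metric can be chosen per embedding column when the table is created. L2 (Euclidean) is the default; the cosine variant below assumes a sqlite-vec version recent enough to support the distance_metric column option, and the table name is hypothetical:

# Hypothetical variant of our table using cosine distance instead of the
# default L2 (Euclidean) distance; assumes sqlite-vec supports the
# distance_metric column option
db.execute('''
    CREATE VIRTUAL TABLE documents_cosine USING vec0(
        embedding float[1536] distance_metric=cosine,
        +file_name TEXT,
        +content TEXT
    )
''')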

Given our query of “What is general relativity?”, the following documents were pulled. It did a pretty good job!

Top 3 most similar documents:

('general_relativity.txt', 'Einstein’s theory of general relativity redefined our understanding of gravity. Instead of viewing gravity as a force acting at a distance, it interprets it as the curvature of spacetime around massive objects. Light passing near a massive star bends slightly, galaxies deflect beams traveling millions of light-years, and clocks tick at different rates depending on their gravitational potential. This groundbreaking theory led to predictions like gravitational lensing and black holes, phenomena later confirmed by observational evidence, and it continues to guide our understanding of the cosmos.', 0.8316285610198975)

('newton.txt', 'In classical mechanics, Newton’s laws of motion form the foundation of how we understand the movement of objects. Newton’s first law, often called the law of inertia, states that an object at rest remains at rest and an object in motion continues in motion unless acted upon by an external force. This concept extends into more complex physics problems, where analyzing net forces on objects allows us to predict their future trajectories and behaviors. Over time, applying Newton’s laws has enabled engineers and scientists to design safer vehicles, more efficient machines, and even guide spacecraft through intricate gravitational fields.', 1.2036118507385254)

('quantum.txt', 'Quantum mechanics revolutionized our understanding of the microscopic world. Unlike classical particles, quantum entities such as electrons can exhibit both wave-like and particle-like behaviors. Phenomena like quantum superposition suggest that particles can exist in multiple states at once, and the act of measurement often “collapses” these states into one observed outcome. This strange and counterintuitive theory underpins modern technologies like semiconductors and lasers, and it provides a conceptual framework for emerging fields like quantum computing and cryptography.', 1.251380205154419)

We can then stuff the model’s context window with these three documents and have it attempt to answer our question.

# Prepare the context for the query
context = "\n\n".join(top_contexts)

system_message = "You are a helpful assistant. Use the following context to answer the query."

# Send query and context to OpenAI
try:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": f"Context: {context}\n\nQuery: {query_text}"}
        ]
    )
    print("Response:")
    print(completion.choices[0].message.content)
except Exception as e:
    print(f"Error generating response: {e}")

Here is what the model said:

General relativity is a theory developed by Albert Einstein that redefines our understanding of gravity. Instead of viewing gravity as a force acting at a distance, general relativity interprets it as the curvature of spacetime caused by the presence of mass. According to this theory, massive objects like stars and planets create a distortion in the fabric of spacetime, and this curvature affects the motion of other objects, including light, which can bend when it passes near a massive body.

The response sticks faithfully to the documents we gave the model. Great job, 4o-mini!

Conclusion

sqlite-vec is a project sponsored by the Mozilla Builders Accelerator program, so it has some significant backing behind it. I have to give a big thanks to Alex Garcia, the creator of sqlite-vec, for helping to push the SQLite ecosystem forward and making ML possible with this simple database. This is a well-maintained library, with updates coming down the pipeline on a regular basis. As of November 20th, they even added filtering by metadata! Perhaps I should re-do my aforementioned RAG article using SQLite 🤔.

The extension also offers bindings for several popular programming languages, including Ruby, Go, Rust, and more.

The fact that we are able to radically simplify our RAG pipeline to the bare essentials is remarkable. To recap: there is no need to spin up and manage a database service like Postgres or MySQL, and no need for API calls to a cloud vector database. If you deploy to a server directly via Digital Ocean or Hetzner, you can even avoid the costly and unnecessary complexity associated with managed cloud services like AWS, Azure, or Vercel.

I believe this simple architecture can work for a variety of applications. It is cheaper to use, easier to maintain, and faster to iterate on. Once you reach a certain scale it will likely make sense to migrate to a more robust database such as Postgres with the pgvector extension for RAG capabilities. For more advanced capabilities such as chunking and document cleaning, a framework may be the right choice. But for startups and smaller players, it’s SQLite to the moon. 

Have fun trying out sqlite-vec for yourself!

Simple RAG architecture. Image by author.