Patronus AI announced today the launch of what it calls the industry’s first multimodal large language model-as-a-judge (MLLM-as-a-Judge), a tool designed to evaluate AI systems that interpret images and produce text.
The new evaluation technology aims to help developers detect and mitigate hallucinations and reliability issues in multimodal AI applications. E-commerce giant Etsy has already implemented the technology to verify caption accuracy for product images across its marketplace of handmade and vintage goods.
“Super excited to announce that Etsy is one of our launch customers,” said Anand Kannappan, cofounder of Patronus AI, in an exclusive interview with VentureBeat. “They have hundreds of millions of items in their online marketplace for handmade and vintage products that people are creating around the world. One of the things that their AI team wanted to be able to leverage generative AI for was the ability to auto-generate image captions and to make sure that as they scale across their entire global user base, the captions that are generated are ultimately correct.”
Why Google’s Gemini powers the new AI judge rather than OpenAI
Patronus built its first MLLM-as-a-Judge, called Judge-Image, on Google’s Gemini model after extensive research comparing it with alternatives like OpenAI’s GPT-4V.
“We tended to see that there was a slight preference toward egocentricity with GPT-4V, whereas we saw that Gemini was less biased in those ways and had more of an equitable approach to being able to judge different kinds of input-output pairs,” Kannappan explained. “That was seen in the uniform scoring distribution across the different sources that they looked at.”
The company’s research yielded another surprising insight about multimodal evaluation. Unlike text-only evaluations where multi-step reasoning often improves performance, Kannappan noted that it “typically doesn’t actually increase MLLM judge performance” for image-based assessments.
Judge-Image provides ready-to-use evaluators that assess image captions on multiple criteria, including caption hallucination detection, recognition of primary and non-primary objects, object location accuracy, and text detection and analysis.
Beyond retail: How marketing teams and law firms can benefit from AI image evaluation
While Etsy represents a flagship customer in e-commerce, Patronus sees applications extending far beyond retail.
These include “marketing teams across companies that are generally looking at being able to scalably create descriptions and captions against new blocks in design, especially marketing design, but also product design,” Kannappan said.
He also highlighted applications for enterprises dealing with document processing: “Larger enterprises like venture services companies and law firms typically might have engineering teams that are using relatively legacy technology to be able to extract different kinds of information from PDFs, to be able to summarize the content inside of larger documents.”
As AI becomes increasingly critical to business processes, many companies face the build-versus-buy dilemma for evaluation tools. Kannappan argues that outsourcing AI evaluation makes strategic and economic sense.
“As we’ve worked with teams, [we’ve found that] a lot of folks may start with something to see if they can develop something internally, and then they realize that it’s, one, not core to their value prop or the product they’re developing. And two, it is a very challenging problem, both from an AI perspective, but also from an infrastructure perspective,” he said.
This applies particularly to multimodal systems, where failures can occur at multiple points in the process. “When you’re dealing with RAG systems or agents, or even multimodal AI systems, we’re seeing that failures happen across all parts of the system,” Kannappan noted.
How Patronus plans to make money while competing with tech giants
Patronus offers multiple pricing tiers, starting with a free option that allows users to experiment with the platform up to certain volume limits. Beyond that threshold, customers pay as they go for evaluator usage or can engage with the sales team for enterprise arrangements with custom features and tailored pricing.
Despite using Google’s Gemini model as its foundation, the company positions itself as complementary rather than competitive with foundation model providers like Google, OpenAI and Anthropic.
“We don’t necessarily see the technology that we build or the solutions that we build as competitive with foundational companies, but rather very complementary and additional new powerful tools in the toolkit that ultimately help folks develop better LLM systems, as opposed to LLMs themselves,” Kannappan said.
Audio evaluation coming next as Patronus expands multimodal oversight
Today’s announcement represents one step in Patronus’s broader strategy for AI evaluation across different modalities. The company plans to expand beyond images into audio evaluation soon.
“We’re excited because this is the next phase of our vision towards multimodal, and specifically focused on images today — and then over time, we’re excited about what we’ll do, especially with audio in the future,” Kannappan confirmed.
This roadmap aligns with what Kannappan describes as the company’s “research vision towards scalable oversight” — developing evaluation mechanisms that can keep pace with increasingly sophisticated AI systems.
“We continue to develop new systems, products, frameworks, methods that ultimately are equally capable as the intelligent systems that we intend to want to have oversight over as humans in the long run,” he said.
As businesses race to deploy AI systems that can interpret images, extract text from documents, and generate visual content, the risk of inaccuracies, hallucinations and biases grows. Patronus is betting that even as foundation models improve, the challenges of evaluating complex multimodal AI systems will remain — requiring specialized tools that can serve as impartial judges of increasingly human-like AI output. In the high-stakes world of commercial AI deployment, these digital judges may prove as valuable as the models they evaluate.
