Lightricks, the company behind popular creative apps like Facetune and VideoLeap, announced today the release of its most powerful AI video generation model to date. The LTX Video 13-billion-parameter model (LTXV-13B) generates high-quality AI video up to 30 times faster than comparable models while running on consumer-grade hardware rather than expensive enterprise GPUs.
The model introduces “multiscale rendering,” a novel technical approach that dramatically increases efficiency by generating video in progressive layers of detail. This enables creators to produce professional-quality AI videos on standard desktop computers and high-end laptops instead of requiring specialized enterprise equipment.
“The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs,” said Zeev Farbman, co-founder and CEO of Lightricks, in an exclusive interview with VentureBeat. “Our users can now create content with more consistency, better quality, and tighter control.”
How Lightricks democratizes AI video by solving the GPU memory problem
A major challenge for AI video generation has been the enormous computational requirements. Leading models from companies like Runway, Pika, and Luma typically run in the cloud on multiple enterprise-grade GPUs with 80GB or more of VRAM (video memory), making local deployment impractical for most users.
Farbman explained how LTXV-13B addresses this limitation: “The major dividing line between consumer and enterprise GPUs is the amount of VRAM. Nvidia positions their gaming hardware with strict memory limits — the previous generation 3090 and 4090 GPUs maxed out at 24 gigabytes of VRAM, while the newest 5090 reaches 32 gigabytes. Enterprise hardware, by comparison, offers significantly more.”
The new model is designed to operate effectively within these consumer hardware constraints. “The full model, without any quantization, without any approximation, you will be able to run on top consumer GPUs — 3090, 4090, 5090, including their laptop versions,” Farbman noted.
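For readers wondering whether their own machine clears that bar, a quick PyTorch query of the installed GPU is enough to check. This is a minimal sketch; the 24 GB cutoff below simply mirrors the 3090/4090 figure Farbman cites and is illustrative, not an official requirement published by Lightricks.

```python
# Check local GPU VRAM against the consumer-card figures cited above.
# The 24 GB threshold is illustrative (3090/4090-class); adjust as needed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb >= 24:
        print("Meets the 24 GB consumer-GPU bar mentioned above")
    else:
        print("Below 24 GB; a quantized or tiled variant may be needed")
else:
    print("No CUDA GPU detected")
```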
Inside ‘multiscale rendering’: The artist-inspired technique that makes AI video generation 30X faster
The core innovation behind LTXV-13B's efficiency is its multiscale rendering approach, which Farbman described as "the biggest technical breakthrough of this release."
“It allows the model to generate details gradually,” he explained. “You’re starting on the coarse grid, getting a rough approximation of the scene, of the motion of the objects moving, etc. And then the scene is kind of divided into tiles. And every tile is filled with progressively more details.”
This process mirrors how artists approach complex scenes — starting with rough sketches before adding progressively finer details. The advantage for AI is that “your peak amount of VRAM is limited by a tile size, not the final resolution,” Farbman said.
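Lightricks has not published pseudocode for the pipeline, but the coarse-to-fine, tile-by-tile flow Farbman describes can be sketched roughly as follows. Everything here is a toy stand-in: the function names, shapes, and upscale factor are hypothetical placeholders, not LTXV-13B's actual internals.

```python
# Toy sketch of coarse-to-fine, tile-based rendering (not LTXV-13B's real code).
import numpy as np

rng = np.random.default_rng(0)

def generate_coarse(prompt, shape):
    # Hypothetical stand-in for a low-resolution pass that captures
    # global scene layout and motion as a frames x height x width array.
    return rng.standard_normal(shape).astype(np.float32)

def refine_tile(prompt, patch):
    # Hypothetical stand-in for the high-resolution refinement of one tile.
    return patch + 0.1 * rng.standard_normal(patch.shape).astype(np.float32)

def multiscale_render(prompt, coarse_shape=(8, 64, 64), tile_hw=(128, 128), upscale=4):
    # 1) Rough pass on a small grid.
    coarse = generate_coarse(prompt, coarse_shape)
    # 2) Upsample spatially to the target resolution.
    full = np.kron(coarse, np.ones((1, upscale, upscale), dtype=np.float32))
    # 3) Refine tile by tile: each refinement call only ever sees one tile,
    #    which is why, in the real model, peak VRAM tracks tile size rather
    #    than final resolution.
    th, tw = tile_hw
    for y in range(0, full.shape[1], th):
        for x in range(0, full.shape[2], tw):
            full[:, y:y+th, x:x+tw] = refine_tile(prompt, full[:, y:y+th, x:x+tw])
    return full

video = multiscale_render("a surfer at sunset")
print(video.shape)  # (8, 256, 256)
```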
The model also features a more compressed latent space, which requires less memory while maintaining quality. “With videos, you have a higher compression ratio that allows you, while you’re in the latent space, to just take less VRAM,” Farbman added.
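As a back-of-the-envelope illustration of why a more aggressive video autoencoder saves memory, consider the arithmetic below. The compression factors and channel count are assumed purely for illustration, not LTXV-13B's published figures.

```python
# Rough memory estimate for holding a short clip in pixel space vs. latent space.
# All factors here are illustrative assumptions, not LTXV-13B's published numbers.
frames, height, width, rgb = 121, 512, 768, 3
bytes_fp16 = 2

pixel_bytes = frames * height * width * rgb * bytes_fp16

# Assume a video VAE that compresses 32x spatially and 8x temporally,
# producing 128 latent channels.
spatial, temporal, latent_channels = 32, 8, 128
latent_bytes = ((frames // temporal) * (height // spatial) * (width // spatial)
                * latent_channels * bytes_fp16)

print(f"pixel space : {pixel_bytes / 2**20:.1f} MiB")   # ~272 MiB
print(f"latent space: {latent_bytes / 2**20:.1f} MiB")  # ~1.4 MiB
```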

Why Lightricks is betting on open source when AI markets are increasingly closed
While many leading AI models remain behind closed APIs, Lightricks has made LTXV-13B fully open source, available on both Hugging Face and GitHub. This decision comes during a period when open-source AI development has faced challenges from commercial competition.
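For developers who want to experiment with the weights, a minimal text-to-video call through Hugging Face's diffusers library might look like the sketch below. The checkpoint ID, resolution, and step count are assumptions to verify against the LTX-Video repository, and the 13B variant may expose a different entry point than the one shown.

```python
# Minimal text-to-video sketch using diffusers' LTX pipeline.
# Checkpoint ID and generation settings are assumptions; consult the
# Lightricks LTX-Video repo on Hugging Face/GitHub for the exact usage.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="a timelapse of clouds rolling over a mountain ridge",
    width=768,
    height=512,
    num_frames=121,
    num_inference_steps=40,
).frames[0]

export_to_video(frames, "ltx_clip.mp4", fps=24)
```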
“A year ago, things were closed, but things are kind of opening up. We’re seeing really a lot of cool LLMs and diffusion models opening up,” Farbman reflected. “I’m more optimistic now than I was half a year ago.”
The open-source strategy also helps accelerate research and improvement. “The main rationality for open-sourcing it is to reduce the cost of your R&D,” Farbman explained. “There are a ton of people in academia that use the model, write papers, and you’re starting to become this curator that understands where the real gold is.”
How Getty and Shutterstock partnerships help solve AI’s copyright challenges
As legal challenges mount against AI companies using scraped training data, Lightricks has secured partnerships with Getty Images and Shutterstock to access licensed content for model training.
“Collecting data for training AI models is still a legal gray area,” Farbman acknowledged. “We have big customers in our enterprise segment that care about this kind of stuff, so we need to make sure we can provide clean models for them.”
These partnerships allow Lightricks to offer a model with reduced legal risk for commercial applications, potentially giving it an advantage in enterprise markets concerned about copyright issues.
The strategic gamble: Why Lightricks offers its advanced AI model free to startups
In an unusual move for the AI industry, Lightricks is offering LTXV-13B free to license for enterprises with under $10 million in annual revenue. This approach aims to build a community of developers and companies who can demonstrate the model’s value before monetization.
“The thinking was that academia is off the hook. These guys can do whatever they want with the model,” Farbman said. “With startups and industry, you want to create win-win situations. I don’t think you can make a ton of money from a community of artists playing with AI stuff.”
For larger companies that find success with the model, Lightricks plans to negotiate licensing agreements similar to how game engines charge successful developers. “Once they hit ten million in revenue, we’re going to come to talk with them about licensing,” Farbman explained.
Despite the advances represented by LTXV-13B, Farbman acknowledges that AI video generation still has limitations. “If we’re honest with ourselves and look at the top models, we’re still far away from Hollywood movies. They’re not there yet,” he said.
However, he sees immediate practical applications in areas like animation, where creative professionals can use AI to handle time-consuming aspects of production. “When you think about production costs of high-end animation, the real creative work, people thinking about key frames and the story, is a small percent of the budget. But key framing is a big resource thing,” Farbman noted.
Looking ahead, Farbman predicts the next frontier will be multimodal video models that integrate different media types in a shared latent space. “It’s going to be music, audio, video, etc. And then things like doing good lip sync will be easier. All these things will disappear. You’re going to have this multimodal model that knows how to operate across all these different modalities.”
LTXV-13B is available now as an open-source release and is being integrated into Lightricks’ creative apps, including its flagship storytelling platform, LTX Studio.
