Lightricks, a pioneer in AI-powered visual content technology, has released an upgraded open source model that rivals the latest advances from large tech companies such as OpenAI, Meta, and Google, setting new standards for fast, high-fidelity video generation.
The LTXV-13B is the successor to LTX Video, Lightricks' original and highly efficient video generator, which was compact enough to produce high-quality videos at record speed on consumer-grade hardware.
The Lightricks leadership team, which unveiled the new version this week, considers the LTXV-13B a significant upgrade. With 13 billion parameters, it is a far more refined model than its two-billion-parameter predecessor. It also boasts cutting-edge features not found in LTXV 0.9.6, including new multi-scale rendering capabilities that reduce latency while giving users finer control over output detail.
“Our users can now create content with more consistency, better quality, and tighter control,” Lightricks CEO Zeev Farbman said in a statement. “This new version of LTX Video runs on consumer hardware without compromising the speed, creativity, and usability our users rely on.”
Commitment to open source
With the release of the LTXV-13B, Lightricks is doubling down on its belief that an open source approach is the best way to accelerate innovation in the AI community. When the company first launched the original LTXV model, it wanted to make the model as accessible as possible, encouraging AI enthusiasts and academics to experiment with it.
Lightricks’ leadership believes this strategy is necessary to advance an industry whose progress is driven by small startups and individual developers contributing to codebases and tinkering with AI models to build innovative integrations. However, the ecosystem can only benefit from this when there is open access to the most sophisticated models.
Models such as OpenAI’s Sora and Adobe’s Firefly are considered to be at the cutting edge of video generation, but they are locked behind paywalled APIs. This creates a major barrier to entry for newcomers, and the proprietary licensing of Big Tech’s models makes it impossible to build on them.
“The best models on the market today are closed,” Farbman told Calcalist in November. “This creates problems that go beyond cost. For example, gaming companies want to create simple graphics and experiment with these models’ visual styles, but closed models don’t allow that.”
Like its predecessor, the LTXV-13B is open source, and anyone can experiment with it via Hugging Face. Users can customize it to their needs, fine-tune it, build on it, add new features, and enhance it with their own training data. As always, Lightricks wants to see what the community can do with it.
“We set out on an adventure to distribute open models so that academia and industry can use them, add capabilities, and develop new features. This will make the competition more intense,” Farbman said.
Community contributions and ethical training
According to the release announcement, the LTXV-13B benefits from major contributions by the open source community, which enhanced aspects such as creative adaptability, motion consistency, and scene coherence, improving the overall quality of its output. The company highlights new upscaling controls for video editing that help improve frame granularity and reduce noise. Another important advance is support for VACE model inference, which simplifies tasks such as video-to-video editing and reference-to-video generation.
Furthermore, despite being much larger than the original LTX Video model, the LTXV-13B can run efficiently on consumer hardware, thanks to open source contributions that improved its scalability and optimized inference using Q8 kernels and Diffusers.
Users can also be reassured by the “ethical” nature of the LTXV-13B, which was trained on data licensed from the stock photo and video companies Getty Images and Shutterstock. This stands in stark contrast to the questionable practices of tech giants like OpenAI, which has attracted considerable controversy by training on content scraped from the internet’s top publishers, raising concerns about copyright infringement.
Additionally, the high quality of Getty’s and Shutterstock’s visual assets has led to a dramatic improvement in the overall caliber of the LTXV-13B’s output, says Lightricks.
The model’s performance has been further enhanced with new multi-scale rendering capabilities. This approach gives creators more control over the details of the videos they generate, and renders up to 30 times faster than models of similar size.
Competition for AI video models intensifies
The launch of the LTXV-13B, coming less than three weeks after Lightricks announced a major upgrade to the original model with LTXV 0.9.6, highlights the rapid pace of development in AI video generation.
That release received a warm reception from users, who pointed to faster generation, better prompt adherence, and greater consistency, among other benefits. The default resolution of its output increased significantly to 1216 x 704 pixels at 30 frames per second, resulting in more fluid video. Lightricks also provided a “distilled” version of the model, increasing its ability to produce high-quality output on low-power hardware.
Without a doubt, the AI community is keen to see how the latest improvements in the LTXV-13B stack up against the rest of the industry. The AI video arms race has intensified dramatically in recent months, with dozens of competitors making great strides of their own. In March, Runway launched its Gen-4 model, touting gains in visual fidelity and consistency.
Powerhouses such as OpenAI, Google, and Adobe have attracted a lot of attention with their latest proprietary video models, namely Sora, Veo 2, and Firefly. Meanwhile, Alibaba Cloud threw its hat into the open source ring in February, releasing a series of models based on its Wan 2.1 family in 14B and 1.3B parameter sizes.
Lightricks also faces competition on ethical AI video generation from a startup called Moonvalley, which launched its first video generation model, Marey, in March, trained only on “clean” data that the company has fully licensed or owns.
One of the advantages of the LTXV-13B is its integration with Lightricks’ LTX Studio platform. Users can download the model themselves, but it is much easier to access it through the web app used by professional and enthusiast creators.
LTX Studio gives users additional control over the editing process with features like camera motion control, keyframe editing, and multi-shot sequencing, making it easy to refine the model’s output. The platform also provides access to third-party models such as Veo 2 and Flux, allowing for further experimentation.