Alibaba Releases Open-Source Wan 2.1 Suite of AI Video Generation Models, Claimed to Outperform OpenAI’s Sora

Alibaba released a suite of artificial intelligence (AI) video generation models on Wednesday. Dubbed Wan 2.1, these are open-source models that can be used for both academic and commercial purposes. The Chinese e-commerce giant released the models in several variants with different parameter counts. Developed by the company's Wan team, the models were first introduced in January, and the company claims they can generate highly realistic videos. They are currently hosted on the AI and machine learning (ML) hub Hugging Face.

Alibaba Introduces Wan 2.1 Video Generation Models

The new video AI models are hosted on the Wan team's Hugging Face page, where the model pages detail the full Wan 2.1 suite. There are four models in total: T2V-1.3B, T2V-14B, I2V-14B-720P, and I2V-14B-480P, where T2V is short for text-to-video and I2V stands for image-to-video.

The researchers claim that the smallest variant, Wan 2.1 T2V-1.3B, can run on a consumer-grade GPU with as little as 8.19GB of VRAM. According to the team, the model can generate a five-second 480p video on an Nvidia RTX 4090 in about four minutes.
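
As a practical illustration, the snippet below sketches how such a run might look using the Hugging Face Diffusers integration. The pipeline class, checkpoint name, prompt, and generation parameters are assumptions drawn from the Diffusers Wan support rather than official Alibaba code, so treat it as a starting point, not a definitive recipe.

```python
# Minimal sketch: text-to-video with the 1.3B Wan model via Diffusers.
# Assumes diffusers >= 0.33 with Wan support and the checkpoint name below.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"  # assumed Hugging Face checkpoint
# The VAE is kept in float32 for numerical stability; the rest runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keeps peak VRAM low on consumer GPUs

video = pipe(
    prompt="A cat walking through a snowy forest, cinematic lighting",
    height=480,
    width=832,          # 480p output, matching the RTX 4090 benchmark above
    num_frames=81,      # roughly five seconds of footage
    guidance_scale=5.0,
).frames[0]
export_to_video(video, "wan_t2v_sample.mp4", fps=15)
```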

While the Wan 2.1 suite is aimed at video generation, the models can also perform other tasks such as image generation, video-to-audio generation, and video editing. However, the currently open-sourced checkpoints do not support these advanced capabilities. For video generation, the suite accepts text prompts in both Chinese and English, as well as image inputs.
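
For the image-to-video variants, usage looks similar. The sketch below assumes the Diffusers WanImageToVideoPipeline class and the I2V-14B-480P checkpoint name, and illustrates that prompts may be supplied in Chinese as well as English; the file names and parameters are hypothetical.

```python
# Minimal sketch: image-to-video with an I2V checkpoint via Diffusers.
# Class and checkpoint names are assumptions based on the Diffusers Wan support.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = load_image("reference_frame.png")  # hypothetical local input image
video = pipe(
    image=image,
    prompt="镜头缓缓推进，人物微笑",  # Chinese prompts are accepted alongside English
    height=480,
    width=832,
    num_frames=81,
).frames[0]
export_to_video(video, "wan_i2v_sample.mp4", fps=15)
```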

On the architecture front, the researchers revealed that the Wan 2.1 models are built on a diffusion transformer. The company augmented this base architecture with new variational autoencoders (VAEs), training strategies, and more.


Most notably, the models use a new 3D causal VAE architecture dubbed Wan-VAE, which improves spatiotemporal compression and reduces memory usage. The autoencoder can encode and decode 1080p videos of arbitrary length without losing historical temporal information, enabling temporally consistent video generation.
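
To make the "causal" part concrete, the toy module below shows the standard trick behind causal 3D convolutions: temporal padding is applied only on the past side, so each output frame depends solely on the current and earlier frames, which is what allows an encoder to process arbitrarily long clips in a streaming fashion. This is an illustrative sketch of the general technique, not the actual Wan-VAE implementation.

```python
# Toy causal 3D convolution: frame t never attends to frames later than t.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=(3, 3, 3)):
        super().__init__()
        self.kt = kernel[0]
        # Spatial padding stays symmetric; temporal padding is applied manually.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel,
                              padding=(0, kernel[1] // 2, kernel[2] // 2))

    def forward(self, x):  # x: (batch, channels, time, height, width)
        # Pad (kt - 1) zero frames before the clip and none after it,
        # so every output frame depends only on past and current frames.
        x = F.pad(x, (0, 0, 0, 0, self.kt - 1, 0))
        return self.conv(x)

frames = torch.randn(1, 3, 17, 64, 64)       # a short 17-frame RGB clip
print(CausalConv3d(3, 8)(frames).shape)      # torch.Size([1, 8, 17, 64, 64])
```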

Based on internal testing, the company claimed that the Wan 2.1 models outperform OpenAI's Sora in consistency, scene generation quality, single-object accuracy, and spatial positioning.

These models are available under the Apache 2.0 licence, a permissive licence that allows both academic and commercial usage, subject to standard conditions such as retaining the licence text and attribution notices.
