Adobe Teases Firefly Video Model With AI-Powered Video Generation

Adobe Firefly Video Model, the upcoming artificial intelligence (AI) model capable of video generation, was previewed by the company on Wednesday. The software giant first announced the under-development video model in April and has now shared more details about it. The generative AI model will be able to produce videos from text prompts as well as image inputs, and users will be able to specify camera angles, styles, and effects. The company also stated that the video model will be available in beta later this year.

Adobe Firefly Video Model Previewed

In a newsroom post, the company detailed the capabilities of the new AI video model. A YouTube video was also shared to showcase its features. Once launched, the Firefly Video Model will join Adobe’s existing generative models, including the Image Model, Vector Model, and Design Model.

Based on the YouTube video, it appears the Adobe Firefly Video Model can generate videos from both text and image-based inputs. This means users will be able to write a detailed prompt or share an image as the reference for the output video.

Users will also be able to make complex requests involving multiple camera angles, lighting conditions, styles, zooms, and motion, the company claimed. Notably, the AI-generated videos shared by the company appeared to be on par with the footage OpenAI has teased for Sora.

Additionally, the company demonstrated the Generative Extend feature, which was first revealed (but not showcased) in April. The feature allows users to extend the duration of a shot by adding extra frames, which are generated by AI using the preceding and following frames as reference. This gives editors the option to lengthen a clip or let a camera pan linger on a shot a couple of seconds longer.

Citing Alexandru Costin, VP of generative AI at Adobe, The Verge reports that the maximum length of the AI-generated videos has been capped at five seconds, which is on par with similar tools available in the market. Notably, while the company said the Firefly Video Model will be available as a standalone app, it will also be integrated into Creative Cloud, Experience Cloud, and Adobe Express workflows.

Further, the company claims that the AI video model is “commercially safe” and has only been trained on licensed content, data available in the public domain, and content from Adobe Stock. The software giant also highlighted that the AI model will not be trained on user data.
