Mistral Announces Pixtral 12B Multimodal AI Model With ‘Computer Vision’ Feature

Date:

Mistral released its first multimodal artificial intelligence (AI) model, dubbed Pixtral 12B, on Wednesday. The AI firm, known for its open-source large language models (LLMs), has made the model available on GitHub and Hugging Face for users to download and test. Notably, despite being multimodal, Pixtral can only process images, using computer vision to answer queries about them; dedicated vision components have been added for this functionality. Unlike image generators such as Stable Diffusion or Midjourney, it cannot produce images of its own.

Mistral Releases Pixtral 12B

In keeping with its reputation for minimalist announcements, Mistral released the model via its official account on X (formerly known as Twitter) in a post sharing only a magnet link. The total file size of Pixtral 12B is 24GB, and running it locally will require an NPU-enabled PC or one with a powerful GPU.

Pixtral 12B has 12 billion parameters and is built on the company's existing Nemo 12B text model. Mistral notes that the model pairs a vision adapter using the Gaussian Error Linear Unit (GeLU) activation with a vision encoder that uses 2D Rotary Position Embedding (RoPE).

Notably, users can supply image files or URLs to Pixtral 12B, and it should be able to answer queries about the image, such as identifying objects, counting them, and sharing additional information. Since it is built on Nemo, the model should also handle typical text-based tasks.

A Reddit user posted an image of Pixtral 12B's benchmark scores, and the model appears to outperform Claude-3 Haiku and Phi-3 Vision on the ChartQA benchmark for multimodal capabilities. It also outperforms both rival models on the MMMU (Massive Multi-discipline Multimodal Understanding) benchmark for multimodal knowledge and reasoning.

Citing a company spokesperson, TechCrunch reports that the model can be fine-tuned and used under an Apache 2.0 licence, meaning its outputs can be used for personal or commercial purposes without restriction. Additionally, Sophia Yang, Mistral's Head of Developer Relations, clarified in a post that Pixtral 12B will soon be available on Le Chat and La Plateforme.

For now, users can download the model directly via the magnet link provided by the company. Alternatively, the model weights are also hosted on Hugging Face and GitHub.
