MediaTek Announces Optimisation of Microsoft’s Phi-3.5 AI Models on Dimensity Chipsets

MediaTek announced on Monday that it has optimised several of its mobile platforms for Microsoft's Phi-3.5 artificial intelligence (AI) models. The Phi-3.5 series of small language models (SLMs), comprising Phi-3.5 Mixture of Experts (MoE), Phi-3.5 Mini, and Phi-3.5 Vision, was released in August, with the open-source models made available on Hugging Face. Rather than typical conversational models, these are instruct models, which require users to provide specific instructions to get the desired output.

MediaTek Optimises Dimensity Chipsets for Phi-3.5 SLMs

In a blog post, MediaTek announced that its Dimensity 9400, Dimensity 9300, and Dimensity 8300 chipsets are now optimised for the Phi-3.5 AI models. As a result, these mobile platforms can efficiently run inference for on-device generative AI tasks using MediaTek's neural processing units (NPUs).

Optimising a chipset for a specific AI model involves tailoring the hardware design, architecture, and operation of the chipset to efficiently support that model's compute requirements, memory access patterns, and data flow. Once optimised, the model runs with lower latency, lower power consumption, and higher throughput.
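The latency and throughput gains described above can be quantified with a simple timing harness. The sketch below times an inference callable and reports mean latency and requests per second; the model call is a stand-in (a short sleep), since the real NPU-accelerated call depends on the vendor SDK.

```python
# Minimal latency/throughput measurement sketch. `fake_infer` is a
# placeholder for a real on-device inference call.
import time

def benchmark(infer, prompt, runs=10):
    """Run `infer(prompt)` repeatedly; return mean latency and throughput."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(prompt)
        latencies.append(time.perf_counter() - start)
    mean = sum(latencies) / len(latencies)
    return {"mean_latency_s": mean, "throughput_rps": 1.0 / mean}

def fake_infer(prompt):
    time.sleep(0.01)  # stand-in for model execution

stats = benchmark(fake_infer, "Summarise this text.")
print(f"mean latency: {stats['mean_latency_s']*1000:.1f} ms, "
      f"throughput: {stats['throughput_rps']:.1f} req/s")
```

For generative models, throughput is more commonly reported as tokens per second, but the measurement approach is the same.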

MediaTek highlighted that its processors are optimised not only for Microsoft's Phi-3.5 MoE but also for Phi-3.5 Mini, which offers multilingual support, and Phi-3.5 Vision, which comes with multi-frame image understanding and reasoning.

Notably, the Phi-3.5 MoE has 16×3.8 billion parameters, but only 6.6 billion of them are active when using two experts, the typical use case. Meanwhile, Phi-3.5 Vision features 4.2 billion parameters alongside an image encoder, and Phi-3.5 Mini has 3.8 billion parameters.
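The parameter bookkeeping above can be made concrete with a quick calculation. Note that the 6.6-billion active-parameter figure is Microsoft's reported number (shared weights plus two routed experts), not simply two times the per-expert size.

```python
# Mixture-of-Experts parameter bookkeeping, using the figures from the
# article. The active-parameter count is Microsoft's reported value.
EXPERTS = 16
PARAMS_PER_EXPERT = 3.8e9            # each expert is Phi-3.5 Mini sized
TOTAL = EXPERTS * PARAMS_PER_EXPERT  # 60.8 billion total parameters
ACTIVE_REPORTED = 6.6e9              # with 2 of 16 experts routed per token

print(f"total: {TOTAL/1e9:.1f}B parameters, "
      f"active: {ACTIVE_REPORTED/1e9:.1f}B "
      f"({ACTIVE_REPORTED/TOTAL:.0%} of weights used per token)")
```

This sparsity is the point of the MoE design: the model stores far more knowledge than it pays for at inference time, which is what makes a 60.8-billion-parameter model plausible on a phone-class NPU.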


Coming to performance, Microsoft claimed that Phi-3.5 MoE outperformed both the Gemini 1.5 Flash and GPT-4o mini AI models on the SQuALITY benchmark, which tests readability and accuracy when summarising a block of text.

While developers can access Microsoft Phi-3.5 directly via Hugging Face or the Azure AI Model Catalogue, MediaTek's NeuroPilot SDK toolkit also offers access to these SLMs. The chipmaker stated that the latter will enable developers to build optimised on-device applications capable of generative AI inference using these models across the above-mentioned mobile platforms.
