Meta Reportedly Partnering With Arm to Bring Advanced AI Capabilities to Smartphones

Meta Connect 2024, the company’s developer conference, took place on Wednesday. During the event, the social media giant unveiled several new artificial intelligence (AI) features and wearable devices. Apart from that, Meta also reportedly announced a partnership with the chip designer Arm to build small language models (SLMs). These AI models are said to be intended to power smartphones and other devices and to introduce new ways of using them. The idea is to offer on-device and edge computing options that keep AI inference fast.

Meta and Arm Partner to Build AI Models

According to a CNET report, Meta and Arm are planning to build AI models that can carry out more advanced tasks on devices. For instance, the AI could act as the device’s virtual assistant, placing a call or taking a photo on its own. This is not far-fetched: AI tools can already perform a plethora of tasks, such as editing images and drafting emails.

The main difference, however, is that users currently have to interact with an interface or type specific commands to get AI to perform these tasks. At the Meta event, the two companies reportedly said they want to do away with this and make AI models more intuitive and responsive.

One way to do this would be to run the AI models on-device or to keep the servers very close to the devices. The latter approach, known as edge computing, is already used by research institutions and large enterprises. Ragavan Srinivasan, vice president of product management for generative AI at Meta, told the publication that developing these new AI models is a good way to tap into this opportunity.


For this, the AI models will have to be smaller. While Meta has developed large language models (LLMs) with as many as 90 billion parameters, these are not suitable for smaller devices or for fast on-device processing. The Llama 3.2 1B and 3B models are believed to be better suited to this role.
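Neither company has detailed the software stack behind this effort, but as a rough illustration of what on-device inference with a model of this size looks like today, the sketch below loads the publicly released Llama 3.2 1B Instruct weights through the Hugging Face transformers library. The library choice, model ID and prompt are assumptions made for illustration only and are not part of the reported Meta-Arm work.

```python
# Minimal sketch: running a small (1B-parameter) Llama model locally,
# assuming the Hugging Face "transformers" library and access to the
# gated meta-llama/Llama-3.2-1B-Instruct weights. Not Meta's or Arm's stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"

device = "cuda" if torch.cuda.is_available() else "cpu"
# Half precision on a GPU keeps the memory footprint device-friendly;
# fall back to float32 on CPU for compatibility.
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)

# A chat-style prompt, similar to asking an on-device assistant to draft an email.
messages = [{"role": "user", "content": "Draft a two-line email declining a meeting."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(inputs, max_new_tokens=80)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```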

Another issue, however, is that the AI models will also need capabilities beyond simple text generation and computer vision, and this is where Arm comes in. As per the report, Meta is working closely with the chip designer to develop processor-optimised AI models that can adapt to the workflows of devices such as smartphones, tablets and even laptops. No other details about the SLMs have been shared so far.
