Liquid AI Announces Generative AI Liquid Foundation Models With Smaller Memory Footprint

Date:

Liquid AI, a Massachusetts-based artificial intelligence (AI) startup, has announced its first generative AI models, which are not built on the existing transformer architecture. Dubbed Liquid Foundation Models (LFMs), the new architecture moves away from the Generative Pre-trained Transformer (GPT) design that underpins popular AI models such as OpenAI's GPT series, Gemini, Copilot, and more. The startup claims the new AI models were built from first principles and that they outperform large language models (LLMs) of comparable size.

Liquid AI’s New Liquid Foundation Models

The startup was co-founded in 2023 by researchers at the Massachusetts Institute of Technology (MIT)'s Computer Science and Artificial Intelligence Laboratory (CSAIL), with the aim of building a new architecture for AI models that can match or surpass GPT-based models.

These new LFMs are available in three parameter sizes: 1.3B, 3.1B, and 40.3B. The largest is a Mixture of Experts (MoE) model, meaning it is made up of several smaller language models and is aimed at tackling more complex tasks. The LFMs are now available on the company's Liquid Playground, Lambda (for both the chat UI and the API), and Perplexity Labs, and will soon be added to Cerebras Inference. Further, the AI models are being optimised for Nvidia, AMD, Qualcomm, Cerebras, and Apple hardware, the company stated.
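Mixture of Experts is a general technique rather than anything specific to Liquid AI: a gating function routes each input to a small subset of expert sub-models and blends their outputs, so only a fraction of the total parameters run per input. The toy Python sketch below illustrates the routing idea; all names, scores, and expert functions are illustrative assumptions, not Liquid AI's implementation.

```python
# Toy Mixture of Experts: each "expert" is a simple function standing in
# for a smaller model; a gate picks the top-k experts per input.
# Everything here is illustrative, not Liquid AI's actual design.

def make_expert(weight):
    # Each expert applies a different transformation to the input.
    return lambda x: weight * x

experts = [make_expert(w) for w in (1.0, 2.0, 3.0, 4.0)]

def gate_scores(x, num_experts):
    # A real gate is a learned network; here we fake deterministic scores.
    return [(x * (i + 1)) % 7 for i in range(num_experts)]

def moe_forward(x, top_k=2):
    scores = gate_scores(x, len(experts))
    # Route the input only to the top-k scoring experts.
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    total = sum(scores[i] for i in top)
    # Combine the chosen experts' outputs, weighted by normalised gate scores.
    return sum(experts[i](x) * (scores[i] / total) for i in top)
```

Because only the top-k experts run for a given input, an MoE model activates far fewer parameters per token than a dense model of the same total size, which is why the design is favoured for tackling complex tasks at lower inference cost.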

LFMs also differ significantly from GPT technology. The company highlighted that these models were built from first principles, a problem-solving approach in which a complex technology is broken down to its fundamentals and then built back up from there.

According to the startup, the new AI models are built on so-called computational units. Put simply, this is a redesign of the token system; the company instead uses the term Liquid system. These units contain condensed information, with a focus on maximising knowledge capacity and reasoning. The startup claims this new design reduces memory costs during inference and increases performance across video, audio, text, time series, and signal data.

The company further claims that the advantage of the Liquid-based AI models is that their architecture can be automatically optimised for a specific platform based on its requirements and inference cache size.

While the startup's claims are bold, the models' performance and efficiency can only be gauged once developers and enterprises begin using them in their AI workflows. The startup did not reveal the source of its training datasets or any safety measures added to the AI models.
