Sakana AI Announces AI CUDA Engineer That Can Speed Up Model Development and Deployment

Sakana AI, a Tokyo-based artificial intelligence (AI) firm, has introduced a new agentic AI framework that can improve the development and deployment speeds of large language models (LLMs). On Thursday, the company unveiled the AI CUDA Engineer, which improves both the pre-training and inference speeds of an AI model by optimising its codebase. The firm highlighted that the entire process is driven by AI agents and is automated end-to-end. Notably, Sakana AI last year introduced The AI Scientist, an agent that can conduct scientific research.

Sakana AI Unveils AI CUDA Engineer

In a post, the Japanese AI firm stated that after developing AI systems that can create new models and fully automate the AI research process, it began working on ways to speed up LLM deployment and inference.

The company said that the research led to the development of the AI CUDA Engineer. It is a fully automated, comprehensive agent framework for CUDA (Compute Unified Device Architecture) kernel discovery and optimisation.

CUDA kernels can be understood as specialised functions that run on Nvidia GPUs and allow code to execute in parallel across many threads. This parallelism makes them considerably faster than sequential execution for computational tasks, especially those involving large datasets. As such, hand-optimised CUDA kernels are considered an effective way to speed up AI model deployment and inference.
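As a rough illustration, a CUDA kernel can be pictured as a function whose body computes one output element, with the GPU launching one thread per element so that all invocations run concurrently. The sketch below models that idea in plain Python; the sequential loop in `launch` merely stands in for the GPU's parallel thread launch:

```python
def add_kernel(i, a, b, out):
    """Kernel body: computes a single output element.

    On a GPU, one thread would execute this for each index i,
    and all threads would run concurrently.
    """
    out[i] = a[i] + b[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA kernel launch: on real hardware the n
    # invocations run in parallel; here we loop sequentially.
    for i in range(n):
        kernel(i, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)
launch(add_kernel, len(a), a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

Because each element is computed independently, a GPU can assign thousands of such threads to run at once, which is where the speedup over sequential code comes from.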

Sakana AI said the AI CUDA Engineer can automatically convert PyTorch modules into optimised CUDA kernels, significantly speeding up deployment. The company claims the generated kernels are 10-100 times faster than their PyTorch counterparts.
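The article does not specify which optimisations produce those speedups, but kernel fusion, which merges several element-wise operations into a single pass over the data, is one common source of such gains when hand-writing CUDA replacements for PyTorch ops. A hypothetical pure-Python sketch of the idea (the function names and the toy `scale + bias + ReLU` pipeline are illustrative, not from the paper):

```python
def scale_add_relu_unfused(x, w, b):
    # Three separate passes over the data, as three separate
    # kernels (and two intermediate buffers) would require.
    t1 = [xi * w for xi in x]           # kernel 1: scale
    t2 = [ti + b for ti in t1]          # kernel 2: add bias
    return [max(ti, 0.0) for ti in t2]  # kernel 3: ReLU

def scale_add_relu_fused(x, w, b):
    # One fused pass: each element is read and written exactly once,
    # eliminating intermediate buffers and extra memory traffic.
    return [max(xi * w + b, 0.0) for xi in x]

x = [-2.0, -0.5, 1.0, 3.0]
print(scale_add_relu_unfused(x, 2.0, 1.0))  # [0.0, 0.0, 3.0, 7.0]
print(scale_add_relu_fused(x, 2.0, 1.0))    # same result, one pass
```

On a GPU, where memory bandwidth is usually the bottleneck for element-wise work, collapsing three passes into one can account for a large share of a kernel's speedup.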

The process involves four steps. First, the agent framework converts the PyTorch code into working CUDA kernels. Next, it applies optimisation techniques to ensure only the best-performing kernels are retained. Then, kernel crossover prompts combine multiple optimised kernels into new candidates. Finally, the agent preserves the highest-performing CUDA kernels in an archive, which is used to deliver further performance improvements. The company has also published a paper that details the process.
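As described, the loop resembles an evolutionary search: generate candidates, keep the fastest, recombine them, and archive the winners. The schematic below sketches that shape; the kernel representation and the `benchmark` scoring function are hypothetical stand-ins for compiling and timing real CUDA kernels, not Sakana AI's actual implementation:

```python
import random

random.seed(0)

def benchmark(kernel):
    # Hypothetical stand-in for compiling a candidate kernel and
    # measuring its speedup over the PyTorch baseline.
    return sum(kernel)

def crossover(k1, k2):
    # Step 3: combine two optimised kernels into a new candidate.
    cut = len(k1) // 2
    return k1[:cut] + k2[cut:]

def mutate(kernel):
    # Randomly tweak one "slot" of the kernel representation.
    i = random.randrange(len(kernel))
    return kernel[:i] + [kernel[i] + random.random()] + kernel[i + 1:]

# Step 1: start from working translations of the PyTorch code.
population = [[random.random() for _ in range(4)] for _ in range(8)]
archive = []  # step 4: high-performing kernels are preserved here

for generation in range(20):
    # Step 2: keep only the best-scoring kernels.
    population.sort(key=benchmark, reverse=True)
    survivors = population[:4]
    archive.append(survivors[0])
    # Step 3: crossover plus mutation produces the next generation.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(4)]
    population = survivors + children

best = max(archive, key=benchmark)
print(round(benchmark(best), 3))
```

The archive matters because later generations can only improve on earlier ones if the best candidates found so far are never lost.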

Alongside the paper, Sakana AI has also released the AI CUDA Engineer Archive, a dataset of more than 30,000 kernels generated by the AI. The kernels are released under the CC BY 4.0 license and can be accessed via Hugging Face.

The Japanese firm has also launched a website that lets visitors interactively explore 17,000 verified kernels and their performance profiles. The website allows users to browse these kernels across 230 tasks and to compare CUDA kernels across individual experiments.
