Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely

Google on Thursday introduced a new tool that shares its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company but also for other enterprises building large language models (LLMs). Now, it has introduced the SAIF tool, which generates a checklist with actionable insights to improve the safety of an AI model. Notably, it is a questionnaire-based tool: developers and enterprises answer a series of questions before receiving the checklist.

Google Introduces SAIF Tool for Enterprises and Developers

In a blog post, the Mountain View-based tech giant highlighted that it has rolled out a new tool to help others in the AI industry learn from Google's best practices in deploying AI models. Large language models can cause a wide range of harm, from generating inappropriate and indecent text, deepfakes, and misinformation to producing harmful information about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even when an AI model is reasonably secure, there is a risk that bad actors can jailbreak it into responding to commands it was not designed to handle. With such high stakes, developers and AI firms must take adequate precautions to ensure their models are both safe for users and secure. The questionnaire covers topics such as training, tuning, and evaluation of models; access controls for models and datasets; prevention of attacks and harmful inputs; and generative AI-powered agents.
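Google's announcement does not include sample code, but the kind of input hardening the questionnaire asks about can be illustrated with a short sketch. The pattern list and the screen_prompt function below are invented for illustration and are not part of SAIF; production systems typically rely on trained classifiers rather than simple pattern matching.

```python
import re

# Hypothetical illustration: a naive pre-filter that flags prompts
# containing common jailbreak phrasings before they reach an LLM.
# Real deployments use trained classifiers, not regex lists.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (your|the) system prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

if __name__ == "__main__":
    print(screen_prompt("Summarise this article for me"))         # True
    print(screen_prompt("Ignore previous instructions and ..."))  # False
```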

Google's SAIF tool uses a questionnaire-based format. Developers and enterprises are required to answer questions such as, "Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data?" After completing the questionnaire, users receive a customised checklist they can follow to fill the gaps in securing their AI model.
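To make that flow concrete, here is a minimal sketch of how a set of questionnaire answers might be mapped to a remediation checklist. The question identifiers, answers, and recommendation texts are hypothetical; SAIF's actual questions and output may differ.

```python
# Hypothetical sketch of a questionnaire-to-checklist flow.
# Question IDs and recommendation texts are invented for illustration.
RECOMMENDATIONS = {
    "detect_data_tampering": "Add integrity checks (e.g. hashes) for "
                             "training, tuning, and evaluation data.",
    "restrict_model_access": "Enforce role-based access controls on "
                             "model weights and datasets.",
    "filter_harmful_inputs": "Deploy input screening to block prompt "
                             "injection and harmful requests.",
}

def build_checklist(answers: dict[str, bool]) -> list[str]:
    """Return a recommendation for every control the user answered 'no' to."""
    return [RECOMMENDATIONS[q] for q, done in answers.items() if not done]

answers = {
    "detect_data_tampering": False,
    "restrict_model_access": True,
    "filter_harmful_inputs": False,
}
for item in build_checklist(answers):
    print("- " + item)
```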

The tool addresses risks such as data poisoning, prompt injection, and model source tampering. Each of these risks is identified in the questionnaire, and the tool offers a specific mitigation for each.
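As one concrete example of the data-poisoning and model-source-tampering checks the questionnaire probes, the sketch below compares SHA-256 digests of dataset and model files against a manifest recorded at training time. The manifest format and file paths are assumptions made for illustration, not part of Google's tool.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical illustration: verify dataset and model artifacts against
# a trusted manifest of SHA-256 digests recorded at training time.
def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> list[str]:
    """Return the paths whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())  # {"path": "hex digest"}
    return [
        p for p, expected in manifest.items()
        if sha256_of(Path(p)) != expected
    ]

if __name__ == "__main__":
    tampered = verify_artifacts(Path("artifact_manifest.json"))
    if tampered:
        print("Possible tampering detected:", tampered)
```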

Alongside the tool, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly develop AI security solutions across three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.
