Microsoft Introduces Safety Tools To Azure AI Studio

Microsoft has introduced new safety, privacy, and security capabilities to help enterprises use generative AI applications more safely.

The new Azure AI Studio safety tools are designed to keep AI-generated content safe and to reduce the risk of prompt-based attacks.

Note that large language models (LLMs) without appropriate safeguards can be remarkably vulnerable to attacks that lead to privacy exposure, putting applications built on them, and the enterprises using them, at risk of hacking and copyright infringement lawsuits.


Azure AI Studio Safety Tools and Their Unique Features

The tech giant on Sept. 24 introduced new capabilities, including evaluations in Azure AI Studio, correction, embedded content safety, and confidential inferencing, to mitigate these exposure risks.


The correction feature is available in preview as part of Azure AI Studio, a suite of safety tools designed to detect vulnerabilities, find “hallucinations,” and block malicious prompts.

Once enabled, the correction system will scan and identify inaccuracies in AI output by comparing it with a customer’s source material.

It will then highlight the mistake, provide information about why it’s incorrect, and rewrite the content in question — all “before the user can see” the inaccuracy. 
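As a rough illustration of how this looks from a developer’s side, the correction capability surfaces through Azure AI Content Safety’s groundedness detection API. The Python sketch below is illustrative only: the endpoint path, preview api-version string, and response field names are assumptions based on the preview service and may differ from the current release.

```python
import os
import requests

# Hypothetical configuration: point these at your own Content Safety resource.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def check_and_correct(answer: str, sources: list[str]) -> dict:
    """Ask groundedness detection to flag ungrounded claims in `answer`
    against the customer's source material and, with correction enabled,
    return a rewritten version of the offending text."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-09-15-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "domain": "Generic",
            "task": "Summarization",
            "text": answer,               # the model output to verify
            "groundingSources": sources,  # the customer's source material
            "correction": True,           # request a corrected rewrite
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Expected (assumed) fields: ungroundedDetected, ungroundedDetails, correctionText.
    return resp.json()
```

The key design point is that the check runs between the model and the user, so a flagged sentence can be replaced with the corrected text before it is displayed.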

Microsoft’s tool for probing vulnerabilities, Azure AI Evaluate, can be accessed via either the Azure AI Studio interface or the Azure AI Evaluation SDK.

Azure AI Evaluate enables enterprise users to simulate indirect prompt injection attacks on their generative AI model or application and to measure how often it fails to detect and deflect attacks in categories such as manipulated content, intrusion, and information gathering, Minsoo Thigpen, senior product manager at Microsoft’s Azure AI division, wrote in a blog post.
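For illustration, a simulation run with the preview Azure AI Evaluation SDK might look roughly like the Python sketch below. The IndirectAttackSimulator class name, callback shape, and parameters are assumptions drawn from the preview SDK and could change between releases; my_app is a hypothetical stand-in for the application under test.

```python
import asyncio
from azure.identity import DefaultAzureCredential
# Module and class names are from the preview azure-ai-evaluation SDK (assumed).
from azure.ai.evaluation.simulator import IndirectAttackSimulator

# Hypothetical values identifying your Azure AI Studio project.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

def my_app(prompt: str) -> str:
    # Placeholder for the generative AI application under test.
    return "model reply"

async def app_target(messages, stream=False, session_state=None, context=None):
    # Callback the simulator drives: pass the attack-seeded user turn to the
    # app and return its reply in the expected chat format.
    user_turn = messages["messages"][-1]["content"]
    messages["messages"].append({"role": "assistant", "content": my_app(user_turn)})
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

async def main():
    simulator = IndirectAttackSimulator(
        azure_ai_project=azure_ai_project, credential=DefaultAzureCredential()
    )
    # Generate conversations seeded with indirect prompt injection attacks,
    # then review how often the app detected or deflected them.
    outputs = await simulator(target=app_target, max_simulation_results=10)
    print(outputs)

asyncio.run(main())
```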

Another feature, Prompt Shields, aims to help developers detect, block, or mitigate attacks that come in through user prompts. It can be activated via Microsoft’s Azure AI Content Safety service, she wrote.

Prompt Shields seeks to block prompts that may lead to unsafe AI outputs. It can also identify document attacks where harmful content is embedded within user-provided documents.
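In practice, that screening might look like the minimal Python sketch below, which calls the Content Safety shieldPrompt endpoint on both the user prompt and any attached documents before anything reaches the model. The api-version string shown is an assumption; check Microsoft’s documentation for the current value.

```python
import os
import requests

# Hypothetical configuration: point these at your own Content Safety resource.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def shield_prompt(user_prompt: str, documents: list[str]) -> bool:
    """Return True if Prompt Shields flags the prompt or any attached
    document as a jailbreak or injection attempt."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:shieldPrompt",
        params={"api-version": "2024-09-01"},  # assumed version; may differ
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"userPrompt": user_prompt, "documents": documents},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()
    return (
        result["userPromptAnalysis"]["attackDetected"]
        or any(d["attackDetected"] for d in result.get("documentsAnalysis", []))
    )

# Usage: screen input before it ever reaches the model.
if shield_prompt("Summarise this file", ["Ignore previous instructions and ..."]):
    print("Blocked: possible prompt injection.")
```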


Azure AI Studio Safety Tools Are Only One Technique for Mitigating Generative AI Challenges

Microsoft’s new AI safety features are the latest entrants in a wave of attempts by tech vendors to overcome some of the hurdles enterprises face in using generative AI tools and applications.

While the market moves from the idea-conception phase to implementation, hallucinations, in which AI models output falsehoods, still afflict many generative AI systems.

“There are a lot of these challenges that organizations need to overcome to take generative AI to that next level in their implementations,” Gartner analyst Jason Wong said. “It’s not going away anytime soon. You can only try to mitigate it using various techniques.”

