Microsoft brings a DeepSeek model to its cloud

OpenAI, Microsoft's close partner and collaborator, may be alleging that DeepSeek breached its terms of service and misappropriated its intellectual property. But that hasn't stopped Microsoft from wanting DeepSeek's buzzy new models on its cloud computing platform.

Microsoft Adds DeepSeek R1 to Azure

Today, Microsoft announced that R1, DeepSeek's reasoning model, is available on Azure AI Foundry, the company's platform that brings a range of AI services for enterprises together under one roof. In a blog post, Microsoft said the version of R1 on Azure AI Foundry has undergone rigorous safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks.
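
For developers who want to experiment, models in the Foundry catalog are exposed through Azure's standard chat-completions interface. Below is a minimal sketch using the azure-ai-inference Python package; the endpoint, key, environment-variable names, and the "DeepSeek-R1" model name are placeholders you would swap for your own deployment's details.

```python
# Minimal sketch: calling a DeepSeek R1 deployment on Azure AI Foundry
# through the azure-ai-inference package (pip install azure-ai-inference).
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Endpoint and key come from your own Foundry deployment; these
# environment-variable names are placeholders, not anything Azure mandates.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    model="DeepSeek-R1",  # model name as it appears in your catalog
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a reasoning model does, in one paragraph."),
    ],
    max_tokens=2048,  # reasoning models emit lengthy chains of thought, so leave headroom
)

print(response.choices[0].message.content)
```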

Customers will soon be able to run "distilled" versions of R1 locally on Copilot+ PCs, Microsoft's line of Windows hardware that meets certain AI-readiness requirements, according to the company.

"We're thrilled to see how developers and entrepreneurs use […] R1 to tackle real-world problems and deliver transformative experiences as we continue to grow the model catalog in Azure AI Foundry," Microsoft added in the release.

Microsoft Investigates DeepSeek

The addition of R1 to Microsoft's cloud offerings is a curious one, given that Microsoft has reportedly opened an investigation into DeepSeek's potential abuse of its and OpenAI's services. Microsoft security researchers believe that DeepSeek may have used OpenAI's API to exfiltrate a large quantity of data in the fall of 2024. Microsoft, which also happens to be OpenAI's largest shareholder, alerted OpenAI to the suspicious activity, according to Bloomberg.

Regardless, R1 is the talk of the town, and Microsoft may well have been persuaded to bring it into its cloud fold while the model still holds that appeal.

It's unclear whether Microsoft modified the model in any way to improve its accuracy or counteract its censorship. In a test by the information-reliability organization NewsGuard, R1 gave inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship that AI models developed in the country are subject to.

Written by Zeeshan Khan
