The landscape of Artificial Intelligence (AI) is rapidly evolving, with new models and platforms emerging to drive innovation and transform industries. At the forefront of this evolution is DeepSeek R1, a cutting-edge AI reasoning model now available on Azure AI Foundry and GitHub. This combination gives developers and enterprises unparalleled opportunities to build transformative AI applications with enhanced efficiency, security, and scalability. DeepSeek R1 joins a diverse portfolio of over 1,800 models on Azure AI Foundry, further solidifying the platform's position as a leading environment for AI development. This blog post delves into the features, benefits, and practical steps to leverage DeepSeek R1 on Azure AI Foundry, and also explores its availability on GitHub and the related security considerations.
Key Takeaways
DeepSeek R1 is available on both Azure AI Foundry and GitHub.
It is part of a portfolio of more than 1,800 models and is designed for enterprise integration with strong security.
Azure AI Foundry enables rapid AI development, with built-in tools for evaluation and iteration.
Microsoft emphasizes building trustworthy AI through safety measures such as content filtering, red teaming, and security reviews.
Microsoft Security offers comprehensive protection for AI applications, spanning threat protection, posture management, data security, and compliance.
Microsoft Defender for Cloud provides AI-specific security posture management and monitors AI workloads for threats.
Microsoft Purview helps manage data security and prevent sensitive data leaks; it can discover the use of Generative AI apps, assess risks, and set up controls.
What is DeepSeek R1?
DeepSeek R1 represents a significant leap forward in AI reasoning capabilities. As a first-generation reasoning model, it is designed to tackle complex problems that require advanced analytical and problem-solving skills.

Key Features and Capabilities
DeepSeek R1 comes with impressive capabilities:
Reasoning: DeepSeek R1 excels in tasks that demand logical reasoning and inference.
Coding: The model demonstrates proficiency in code generation and understanding.
Mathematics: It exhibits strong performance in solving mathematical problems.
Language Understanding: It shows a high degree of accuracy in understanding and processing both English and Chinese.
DeepSeek R1 and Azure AI Foundry: A Powerful Partnership
What is Azure AI Foundry?
Azure AI Foundry is a comprehensive platform designed to streamline the development and deployment of custom AI applications and APIs. It brings together the best AI models in one place, accelerating innovation and unlocking new possibilities for enterprises worldwide.

Benefits of Using DeepSeek R1 on Azure AI Foundry
Integrating DeepSeek R1 with Azure AI Foundry offers several advantages:
Scalability: Azure AI Foundry provides a scalable environment, ensuring that applications can handle increased workloads without compromising performance.
Enterprise readiness: The platform is built to meet enterprise-level requirements, offering the reliability and support needed for critical applications.
Seamless Integration: Azure AI Foundry enables businesses to integrate advanced AI capabilities seamlessly into their existing systems.
Key Advantages for Developers and Enterprises
Rapid Experimentation and Iteration
One of the primary benefits of using DeepSeek R1 on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows. This rapid accessibility transforms how developers and enterprises leverage cutting-edge intelligence.
Built-in Model Evaluation Tools
Azure AI Foundry includes built-in model evaluation tools that allow developers to quickly compare outputs and benchmark performance. These tools are essential for optimizing AI-powered applications and ensuring they meet the required standards.
Scalability and Integration
The platform enables the scaling of AI-powered applications, ensuring they can handle growing demands. The ease of integration allows businesses to incorporate AI into their processes without significant disruptions.
Developing Trustworthy AI Applications
Rigorous Safety Evaluations
Microsoft is dedicated to ensuring that customers can build production-ready AI applications quickly while maintaining the highest levels of safety and security. DeepSeek R1 has undergone rigorous red teaming and safety evaluations to mitigate potential risks.
Mitigating Potential Risks
The safety evaluations include:
Automated Assessments: Automated evaluation of model behavior to identify potential risks.
Extensive Security Reviews: Thorough security reviews to identify and address vulnerabilities.
Leveraging Azure AI Content Safety
Built-in Content Filtering
Azure AI Content Safety offers built-in content filtering to detect and block malicious, harmful, or ungrounded content. This feature is enabled by default, providing an added layer of security for AI applications.
Customization Options
While content filtering is enabled by default, Azure AI Content Safety provides opt-out options for flexibility. This allows developers to customize the filtering settings based on their specific needs and compliance requirements.
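For teams that also want to screen prompts or responses in their own code, the standalone Azure AI Content Safety SDK can be used alongside the filtering built into the deployment. The snippet below is a minimal sketch rather than the Foundry-managed filter itself; the endpoint and key are placeholders for your own Content Safety resource.

# Minimal sketch: screening a prompt with the standalone Azure AI Content Safety SDK
# (pip install azure-ai-contentsafety). This complements, not replaces, the filtering
# configured on the Foundry deployment. Endpoint and key below are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-content-safety-key>"),
)

# Analyze the text and print the severity score reported for each harm category.
result = client.analyze_text(AnalyzeTextOptions(text="Prompt text to screen before sending to the model"))
for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")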
Ensuring a Secure Environment
These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently build and deploy AI solutions.
How to Get Started with DeepSeek R1 on Azure AI Foundry
To start using DeepSeek R1 on Azure AI Foundry, follow these steps:
Sign Up for an Azure Account: If you don’t have an Azure subscription, sign up for an Azure account.
Search for DeepSeek R1: In the model catalog, search for DeepSeek R1.
Open the Model Card: Open the model card in the model catalog on Azure AI Foundry.
Deploy the Model: Click Deploy to obtain the inference API and key, and to access the playground.
Access the Deployment Page: The deployment page shows the API endpoint and key in less than a minute, and you can try out your prompts in the playground.
Use the API and Key: Use the endpoint and key with various clients, as sketched below.
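As a quick illustration of the last step, the following Python sketch calls the deployed endpoint with the standard requests library. The endpoint URL, route, and auth header shown here are assumptions based on common Azure AI serverless deployments; copy the exact values from your deployment page.

# Minimal sketch: calling the deployed DeepSeek R1 endpoint over plain HTTP.
# The endpoint, route, and auth header are assumptions; confirm them on the deployment page.
import requests

ENDPOINT = "https://<your-deployment>.models.ai.azure.com"  # placeholder from the deployment page
API_KEY = "<your-api-key>"                                  # placeholder from the deployment page

payload = {
    "messages": [{"role": "user", "content": "Explain what a reasoning model is in two sentences."}],
    "temperature": 0.6,
    "max_tokens": 1024,
}

response = requests.post(
    f"{ENDPOINT}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])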
Accessing the Model Catalog
The model catalog on Azure AI Foundry is a central repository for a wide range of AI models. It includes frontier models, open-source models, industry-specific models, and task-based AI models, making it easy to find the right model for any application.
Deployment and Inference API
Once you have deployed DeepSeek R1, you can obtain the inference API endpoint and key. This API allows you to integrate the model into your applications and start generating responses.
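If you prefer a typed client over raw HTTP, the azure-ai-inference Python package can be pointed at the same deployment. This is a minimal sketch; the endpoint string and model name are placeholders to replace with the values shown on your deployment page.

# Minimal sketch using the azure-ai-inference package (pip install azure-ai-inference).
# Endpoint and key are placeholders; use the values from your deployment page.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.models.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[UserMessage(content="Summarize the key strengths of reasoning models.")],
    temperature=0.6,
    max_tokens=1024,
)
print(response.choices[0].message.content)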
Exploring DeepSeek R1 on GitHub
Accessing Resources and Guides
In addition to Azure AI Foundry, DeepSeek R1 is also available on GitHub, where you can explore additional resources and step-by-step guides to integrate DeepSeek R1 seamlessly into your applications. The GitHub repository provides valuable information for developers looking to customize and extend the model's capabilities.

DeepSeek R1 Model Details and Architecture
DeepSeek-R1-Zero and DeepSeek-R1
DeepSeek R1 includes two primary models: DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero is trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT), demonstrating remarkable reasoning performance. DeepSeek-R1 incorporates cold-start data before RL to address challenges like repetition and readability.
Reinforcement Learning Approach
The models are trained based on DeepSeek-V3-Base, using reinforcement learning to enhance their reasoning capabilities. This approach allows the models to explore chain-of-thought (CoT) reasoning for solving complex problems, which led to the development of DeepSeek-R1-Zero.
Distillation and Smaller Models
DeepSeek AI demonstrates that the reasoning patterns of larger models can be distilled into smaller models, yielding better performance than training those smaller models with reinforcement learning directly. Several dense models, including 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on the Qwen2.5 and Llama3 series, have been open-sourced to the community.
Evaluation Results and Performance Benchmarks
Performance Across Various Tasks
DeepSeek R1 achieves state-of-the-art performance across various benchmarks:
English: Excels in tasks like MMLU, DROP, and GPQA-Diamond.
Code: Demonstrates high proficiency in LiveCodeBench and Codeforces.
Math: Shows strong performance in AIME 2024 and MATH-500.
Chinese: Performs well in CLUEWSC and C-Eval.
Comparison with Other Models
DeepSeek R1 outperforms several other models, including Claude-3.5-Sonnet-1022 and GPT-4o-0513, on many of these tasks, and is competitive with OpenAI's o1 series. The distilled models also exhibit competitive performance, often surpassing OpenAI-o1-mini on benchmarks.
Running DeepSeek R1 Locally
DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same way as Qwen or Llama models. This makes it easier for developers familiar with those models to integrate DeepSeek-R1-Distill into their projects.
Tools and Recommendations
You can easily start a service using tools like vLLM:
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
or SGLang:
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
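Both commands expose an OpenAI-compatible HTTP server, so once the service is up you can query it with the standard openai Python client. The sketch below assumes vLLM's default address of http://localhost:8000/v1; adjust the base URL if your setup differs (for example, SGLang defaults to a different port).

# Minimal sketch: querying a locally served distill model through the OpenAI-compatible API.
# Assumes vLLM's default http://localhost:8000/v1; change base_url for other servers or ports.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local servers typically ignore the key

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Please reason step by step: what is 17 * 24?"}],
    temperature=0.6,
)
print(response.choices[0].message.content)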
Usage Recommendations for Optimal Performance
Temperature Settings
Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
Prompt Engineering
Avoid adding a system prompt; all instructions should be contained within the user prompt. For mathematical problems, include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}".
Ensuring Thorough Reasoning
To ensure thorough reasoning, enforce the model to start its response with "<think>\n" at the beginning of every output.
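To make these recommendations concrete, here is a minimal sketch that loads one of the smaller distilled checkpoints with Hugging Face transformers, keeps all instructions in the user turn, sets the temperature to 0.6, and appends "<think>\n" to the prompt so the model begins with its reasoning block. It is illustrative only; generation settings and memory requirements will vary with the checkpoint you choose.

# Minimal sketch applying the usage recommendations above with Hugging Face transformers.
# Uses a small distilled checkpoint for illustration; larger checkpoints need more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# No system prompt: every instruction lives in the user message, including the math directive.
messages = [{"role": "user", "content": "If 3x + 5 = 26, what is x? Please reason step by step, and put your final answer within \\boxed{}."}]

# Build the prompt and append "<think>\n" so the response starts with its reasoning block.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) + "<think>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)  # recommended range 0.5-0.7
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))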
Security Measures and Compliance
Microsoft Security for AI Applications
Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications.
AI Security Posture Management
Microsoft Defender for Cloud’s AI security posture management capabilities help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, and get recommendations to strengthen their security posture against cyber threats.

Cyberthreat Protection
Defender for Cloud continuously monitors DeepSeek AI applications for unusual activity. It integrates with Azure AI Content Safety to block prompt injection cyberattacks in real time.
Securing the DeepSeek App
App Risk and Access Control: Microsoft Defender for Cloud Apps provides risk assessments for Generative AI apps, including the DeepSeek app, and allows organizations to discover their use and control access based on risk factors.

Data Security and DLP: Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks. Microsoft Purview Data Loss Prevention (DLP) enables you to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers.
Conclusion
DeepSeek R1 on Azure AI Foundry marks a significant milestone in AI innovation. By providing a powerful, secure, and scalable platform, Microsoft empowers developers and enterprises to build transformative AI applications that drive real-world impact. With its impressive capabilities, rigorous safety measures, and comprehensive security features, DeepSeek R1 is poised to shape the future of AI. Explore Azure AI Foundry and GitHub to unlock the full potential of DeepSeek R1 and embark on a journey of AI-driven innovation.