Blog
Blog posts on .NET, Azure, and more.
Chat with your data - Semantic Kernel powered RAG application
February 28, 2025 by Anuraj
dotnet AI RAG SemanticKernel
In this blog post, we’ll learn how to chat with your data by building a Semantic Kernel powered RAG application. I will be extending the copilot application with custom data. First I will ask a question about the ICC Champions Trophy 2025 - since it is not part of the model's training data, it will respond with something like this.
Running DeepSeek-R1 locally for free
January 28, 2025 by Anuraj
dotnet AI DeepSeek
In this blog post, we’ll learn how to run DeepSeek-R1 locally for free. DeepSeek-R1 is a new large language model, explicitly designed as a “reasoning model” - a category that is expected to drive key advances in large language models in 2025.
Build your own copilot with Semantic Kernel
January 25, 2025 by Anuraj
dotnet AI
In this blog post, we’ll learn how to build your own copilot with Semantic Kernel and C#. Today I presented a session on this topic at K-MUG. For this demo I am using a console application, but any type of .NET application - Windows or web - will work. A copilot is a special type of agent that is meant to work side-by-side with a user. In this blog post I am using the GPT-4o model from GitHub Models.
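As a quick sketch of the idea, the chat loop at the heart of such a copilot might look like this in C#. This is a minimal sketch, assuming the GitHub Models OpenAI-compatible endpoint and a personal access token in a GITHUB_TOKEN environment variable - the full post has the exact setup.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Assumed endpoint and token variable for GitHub Models - adjust as needed.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o",
    endpoint: new Uri("https://models.inference.ai.azure.com"),
    apiKey: Environment.GetEnvironmentVariable("GITHUB_TOKEN"));
var kernel = builder.Build();

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory("You are a helpful copilot.");

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;

    // Keep the conversation history so the model has context for follow-ups.
    history.AddUserMessage(input);
    var reply = await chat.GetChatMessageContentAsync(history, kernel: kernel);
    history.AddAssistantMessage(reply.Content ?? string.Empty);
    Console.WriteLine($"Copilot: {reply.Content}");
}
```

The same code works in a web or Windows app; only the input/output plumbing changes.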
Create Images using Semantic Kernel and Azure OpenAI
December 31, 2024 by Anuraj
dotnet AI
In this blog post, we’ll learn how to create images using Semantic Kernel and Azure OpenAI in C#. We can use the DALL-E model to generate the images. First, create a console application with the dotnet new console command, then add the Semantic Kernel NuGet package with the dotnet add package Microsoft.SemanticKernel command.
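Once the package is in place, the image generation itself can be sketched roughly as below. The deployment name, endpoint placeholder, and key variable are assumptions; use the values from your own Azure OpenAI resource. Note that the text-to-image connector is experimental, so the warning suppressions are required.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextToImage;

#pragma warning disable SKEXP0001, SKEXP0010 // text-to-image support is experimental

// Placeholder endpoint and key - substitute your Azure OpenAI resource values.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAITextToImage(
    deploymentName: "dall-e-3",
    endpoint: "https://<your-resource>.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY"));
var kernel = builder.Build();

var imageService = kernel.GetRequiredService<ITextToImageService>();

// Returns the URL of the generated image.
var imageUrl = await imageService.GenerateImageAsync(
    "A watercolor painting of a lighthouse at sunset",
    width: 1024,
    height: 1024);
Console.WriteLine(imageUrl);
```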
Using Azure OpenAI global batch deployments in .NET and C#
December 19, 2024 by Anuraj
dotnet AI
The Azure OpenAI Batch API is designed to handle large-scale and high-volume processing tasks efficiently. It processes asynchronous groups of requests with separate quota, a 24-hour target turnaround, and 50% less cost than the global standard deployment. In the Azure portal, we need to deploy the model with Global Batch as the deployment type. For this demo I am deploying the GPT-4o model as a Global Batch deployment.
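For context, a batch job takes a .jsonl input file where each line is one self-contained request. A single line looks roughly like this - the custom_id and the gpt-4o-batch deployment name are illustrative, and the model field must match your Global Batch deployment name:

```json
{"custom_id": "task-1", "method": "POST", "url": "/chat/completions", "body": {"model": "gpt-4o-batch", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "When was Microsoft founded?"}]}}
```

The file is uploaded, the batch job is created against it, and the results are returned as a matching output .jsonl keyed by custom_id.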
Copyright © 2025 Anuraj. Blog content licensed under the Creative Commons CC BY 2.5 | Unless otherwise stated or granted, code samples licensed under the MIT license. This is a personal blog. The opinions expressed here represent my own and not those of my employer. Powered by Jekyll. Hosted with ❤ by GitHub