How to Build AI-First Apps with Microsoft Semantic Kernel Plugins — A Practical To-Do Manager Example

Introduction

Large Language Models (LLMs) like GPT-4 are amazing at answering questions and drafting text — but what if you want them to run real code, query your data, or interact with your systems?

This is where Microsoft Semantic Kernel shines. It lets you build AI-first applications by combining:

  • LLMs (like OpenAI or Azure OpenAI)
  • Your own code plugins
  • External APIs, databases, or business logic

In this post, I’ll show you how to use Semantic Kernel’s plugin concept with a simple, practical example: an AI To-Do Manager that understands your natural language instructions and automatically runs Python functions to add, list, and remove tasks.

By the end, you’ll see exactly how to:

  • Build an AI agent with function calling
  • Write your own native plugins
  • Extend this pattern for real-world use cases

Let’s get started!


What Are Plugins in Semantic Kernel?

In Semantic Kernel, a plugin is a collection of functions that the AI can use to do things it can’t do on its own — like call your code, fetch data, or update something in the real world.


When a user sends a prompt, the LLM decides:

  • Do I just answer directly?
  • Or do I call one or more functions to get information or run an action?

This is called function calling — the LLM plans what needs to happen, the kernel runs your code, and then the LLM generates a final response with the results.


How This Works

Here’s the basic flow:

  1. User Prompt: “Add ‘Finish my report’ to my tasks.”
  2. LLM Decides: Calls AddTask function with "Finish my report" as input.
  3. Plugin Runs: Your Python function adds it to the task list.
  4. LLM Responds: “Got it! I’ve added ‘Finish my report’ to your list.”

This pattern is simple but powerful: it means your AI agent can do real work, not just chat.
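To make the "LLM Decides" step concrete: the model never executes code itself. It emits a structured function-call request, which the kernel matches to your plugin and runs. Semantic Kernel handles the wire format for you, but the request looks roughly like this (field names follow the OpenAI tool-calling convention; the fully qualified name shown is an illustrative assumption about how plugin and function names are joined):

```python
import json

# Roughly what the model returns when it opts to call a plugin function
# instead of answering directly. Semantic Kernel parses this, invokes the
# matching Python method, and sends the return value back to the model.
tool_call = {
    "name": "ToDo-AddTask",  # plugin name + function name (illustrative)
    "arguments": '{"task": "Finish my report"}',  # JSON-encoded arguments
}

# The kernel decodes the arguments and dispatches to add_task(task=...)
args = json.loads(tool_call["arguments"])
print(f'Calling AddTask with task={args["task"]!r}')
```

The final assistant message ("Got it! I've added...") is then generated with the function's return value in context.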


A Practical Example: The AI To-Do Manager

Our To-Do Manager plugin exposes three functions:

  • AddTask: Add a task to a list
  • ListTasks: Show your current tasks
  • RemoveTask: Remove a task by its number

Below is the plugin code:

from semantic_kernel.functions import kernel_function

class ToDoPlugin:
    def __init__(self):
        self.tasks = []

    @kernel_function(
        name="AddTask",
        description="Add a new task to the to-do list"
    )
    def add_task(self, task: str) -> str:
        self.tasks.append(task)
        return f'Task added: "{task}"'

    @kernel_function(
        name="ListTasks",
        description="List all tasks in the to-do list"
    )
    def list_tasks(self) -> str:
        if not self.tasks:
            return "No tasks yet!"
        return "\n".join(f"{idx+1}. {t}" for idx, t in enumerate(self.tasks))

    @kernel_function(
        name="RemoveTask",
        description="Remove a task by its index (starting at 1)"
    )
    def remove_task(self, index: int) -> str:
        if 0 < index <= len(self.tasks):
            removed = self.tasks.pop(index - 1)
            return f'Removed task: "{removed}"'
        return "Invalid task index!"        

Each function is decorated with @kernel_function so Semantic Kernel knows how to expose it to the LLM.
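Because @kernel_function only attaches metadata (the name and description the kernel advertises to the LLM), the methods themselves remain ordinary Python. That means you can sanity-check the plugin logic directly, no model required. In this sketch the decorator is omitted so the snippet runs even without semantic_kernel installed:

```python
# Standalone check of the to-do logic: same methods as ToDoPlugin above,
# with the @kernel_function decorator omitted so no dependencies are needed.
class ToDoLogic:
    def __init__(self):
        self.tasks = []

    def add_task(self, task: str) -> str:
        self.tasks.append(task)
        return f'Task added: "{task}"'

    def list_tasks(self) -> str:
        if not self.tasks:
            return "No tasks yet!"
        return "\n".join(f"{i+1}. {t}" for i, t in enumerate(self.tasks))

    def remove_task(self, index: int) -> str:
        if 0 < index <= len(self.tasks):
            removed = self.tasks.pop(index - 1)
            return f'Removed task: "{removed}"'
        return "Invalid task index!"

todo = ToDoLogic()
print(todo.add_task("Finish my report"))  # Task added: "Finish my report"
print(todo.add_task("Call mom"))
print(todo.list_tasks())                  # 1. Finish my report
                                          # 2. Call mom
print(todo.remove_task(2))                # Removed task: "Call mom"
print(todo.remove_task(5))                # Invalid task index!
```

Keeping the plugin logic this plain is a feature: the same class is trivially unit-testable and trivially exposable to the LLM.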


Putting It All Together

In your main app (agent.py), you:

  • Create a Kernel
  • Add your LLM service (OpenAI or Azure OpenAI)
  • Add your ToDoPlugin as a plugin
  • Start a simple chat loop

The AI automatically calls your plugin functions as needed:

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.contents.chat_history import ChatHistory
from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.azure_chat_prompt_execution_settings import AzureChatPromptExecutionSettings

from todo_plugin import ToDoPlugin

import asyncio, os
from dotenv import load_dotenv

async def main():
    load_dotenv()

    kernel = Kernel()

    # Add Azure OpenAI chat completion
    chat_completion = AzureChatCompletion(
        deployment_name=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        base_url=os.getenv("AZURE_OPENAI_BASE_URL"),
    )
    kernel.add_service(chat_completion)

    # Add your plugin
    kernel.add_plugin(ToDoPlugin(), plugin_name="ToDo")

    # Enable planning (automatic function calling)
    execution_settings = AzureChatPromptExecutionSettings()
    execution_settings.function_choice_behavior = FunctionChoiceBehavior.Auto()

    history = ChatHistory()

    print("💡 AI To-Do Manager is ready! Type your tasks or 'exit' to quit.\n")

    while True:
        user_input = input("You > ")
        if user_input.lower() == "exit":
            break

        history.add_user_message(user_input)

        result = await chat_completion.get_chat_message_content(
            chat_history=history,
            settings=execution_settings,
            kernel=kernel,
        )

        print("Assistant >", str(result))
        history.add_message(result)

if __name__ == "__main__":
    asyncio.run(main())
        

When you run this, you can try:

You > Add 'Finish project proposal'
You > Add 'Call mom'
You > Show my tasks
You > Remove the second task
You > What tasks are left?        

The AI calls your plugin functions automatically.


How to Run This Yourself

I’ve packaged this as a starter GitHub repo so you can try it right away.

View on GitHub

How to run:

  1. Clone the repo
  2. Set up a Python virtual environment
  3. Add your Azure OpenAI or OpenAI keys to .env
  4. Install dependencies: pip install -r requirements.txt
  5. Run it: python agent.py
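For step 3, the variable names must match what agent.py reads via os.getenv. Based on the code above, your .env file would look like this (all values are placeholders; the deployment name is whatever you named your model deployment in Azure):

```ini
# .env — placeholders only; fill in your Azure OpenAI resource details
AZURE_OPENAI_DEPLOYMENT_NAME=<your-deployment-name>
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_BASE_URL=https://<your-resource>.openai.azure.com/
```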


How This Pattern Helps in the Real World

This simple To-Do Manager shows a real pattern you can use in many scenarios:

  • ✔️ Customer Support Copilot: Call plugins to look up tickets or update records in your CRM.
  • ✔️ Smart Home Agent: Plugins to turn lights on/off, check security cameras.
  • ✔️ Database Assistant: Plugins that query your SQL database and summarize results.
  • ✔️ Knowledge Assistant: Use Semantic Memory to find facts from your company docs.

Plugins make your AI agent truly useful — connecting the LLM’s language skills to your real business logic and data.
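To see how little changes between scenarios, here is a hypothetical sketch of the first one: a support plugin whose lookup is backed by a toy dict where a real version would query your CRM. The class, function names, and data are all made up for illustration, and the try/except fallback just lets the snippet run even without semantic_kernel installed:

```python
# Hypothetical support-copilot plugin: same pattern as ToDoPlugin, with a
# toy dict standing in for a real CRM query.
try:
    from semantic_kernel.functions import kernel_function
except ImportError:
    # No-op stand-in so the sketch runs without semantic_kernel installed.
    def kernel_function(**_kwargs):
        def wrap(fn):
            return fn
        return wrap

class TicketPlugin:
    def __init__(self):
        # Made-up data for illustration; replace with a real CRM call.
        self.tickets = {
            "T-101": "Printer offline on floor 3",
            "T-102": "VPN access request",
        }

    @kernel_function(
        name="LookUpTicket",
        description="Look up a support ticket by its ID"
    )
    def look_up_ticket(self, ticket_id: str) -> str:
        summary = self.tickets.get(ticket_id)
        if summary is None:
            return f"No ticket found with ID {ticket_id}"
        return f"{ticket_id}: {summary}"

print(TicketPlugin().look_up_ticket("T-101"))  # T-101: Printer offline on floor 3
```

You would register it exactly like the to-do plugin: kernel.add_plugin(TicketPlugin(), plugin_name="Tickets").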


Key Takeaways

  • Plugins = AI + your code.
  • Use @kernel_function to expose native Python functions.
  • The LLM plans when to call them with function calling.
  • You can build an AI that does real work — not just chat.


🔗 Try the Repo

👉 Get the Starter Repo. Feel free to fork it, extend it, and make it your own.



✨ Happy Building!

I hope this shows you how easy it is to combine AI with your own systems using Semantic Kernel plugins.
