We help you build your personalised Copilot

Building your custom Copilot: a pro-code guide to speed up the process

The potential of generative AI means that many IT professionals now want, or need, to develop their own virtual assistant and integrate it into everyday tools such as Teams.

Microsoft offers two solutions for building more advanced products, depending on our technical level and the time available for development. On the one hand, there is the Teams AI library, used together with Azure OpenAI Service; on the other, the new Microsoft Copilot Studio tool.

Teams, as a collaboration and videoconferencing platform, is positioned as the perfect environment to incorporate these virtual assistants. It enables new forms of collaboration and natural conversation. Thanks to this tool, conversations can be held in chats, channels and meetings, providing relevant answers and coordinating with other team members.

HOW TO CREATE YOUR 100% PERSONALISED COPILOT?

Depending on the application’s goals and development preferences, there are several tools available for the creation of a custom virtual assistant. Microsoft offers two main development paths:

  1. Azure OpenAI and Teams AI Library: This option is aimed at professional code developers focused on creating virtual assistants in Teams. Azure OpenAI and the Teams AI Library provide a comprehensive set of tools that simplify virtual assistant development and offer Teams-specific functionality. It is ideal for developers who require flexibility and power in professional coding environments and want to extend their application to other aspects of Teams: in particular, message extensions, tabs, meeting apps, link unfurling, calling bots and personal apps.
  2. Copilot Studio: For developers of all levels. It provides a graphical or natural-language-based development environment for creating and deploying virtual assistants across multiple channels. It is the primary choice for less technical users, but professional code developers can also take advantage of the advanced functionality within the platform and, in addition, extend their virtual assistant through the capabilities of the Microsoft Bot Framework.

Both options enable the creation of impactful customised virtual assistants for any organisation. If you think Copilot Studio is right for you, you can learn more and try it for free.

WE HELP YOU CREATE YOUR PERSONALISED COPILOT IN A QUICKER, MORE AGILE WAY

Do you want to create effective virtual assistants in Teams in a faster, more agile way? Read on: next, we dive into the professional-code path using Azure OpenAI and the Teams AI Library.

The Copilot stack is structured in three levels: the back-end, an AI orchestration layer and the front-end. Each level has sub-layers with recommended components for building the virtual assistant.

  • Back-end: 

The back-end level comprises the AI infrastructure that hosts the large language model (LLM), as well as the data that can personalise the model and make its answers relevant to users. At this point in the Copilot stack, the contribution of Azure OpenAI Service and Azure is crucial.

Azure OpenAI Service provides access to OpenAI’s powerful range of GPT language models, backed by Azure’s leading AI infrastructure. It lets you connect a base model to data in Azure, ensures support for enterprise security and compliance, and includes dedicated AI features such as responsible content filtering. Models can be deployed globally and at scale on Azure infrastructure in a few clicks, leveraging the supercomputing performance of Azure N-series virtual machines and thousands of NVIDIA H100 GPUs.

Teams is compatible with a variety of model platforms, but choosing Azure OpenAI and Azure for the back-end of the Copilot stack provides a solid foundation: it supports the responsible growth of AI, allowing developers to focus on creating the best AI application for users’ needs.

  • AI orchestration:

AI orchestration manages the multiple AI components and services within your personalised virtual assistant, coordinating them to accomplish complex tasks together. Rather than just summarising a meeting, your AI can identify, create and complete multiple follow-up tasks for the meeting as needed.

To create a Teams-centric virtual assistant, Azure OpenAI and the Teams AI Library provide a complete set of orchestration capabilities. By using the Teams AI Library, various functionalities are automatically integrated into the AI application, including conversational logic and an advanced planning engine that identifies user intent and maps it to corresponding actions.

Other areas, such as prompt engineering, are simplified, making it easier to instruct the AI application on how to interact with users and even give it an engaging personality. The library also allows the AI to perform multi-step actions independently and reliably. Optimising these areas depends on the needs of the application, and experimentation can help discover what works best for users in Teams: the Azure OpenAI Playground allows for rapid testing of different approaches, while the Teams AI Library includes several samples that showcase its functionality.
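The intent-to-action mapping described above can be sketched in plain Python. This is a hypothetical illustration of what a planner does (register actions, then pick one based on the user's intent); it is not the Teams AI Library's actual API, and all names are made up for the example:

```python
from typing import Callable

# Registry of actions the (hypothetical) planner can invoke by name.
actions: dict[str, Callable[[str], str]] = {}

def action(name: str):
    """Decorator that registers a handler under a given action name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        actions[name] = fn
        return fn
    return register

@action("create_task")
def create_task(details: str) -> str:
    return f"Task created: {details}"

def plan(user_message: str) -> str:
    # In a real app the LLM decides which registered action matches the
    # user's intent; here a trivial keyword rule stands in for the model.
    if "task" in user_message.lower():
        return actions["create_task"](user_message)
    return "No matching action; reply conversationally."

print(plan("Please add a follow-up task for the meeting"))
# → Task created: Please add a follow-up task for the meeting
```

The point of the pattern is separation of concerns: handlers hold the business logic, while the model (here faked by a keyword check) only chooses which one to run.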

  • Front-end:

The front-end comprises the user experience (UX) for interacting with the AI application in a conversational way. This can be a chat interface in a bot or a dashboard in Microsoft 365 applications, such as Microsoft’s own Copilot.

To let users interact directly with your AI, the Teams AI Library serves as an interface to large language models and user-intent engines, making it easy to integrate into Teams with a complete development toolkit.

With pre-built components available, you can concentrate on adding business logic, and the process of adding Teams-specific capabilities, such as message extensions and Adaptive Cards, is simplified. Additional features, such as conversational session history, allow the AI to remember context across messages and provide relevant responses in a dynamic conversation.
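Conversational session history can be as simple as keeping prior turns in the message list sent to the model on each call. The dict shape below follows the common chat-message format; the history handling itself is an illustrative sketch, not the library's implementation:

```python
# Session history: every turn is appended, and the whole list is sent
# as the model's `messages` input on each call, so earlier turns
# provide the context for the latest question.
history = [{"role": "system", "content": "You are a helpful Teams assistant."}]

def add_turn(role: str, content: str) -> None:
    history.append({"role": role, "content": content})

add_turn("user", "What did we decide in today's meeting?")
add_turn("assistant", "You agreed to ship the beta on Friday.")
add_turn("user", "What day was that again?")

# Because the full history travels with the request, "that" in the
# last turn can resolve to "Friday".
print(len(history))  # → 4
```

In production you would also cap or summarise old turns so the history stays within the model's context window.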

In short, the Teams AI library quickly transforms the model into a complete conversational experience. It fits seamlessly into the workflow. Group chats, meetings and channels are enriched with AI that can perform complex intellectual tasks with ease. The entire user experience in Teams is ready for the application of AI.

CREATE YOUR OWN VIRTUAL ASSISTANT THANKS TO COPILOT STACK, AZURE OPENAI AND TEAMS AI LIBRARY

Here are the fundamental steps to create your own virtual assistant and deploy it in Teams, using the Copilot stack together with Azure OpenAI and the Teams AI library:

Step 1 – Model selection:

First of all, you need to set up an Azure account; if you don’t have one, you can create one for free. After registering for Azure OpenAI, you can access the service through the REST APIs, the Python SDK or the web interface. In the Deployments section, you can create a new deployment and choose from several available OpenAI models. A recommended model to start creating your own virtual assistant in Teams is gpt-35-turbo-16k: it is part of the GPT family and is highly capable and cost-effective. Once you have selected the model, assign a name to the deployment; you will need to specify this name in your application code. When the deployment status shows as succeeded, it is ready for use.
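As a sketch, calling your deployment from Python with the Azure OpenAI SDK (openai v1.x) might look like the following. The deployment name my-gpt35-deployment, the endpoint and the environment-variable names are placeholders you would replace with your own:

```python
import os

def build_chat_request(deployment: str, user_message: str) -> dict:
    """Assemble the arguments for a chat-completions call. On Azure,
    `model` takes the *deployment name* you assigned, not the model family."""
    return {
        "model": deployment,
        "messages": [{"role": "user", "content": user_message}],
    }

def ask(deployment: str, user_message: str) -> str:
    # Imported lazily so the sketch can be read without the SDK installed.
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    resp = client.chat.completions.create(**build_chat_request(deployment, user_message))
    return resp.choices[0].message.content

request = build_chat_request("my-gpt35-deployment", "Hello!")
print(request["model"])  # → my-gpt35-deployment
```

Note the Azure-specific detail: unlike the plain OpenAI API, the `model` field carries your deployment name.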

Step 2 – Adding data (optional):

After selecting the model and underlying platform, you can customise the model and add contextual relevance by connecting it to your data. Azure OpenAI allows you to connect your data to compatible chat models, such as GPT-35-Turbo and GPT-4, with no need to retrain or fine-tune the models. Multiple data sources can be used, such as an Azure Cognitive Search index, an Azure Blob Storage container, local files, or open-source options such as Vectra. Integration with the Teams AI Library allows for various data connection methods.
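As an illustration, the "bring your own data" pattern extends an ordinary chat request with a data-source entry pointing at, for example, an Azure Cognitive Search index. The helper and field names below are a sketch based on the public request shape at the time of writing; check the current API reference before relying on them:

```python
def with_search_data(request: dict, endpoint: str, index: str, key: str) -> dict:
    """Attach an Azure Cognitive Search index so answers are grounded
    in your own documents (field names are illustrative)."""
    request["extra_body"] = {
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": endpoint,
                "index_name": index,
                "authentication": {"type": "api_key", "key": key},
            },
        }]
    }
    return request

req = {"model": "my-gpt35-deployment",
       "messages": [{"role": "user", "content": "Summarise our returns policy"}]}
req = with_search_data(req, "https://<search>.search.windows.net", "docs-index", "<key>")
print(req["extra_body"]["data_sources"][0]["type"])  # → azure_search
```

Because the grounding happens at request time, the base model itself is untouched, which is why no retraining or fine-tuning is needed.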

Step 3 – Obtain the Azure OpenAI endpoint and key:

When you are ready, go to the “Keys and Endpoint” section in Azure OpenAI. This will give you the endpoint and key you need to make calls to the service.
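A common pattern is to keep the endpoint and key out of your source code and read them from environment variables. The variable names below are a convention, not something the service requires:

```python
import os

# Placeholders; in a real app, set these in your shell, CI secrets,
# or app settings rather than hard-coding them.
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com")
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<your-key>")

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
api_key = os.environ["AZURE_OPENAI_API_KEY"]
```

Keeping credentials in the environment also makes it easy to point the same code at different Azure OpenAI resources per environment.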

Step 4 – Integration with Teams and Microsoft 365:

To bring your app to Teams, follow the guide “Building your first app with the Teams AI library”. With it, you can easily create an AI application using the Teams Toolkit extension for Visual Studio Code or another development environment. Add the Azure OpenAI key and endpoint so the app can call the large language model (LLM). The Teams AI Library also works well with other model platforms, such as OpenAI: just add the appropriate key. Customise your AI assistant’s responses by adding instructions in the skprompt.txt file, which will give it a focus and even a personality, if you wish.
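For reference, a skprompt.txt can contain nothing more than plain-text instructions for the assistant. The content below is purely illustrative:

```
You are a friendly assistant for a product engineering team.
Answer questions about Teams meetings concisely and politely.
If you do not know an answer, say so instead of guessing.
```

Editing this file is usually the quickest way to iterate on tone and behaviour without touching application code.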

Once these steps are complete, update the model name in the configuration files to launch your AI application locally or via Azure in Teams.

With these simple steps you have created your own virtual assistant in Teams. Now, you can explore ways to further customise your assistant to provide the best experience for your users.
Agustín Plaza Alcántara – Lead Developer at Itequia