This workshop will guide you step-by-step through the process of developing an enterprise bot using the Azure Bot Service, the Microsoft Bot Framework, and the Virtual Assistant Template. It's designed to accelerate the development of a Minimum Viable Product (MVP) while minimizing time-to-value. This workshop IS NOT about how to build A BOT, it's about how to build YOUR BOT! Think of it as the first sprint of your bot's MVP milestone.
Although this workshop focuses primarily on the Plan and Build phases, it does pull select topics from the Test, Publish, and Manage phases where needed. To support your post-workshop journey to production, you can use the links and resources included in the Test, Publish, Manage, and Learn phases to round out your bot skills and accelerate your development all the way through to production deployment.
Overview
Plan
Workshop Prechecks and Prerequisites
Identify Scenarios
Author Dialogs, Design Cards, and Visualize
Identify and Curate Q&A Content
(Post Workshop) Define Actions and Supporting Activities
(Post Workshop) Review Design Guidelines
Build
Preface - Conversational AI fundamentals and Motivation
Step 1 - Create Your Virtual Assistant
Step 1.1 - Create, Deploy, and Run Your Virtual Assistant
Step 1.2 - Update NuGet Packages
Step 1.3 – Add Source Code to Local Git Repo
Step 2 - Customize your Virtual Assistant
Step 3 - Update QnA Maker Knowledge Base (KB)
Step 3.1 – Edit "faq" Knowledge Base Name
Step 3.2 – Delete Default Content
Step 3.3 – Add QnA pair
Step 3.4 – Add Alternate Questions
Step 3.5 – Add Metadata to Questions
Step 3.6 – Turn on Active Learning
Step 3.7 – Update Your Local Models
(Optional) Collaborate
Step 4 - Add Your Core Skill
Step 4.1 – Create, Provision, Run, and Connect Your Core Skill
Step 4.2 - Update NuGet Packages
Step 5 - Implement Your Skill's Core Scenario
Step 5.1 - Add LUIS Intent in the LUIS portal
Step 5.2 - Update Skill and Assistant to include new Intent
Step 5.3 - Update Skill to Act on Intent and Begin Dialog with User
Step 6 - (Optional) Add Multiturn QnA Prompts to Your Assistant
Step 7 - (Optional) Add a Built-in Skill to Your Virtual Assistant
(Optional) What do these parameters mean?
Step 8 - Wrapping Up Build Stage
Changing Endpoint
Adding Secure Web Chat Control
Configure DevOps
Unit Testing Bots (to complete CI/CD pipeline)
Analyzing Bot Usage
How to Update Templates and Deployment Scripts to the Latest Version
Test
Test Phase Resources
Publish
Publish Phase Resources
Manage
Manage Phase Resources
Learn
Learning Phase Resources
Appendix – Important Links
Appendix – Publish Virtual Assistant or Skill using Visual Studio
- Workshop Prechecks and Prerequisites
- Identify Scenarios
- Author Dialogs, Design Cards, Visualize
- Identify and Curate Q&A Content
- (Post Workshop) Define Actions and Supporting Activities
- (Post Workshop) Review Design Guidelines
In the planning stage you'll take steps to prepare for an effective workshop that's been designed to accelerate the development of an enterprise-grade bot. These steps do not require participants to have any prior experience with the Microsoft Bot Framework. A step-by-step video of the entire Planning phase described below, including screen-by-screen installation steps, can be found here. Spending quality time preparing for the workshop is the best way to ensure a successful outcome. Steps 1 through 3 should be discussed in a pre-call with the architect who will be leading the workshop. Steps 4 and 5 can be reviewed for context before the workshop and followed up on post-workshop.
This workshop centers around the Microsoft Bot Framework's Virtual Assistant, which serves to define what an enterprise bot looks like on the Microsoft Bot Platform. A critical step in preparing for the Bot Accelerator Workshop is installing the Virtual Assistant and making sure you have enough permissions in your Azure Subscription to create all the resources and services required by the Virtual Assistant Template.
Installing the required prerequisites can be done before the workshop starts or during the workshop itself. If you decide to do the installation steps during the workshop, then it's very important to confirm before the workshop starts that the attendees have enough permissions to install software on their development PCs (i.e. admin rights to the PC).
There are two sets of prerequisites that must be installed for this workshop:
- Virtual Assistant Template and its prerequisites which can be found here
- Skills Template and its prerequisites which can be found here
The installation instructions for both must be followed exactly or you will experience strange errors later when you try to build your bot, and it won't be obvious that the errors you are seeing are the result of an improper installation. To help you follow the installation instructions, you can take a look at the Planning phase video that includes screen-by-screen instructions for installing the Virtual Assistant here.
Note: For Skills, you only need to do step 1 where you install the Skills template since the rest of those installation steps are redundant.
If you already have Visual Studio 2019 installed, it's not a bad idea to check that Visual Studio is up to date, which you can do by launching it and choosing Help | Check for Updates. This can safely be done before the workshop begins.
In addition to the above installation prerequisites, developers will need to ensure their subscription contains registrations for each provider required by the Virtual Assistant or deployment will fail. Here is a list of required resource providers:
[add list of required providers here]
Developers will also need to have contributor rights at the subscription or resource group level for the following:
- Azure portal rights to create:
- Azure Active Directory Application Registrations
- Resource Groups
- Azure App Service
- Azure Web App Bot
- Azure Cosmos DB
- Azure App Service Plan
- Azure Cognitive Services (LUIS & QnA Maker)
- Azure Search
- Azure Application Insights
- Azure Storage Account
Although not a strict requirement, the ability to add Azure Active Directory Application Registrations and Resource Groups is very helpful. If your organization restricts those permissions, you can pass those values as command-line arguments to the Bot Framework Tools and the Virtual Assistant deployment scripts.
All these services will be created in this workshop and your developers should check to make sure they have permission to create them. One way to confirm they have enough permissions is to create each of those resources/services in the Azure Portal and then turn right around and delete them.
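If you'd rather script that create-then-delete permission check, a quick sketch using the Azure CLI is shown below. This assumes the az CLI is installed, you've run az login against the target subscription, and the resource group name and region are placeholders you'd swap for your own:

```
# Attempt to create a throwaway resource group; success confirms
# you have resource-group creation rights in this subscription.
az group create --name permcheck-rg --location westus

# Clean up immediately; success confirms deletion rights too.
az group delete --name permcheck-rg --yes --no-wait
```

You can repeat the same pattern for individual services (for example, an Azure Search or Cognitive Services resource) to confirm each provider is registered and permitted before the workshop begins.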
Knowing which scenarios the bot will be expected to handle is as important to the bot as knowing a person's job responsibilities are for a human. Thinking about the bot as a real person can be very helpful in discovering scenarios. When you hire a new employee, you tell them what you expect of them and what job duties they are expected to perform. Think of the bot as a human performing a specific role and job duties. What kinds of specific job duties do you want your bot to perform? The answer to that question will be your scenarios.
If your bot were performing the role of a bank teller, some of the specific job duties might be:
- Open bank account
- Close bank account
- Make deposit
- Make withdrawal
- Provide account balance
Continuing with this idea of thinking about the bot as a person, and having identified one or two key scenarios, it's only natural to now wonder how each scenario should be handled. What would a conversation for each scenario look like? What kind of back and forth would be necessary for someone to gather enough information to carry out the goal of the scenario? Essentially, you'll need to create a script, and in Conversational AI parlance, we call that script a "dialog". Dialog in this context means the conversation between the user and the bot, not a rectangular form on a computer screen.
The easiest way to model and design a dialog for a scenario is to simply jot it down and label each sentence with the name of who spoke those words. So, it might be something like:
- Bot: Hello, I'm Angie, your Contoso Bank teller
- Bot: I can help with various things like opening or closing an account, making deposits or withdrawals, or getting an account balance.
- Bot: What can I help you with today?
- User: I'd like to open an account
- Bot: Super, let's do that! I'll need to ask a few questions first, but it won't take long.
- Bot: What is your name?
- User: Russ Williams
- Bot: Russ, what is your email?
- etc., etc., etc.
- Bot: Alright, that's all we need to set up your account, Russ. You'll receive a confirmation email shortly to keep for your records
- Bot: What else can I help you with?
It is possible to create a rich mockup of a dialog design using the Chatdown tool, which is part of Microsoft's bot-builder tools. You can use this tool in cases where you'd like to share a realistic "design comp" with internal business owners and stakeholders to get feedback and approval before spending the effort to build it. For dialogs that only require simple answers (e.g. text answers) for each step in the conversation, the Chatdown tool might not provide any more value than the jot-down technique described previously. But for dialogs that will include things like dropdown controls, radio boxes, buttons, and carousel cards, the Chatdown tool is very helpful in capturing that. Note: Not all channels support rich dialogs (i.e. dialogs with non-text conversation) so you'll want to consider that when designing your dialogs.
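As a minimal sketch, the teller dialog jotted down earlier could be captured in a Chatdown .chat file something like the following (the attachment line is an assumption, referencing a hypothetical nameForm.json Adaptive Card file):

```
user=Russ
bot=Angie

bot: Hello, I'm Angie, your Contoso Bank teller.
bot: What can I help you with today?
user: I'd like to open an account
bot: Super, let's do that! I'll need to ask a few questions first.
bot: What is your name?
[Attachment=cards/nameForm.json adaptivecard]
```

Running a command along the lines of `chatdown teller.chat > teller.transcript` then produces a .transcript file you can open in the Bot Framework Emulator, letting stakeholders see the mocked conversation as it would actually render.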
An important aspect of an intelligent bot is its ability to answer questions users will likely ask. Many organizations already maintain various collections of questions and answers (QnA). During the workshop, these collections of QnA will be imported into the QnA Maker service and turned into a knowledgebase that the bot will then draw on to provide answers to questions users ask.
The previous topic discussed what are called multi-turn dialogs where conversation flows from user to bot and bot back to user multiple times before the dialog completes. QnA interactions are called single-turn conversations where the user asks a question and the bot provides an answer to end that dialog. The Virtual Assistant Template handles single-turn and multi-turn dialogs differently so the planning for each is different. It's these simple question and answer pairs we want to focus on in this topic.
During the Planning phase, you will need to identify all the relevant QnA content that should be imported during the workshop. To get an understanding of what types of content can be imported by the QnA Maker service, take a look at the QnA Maker documentation here
Once you've identified all the relevant QnA content, the next step is to curate that content, and here's why. It's very common for QnA content to be written in a very technical manner that's totally appropriate for a FAQ page but isn't the way people actually talk. This creates a problem in at least two ways.
First, the technically worded questions are not at all like the way an average person would ask a question. Users are not domain experts and they won't form questions with the same technical precision and specificity you'll typically find in traditional FAQs. This creates a mismatch and will result in poor recognition results.
Second, the answers you'll typically see in traditional QnA content can be long-winded and overly technical, bordering on legalistic. Just as it would be out of place for a call center representative to respond to a caller's question with a lengthy, overly technical response, the bot needs to respond with a clear and conversational answer.
So curation of QnA content involves coming up with alternative ways that users are likely to ask each question in the QnA content. Find some way to capture a list of these alternative questions so that they can be manually added during the workshop after the content has been imported. For example, you might create an Excel spreadsheet that lists the original question followed by a list of alternatives. Then, after importing that content, you can easily go through each imported question, find it in the spreadsheet, and add all the alternatives listed.
Curation will also involve reviewing each and every answer to see if it needs to be shortened and reworded to be more conversational without losing meaning or correctness. This can be challenging but it's absolutely necessary to provide the best experience to the user.
Actions carry out the intent of the scenarios. The goal of the dialog for a scenario is to gather enough information to be able to carry out its intent which you can think of as an action. These actions will be integration points between the conversational AI of your bot and the application backend that will execute that task.
Some scenarios will also require additional external context from a database or web service to get the information required to guide the next step in the conversation. These supporting activities are also integration points between the conversational AI of your bot and the application backend that will provide that intermediate information.
Early on in the development of your bot it's wise to simply mock these integration points so that you can take an agile approach and quickly iterate over your scenarios as you tune the conversation. Integration points are always challenging and generally require a lot of effort to implement, so mocking them will allow you to concentrate on getting the conversation right without requiring the integration points to be constantly reworked.
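One common way to structure that mocking, sketched below in C# with hypothetical names (neither the interface nor the classes are part of the Virtual Assistant Template), is to hide the backend behind an interface so the dialog code never knows whether it's talking to a mock or the real system:

```csharp
using System.Threading.Tasks;

// Hypothetical integration point for the bank-teller scenario.
public interface IAccountService
{
    Task<string> OpenAccountAsync(string name, string email);
    Task<decimal> GetBalanceAsync(string accountId);
}

// Canned-data mock used while iterating on the conversation; it returns
// instantly, so dialog changes can be tested without a live backend.
public class MockAccountService : IAccountService
{
    public Task<string> OpenAccountAsync(string name, string email)
        => Task.FromResult("ACCT-0001");

    public Task<decimal> GetBalanceAsync(string accountId)
        => Task.FromResult(1234.56m);
}
```

When the real backend is ready, you register the production implementation in place of the mock (for example, via the dependency injection container) and the dialogs are unaffected.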
Although it's not necessary for the workshop, reviewing the bot design guidelines here will help beginners learn how to design bots that align with best practices and lessons learned by the Bot Framework team over the past several years. These guidelines can be mastered after the workshop and used to finalize the bot's user experience.
Preface - Conversational AI Fundamentals and Motivation
Step 1 - Create Your Virtual Assistant
Step 2 - Customize your Virtual Assistant
Step 3 - Update QnA Maker Knowledgebase
Step 4 - Add a Custom Skill to Your Virtual Assistant
Step 5 - Implement Your Skill's Core Scenario
Step 6 - (Optional) Add Multiturn QnA Prompts to Your Assistant
Step 7 - (Optional) Add a Built-in Skill to Your Virtual Assistant
Step 8 - Wrapping Up Build Stage
The first thing we do in the workshop is to get an understanding of the big picture. Your workshop guide/instructor will take a few minutes to go over the fundamentals of Conversational AI and an overview of the Microsoft Bot Framework so you'll understand key concepts and the various components of the Virtual Assistant architecture. You can also take a look at overviews of Conversational AI, Conversational AI Tools, Azure Bot Service, Microsoft Bot Framework, Virtual Assistant, Bot Framework Solutions, and last but not least, the Virtual Assistant Template.
If you haven't already installed the Bot Builder prerequisites, follow the instructions here and then return here to Step 1 to continue with the workshop.
The first step in developing your bot is to create an enterprise baseline for your new Virtual Assistant bot. When you finish this step, you'll have a generic Virtual Assistant that's ready to be customized.
Throughout this workshop you'll be using the Bot Framework Solutions documentation. It provides task-based, step-by-step instructions for building your bot. You'll be pointed to the first step, and you'll progress through and complete each task by following the step navigation links in the lower right-hand portion of each page.
So to create, deploy, and run your Virtual Assistant baseline, follow the steps here; your cue to return to this documentation will be when you see the "Next Steps" navigation link.
Notes:
- During the process of creating and deploying your bot you'll be asked to run PowerShell Core (pwsh.exe), not the Windows PowerShell you've traditionally run. To launch it, type pwsh.exe in the Windows search field, or press Windows Key + R and type pwsh.exe.
- For development environments, use Debug packages when deploying your bot, not Release.
- It can be helpful to author and save the CLI commands in something like Notepad since you'll no doubt run those commands over and over as you begin your bot journey. Keep in mind those commands will have secrets in them, so guard that file accordingly.
Although not strictly necessary, it's a good idea to update the NuGet packages for each project in the solution (including the Skills projects you'll create later). After launching Visual Studio and creating a New Project from the Virtual Assistant template (Steps 1-4 under the Create your assistant section), update the NuGet packages with these steps:
- Open NuGet Manager for Assistant project by right-clicking the project and choosing Manage NuGet packages…
- Select the Updates tab and then type in "Microsoft.bot" in the Search field to filter out all non-Bot Framework packages
- Update these packages first and in this order to avoid import errors: Schema, Connector, Builder, Configuration
- Select the remaining packages and click Update
- Make sure you accept any EULAs that pop up.
- Lastly, build your application to ensure that all your dependencies are built and to check for errors before moving on. Do this by right-clicking the Solution and choosing Build Solution
As you learn about developing bots, it's very helpful to know what files changed, and how, after the various bot tools are run (more about those tools later). The easiest way to do that is to create a local Git repo in Visual Studio using the Add to Source Control command in the lower right corner of the Visual Studio IDE and then choosing Git in the popup list. You may see a warning dialog that says "The current solution has projects that are located outside the project folder. These projects will not be source controlled in the Git repository…". You can safely ignore this warning; all your projects will be added to the local Git repo.
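If you prefer the command line, the same local repo can be created with plain git. The sketch below runs against a scratch folder for illustration; in practice you'd run the git commands from your solution folder:

```shell
# Create a scratch "solution" folder for illustration only.
mkdir -p /tmp/va-demo && cd /tmp/va-demo
echo "placeholder" > VirtualAssistant.sln

# Initialize a local repo and snapshot the baseline, so later runs of the
# bot tools show up as clean diffs against this commit.
git init
git add -A
git -c user.name=demo -c user.email=demo@example.com \
    commit -m "Baseline Virtual Assistant before running bot tools"
git log --oneline
```

After each tool run (e.g. update_cognitive_models.ps1), a quick `git status` and `git diff` will show you exactly which files the tool touched.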
Now that you've got a generic Virtual Assistant enterprise baseline working, you're ready to customize it for your scenario. For this workshop, the only customization step we'll do is to edit the greeting. The customization of editing responses involves localizing resource strings, which aren't leveraged in this workshop, so we'll skip that. Customizing cognitive models is covered elsewhere in this workshop, so we'll skip that too. To customize the greeting, complete the "edit your greeting" step here; your cue to return to this documentation will be when you see the "Next Steps" navigation link.
The deployment steps you followed earlier created several services that the Virtual Assistant seamlessly stitches together to form a synergistic bot solution. One of those was a QnA Maker service called "faq". In this step, you'll modify that "faq" knowledge base (KB) by removing the default content and replacing it with QnA content of your own.
Navigate to the QnA Maker Portal at https://www.qnamaker.ai, sign in, and then follow these steps to change the name of the knowledge base from "faq" to something more specific to your bot, like "<yourBotName> faq", so it will be easier to identify later when you have more knowledge bases. The knowledge base name is the first setting you'll see on that page. For now, don't worry about changing any of the other settings in those instructions and proceed to the next step.
To delete the default QnA content, switch to the Edit page, and follow the instructions for deleting content you don't want here.
On the Edit page, select Add QnA pair, to add a new row to the knowledge base table as you can see here.
Note: QnA Maker can import FAQ web pages, manuals, .pdf files, and more as a means of quickly populating a QnA knowledge base. Although this may sound attractive, there is an ugly side to populating your QnA KB this way. Traditional Q and A content is typically written very technically for both questions and answers. Since people don't generally speak with that same technical tone/voice, the initial results of testing an imported Q and A source can be poor. Not because QnA Maker performed poorly, but because the questions were written poorly for this medium/scenario.
If you want to import QnA content, plan on looking over every question and answer pair and making the necessary edits so the questions match what real people might ask and the answers are simple to understand.
To add QnA content, select the Settings page and then, in the Manage knowledge base section, enter the URL of the content you want to import in the URL field and click Add URL. Repeat those steps for every source you'd like to add to the KB. When you're finished, click Save and train and then click Publish.
QnA Maker is really good at inferring what questions users are asking, even if they don't ask exactly how the questions were entered in QnA Maker. Sometimes though, questions can be asked in such a different way that you'll need to add alternate forms of a question so QnA Maker will be able to map them to the answer. To add alternate questions, follow the instructions here.
Adding metadata to a question and answer set allows your client application to request filtered answers. This filter is applied before the first and second rankers are applied. Once metadata is added to a question-and-answer set, the client application can:
- Request answers that only match certain metadata.
- Receive all answers but post-process the answers depending on the metadata for each answer.
To add metadata to questions, follow the instructions here.
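For example, once a pair is tagged with metadata such as category:accounts, a client application can request only matching answers by including a strictFilters array in the body it POSTs to the knowledge base's generateAnswer endpoint. A sketch of that request body (the question and metadata values are placeholders):

```json
{
  "question": "How do I close my account?",
  "strictFilters": [
    { "name": "category", "value": "accounts" }
  ]
}
```

Pairs without a matching category tag are excluded before ranking, which is what makes metadata useful for partitioning one knowledge base across multiple audiences or products.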
Note: Over time, as you develop your assistant, you might decide to add additional QnA Knowledgebases (or refactor your current Knowledgebase into multiple Knowledgebases) and the instructions for how to do that are located here in the Add an additional knowledgebase section.
Active Learning is a compelling feature that you can read about here and turn it on by following the instructions here.
You've been making changes to the knowledge base in the QnA Maker portal. In order to see those changes in the running bot, you'll need to run an update script to regenerate the Dispatcher.
Run the following command from within PowerShell Core (pwsh.exe) in your assistant's project directory:
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
Now test your bot using the Bot Framework Emulator.
Although it's not a concern during the workshop, many organizations will need to know how to set up QnA Maker so that several people can collaborate on a single QnA Maker KB. Instructions for doing that can be found here.
A key feature of the Virtual Assistant (VA) is its Skills-based architecture. Skills are the heart of a VA bot and they are what gives a VA bot its behavior. Skills are the unit of modularization for VA bots. Typically, a bot will have a single core skill and one or more companion skills that complement the core skill and create a richer, more knowledgeable bot.
In this step, you'll create a custom skill that you'll use to implement your bot's core skill, following the same style of task-based instructions you followed when you created the Assistant, with navigation links in the bottom right-hand corner of each page taking you to the next step. So to create, provision, connect, and test your Skill, follow the steps here; your cue to return to this documentation will be when you see the "Next Steps" navigation link.
Note: You should have already installed the VA Skill template in the Prerequisite Steps, but if not, go there now and follow the installation steps described there.
Also Note: When deploying your bot, use Debug packages, not Release.
Follow the same steps for updating the NuGet packages as you did for the Assistant earlier.
The previous step added a custom skill to your virtual assistant, but that was only a baseline skill that doesn't understand how to do anything useful or specific to your business. In this step we'll teach your custom skill to understand what users are saying so it can act on that understanding.
Understanding what users are saying is the job of Azure's Language Understanding (LUIS) service. This service allows your skill to understand what users mean (i.e. their intent), no matter how they say it. Part of what happened in the previous step when we deployed your core skill was the creation of your skill's LUIS application. The task now is to add a language model to your skill's LUIS application that can recognize when the user is asking for the core scenario of your bot.
To add your core LUIS Intent, you'll need to add an Intent for the skill's core scenario following the instructions here and, if applicable, add Entities following the instructions here.
You can test out the LUIS model in the portal to make sure it's recognizing utterances correctly and when you have that working you can move to the next step.
After you've added your bot's first core intents in the luis.ai portal, follow the steps below to update your Skill and Assistant to include the new Intent you created in the previous step.
- To update your skill from changes made in the luis.ai (or luis.azure.us) portal, run the following command from the Skill's project directory to update the .lu file (see "Update your local LU files for LUIS and QnAMaker" section here for more details).
.\Deployment\Scripts\update_cognitive_models.ps1 -RemoteToLocal
- To make the new Intents visible to the botskills command and eventually your assistant, publish your skill using the command below or from Visual Studio (i.e. right-click the skill project in the Solution Explorer and select Publish)
.\Deployment\Scripts\publish.ps1 -name <skill's app service name> -resourceGroup <rg name> -projFolder .\
- To update the assistant's dispatcher to reflect the changes made to the skill, run the following command from the Assistant's project directory (see "Update a Skill to your Virtual Assistant" section here for more details)
botskills update --botName <assistant's name> --remoteManifest "https://<skill name>.azurewebsites.net/api/skill/manifest" --cs --luisFolder "<full file path to skill's project folder>\Deployment\Resources\LU\en-us"
- Run this command from the Skill's project directory to update the <skill's name>.cs file
luisgen .\Deployment\Resources\LU\en-us\<skill's name>.luis -cs <skill's name>Luis -o .\Services
Note: Copy the botskills connect command you used earlier in Step 4 when you added your Skill to your Assistant, then change the word "connect" to "update" to create the botskills update command.
We're finally where the rubber meets the road and we're ready to code what action the skill should take in response to recognizing the intent of the user. In the planning phase you designed the core conversation your skill needs to be able to handle. In this step we're going to create a ComponentDialog that uses a WaterfallDialog to handle the conversation flow of your core scenario. To make life easy, we'll implement this core scenario using a dialog accelerator and step-by-step instructions that can be found here.
[Important!!!! The Virtual Assistant Template and Bot Framework tools have not yet been updated to respect multiturn follow-on prompts, so the following is a temporary workaround and some follow-on prompts will not work properly. You can use this topic to explore follow-on prompts in your assistant, but the SDK will eventually be updated and obviate this workaround. Bottom line: this code should be removed before the assistant is deployed to production or once the SDK has been updated. When the SDK is updated, this section will be updated with the proper instructions on how to incorporate follow-on prompts]
In this next topic we'll use follow-up prompts to create a multiturn QnA conversation as described here. The new QnA Maker follow-on prompts do not automatically appear in bot clients, so you must add code to surface them in your new assistant; this topic shows you how. Follow the steps described here, skipping the first step related to creating and deploying your assistant since you've already done that.
Adding a built-in skill is an optional part of the workshop since the built-in skills are not always a good fit for every bot. The Microsoft Bot Framework includes several built-in skills that can be added to your bot, which can be found here. If you'd like to get a feel for how built-in skills work, consider adding the To Do Skill since it's useful and straightforward.
Follow these steps to install the To Do Skill:
- Browse to the Bot Framework Solutions Repository here and clone it to your development PC by clicking the Clone or download button and then choose Download ZIP.
- Extract the ZIP to your local hard drive and then copy the botframework-solutions-master\skills\src\csharp\todoskill folder and paste it into the root folder of your Virtual Assistant solution (i.e. the folder that holds the .sln file)
- Open your Virtual Assistant solution in Visual Studio and right-click the solution in the Solution Explorer and choose Add | Existing Project… and add todoskill\todoskill\ToDoSkill.csproj
- Add the todoskill\todoskilltest\ToDoSkillTest.csproj the same way you did in the last step
- Now we'll deploy the todoskill. Open PowerShell Core 6 and run the following command to temporarily set the execution policy to Bypass for the current PowerShell session. If this is not done, you'll get an error for attempting to run a script that is not digitally signed. When the PowerShell session ends, the setting reverts to its previous value.
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
- Deploy the todoskill by opening PowerShell Core 6, changing directory to the todoskill project folder (todoskill\todoskill), and running the following command:
.\Deployment\Scripts\deploy.ps1
| Parameter | Description | Required |
|---|---|---|
| name | Unique name for your bot. By default this name will be used as the base name for all your Azure Resources and must be unique across Azure, so ensure you prefix it with something unique and not MyAssistant. | Yes |
| location | The region for your Azure Resources. By default, this will be the location for all your Azure Resources. | Yes |
| appPassword | The password for the Azure Active Directory App that will be used by your bot. It must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character. If using an existing app, this must be the existing password. | Yes |
| luisAuthoringKey | The authoring key for your LUIS account. It can be found at https://www.luis.ai/user/settings or https://eu.luis.ai/user/settings. | Yes |
- Now follow the same instructions as for adding a custom skill, skipping ahead to the Test Your Skill step found here
Note: The most recent Virtual Assistant Template and Bot Tools now create a production endpoint that uses a resource key instead of the old behavior that required you to follow the steps below. If you are using version 4.5.4 or later of the Virtual Assistant, you can verify that the production endpoint is correctly configured to use the subscription key by publishing the LUIS app in the LUIS portal and confirming that the associated key is not the LUIS Authoring Key.
Access to LUIS application endpoints is metered by endpoint keys. By default, the prediction endpoint of your LUIS applications is configured to use the Authoring Key, which has a 1,000-call-per-month query limit. When building quick bot demos that will only see limited use, the Authoring Key is fine. For production development, you'll need to:
- Create the endpoint key
- Assign the resource key to the LUIS app
- Modify your Assistant and Skills to use the prediction endpoint
If you don't change your LUIS endpoint, you run the risk of exhausting your quota and hitting HTTP 403 or HTTP 429 errors.
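Even after switching to a resource key, production callers should treat HTTP 429 (and quota-related 403) responses as retryable. Here is a minimal, library-agnostic sketch of exponential backoff; the `fake_luis_query` function is a hypothetical stand-in for your real prediction call, not part of any LUIS SDK:

```python
import time

def query_with_backoff(query_fn, max_attempts=4, base_delay=0.5):
    """Call query_fn, retrying on throttling/quota status codes with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = query_fn()
        if status not in (403, 429):
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status, body

# Hypothetical prediction call that is throttled twice, then succeeds.
calls = {"n": 0}
def fake_luis_query():
    calls["n"] += 1
    return (429, None) if calls["n"] < 3 else (200, {"topIntent": "AddToDo"})

status, body = query_with_backoff(fake_luis_query, base_delay=0.01)
print(status, body["topIntent"])  # 200 AddToDo
```

Backoff only papers over brief bursts; if you hit these errors steadily, the fix is the endpoint-key migration described above.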
A very common deployment scenario for bots is to embed a Web Chat control in a web page. The Azure portal makes this easy by providing an HTML `<iframe>` snippet that can be copied and pasted into a web app. The problem with that approach is that it exposes your bot secret, which would allow any client to connect to your bot. There is an alternative that allows you to embed the Web Chat control securely, and instructions for doing that can be found here.
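The general pattern behind the secure approach is to keep the Direct Line secret on your server and exchange it for a short-lived token that the page's Web Chat control uses instead. A minimal sketch of building that server-side exchange request against the standard Direct Line token-generation endpoint (the secret value shown is a placeholder):

```python
import urllib.request

DIRECT_LINE_TOKEN_URL = "https://directline.botframework.com/v3/directline/tokens/generate"

def build_token_request(direct_line_secret: str) -> urllib.request.Request:
    """Build the server-side POST that exchanges the Direct Line secret for a
    short-lived token; the secret itself never reaches the browser."""
    return urllib.request.Request(
        DIRECT_LINE_TOKEN_URL,
        method="POST",
        headers={"Authorization": f"Bearer {direct_line_secret}"},
    )

req = build_token_request("YOUR_DIRECT_LINE_SECRET")  # placeholder secret
print(req.method, req.full_url)
```

Your server would send this request (e.g. with `urllib.request.urlopen`), read the token from the JSON response, and hand only that token to the page hosting Web Chat.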
Setting up DevOps as early as possible is critical to modern development, and you can establish a core DevOps Continuous Deployment baseline by following the instructions here. As your bot development progresses, you'll need to mature your DevOps pipeline to target new deployment environments that might not exist early on. Having a core DevOps baseline will allow you to incrementally automate bot-specific processes in a pragmatic, disciplined fashion.
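To illustrate what such a baseline might look like, here is a minimal Azure Pipelines YAML sketch that builds a .NET bot project and deploys it to an existing App Service. The service-connection and app names are placeholders, and the pipeline produced by the linked instructions will differ in the details:

```yaml
trigger:
  - master

pool:
  vmImage: 'windows-latest'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build bot solution'
    inputs:
      command: 'build'
      projects: '**/*.csproj'

  - task: DotNetCoreCLI@2
    displayName: 'Publish bot package'
    inputs:
      command: 'publish'
      publishWebProjects: true
      arguments: '--output $(Build.ArtifactStagingDirectory)'

  - task: AzureWebApp@1
    displayName: 'Deploy to App Service'
    inputs:
      azureSubscription: '<your-service-connection>'   # placeholder
      appName: '<your-bot-app-service>'                # placeholder
      package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
```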
https://channel9.msdn.com/Series/DevOps-for-the-Bot-Framework which has pointers to all stages.
https://www.microsoft.com/developerblog/2017/01/20/unit-testing-for-bot-applications/
One of the most critical phases in the life of a bot is the Evaluation stage, and a key part of that stage is analyzing bot usage. Although this workshop ends at the Build stage, analyzing bot usage can also be a useful tool during Usability Testing (i.e., testing with representative users or internal project stakeholders and business owners), so we include a discussion of it here.
- Bot Analytics
- Bot Framework Power BI Template Overview
- Bot Framework Power BI Template Download
- Custom Telemetry
- Active Learning for QnA Maker and LUIS
The Virtual Assistant Template Solution will continue to be improved and evolve, so at some point in the future you may want or need to update some aspect of the template solution. For example, changes in one or more of the Azure services used by the solution could require changes to the deployment scripts. If you need to update any aspect of the overall Virtual Assistant Template Solution, you can look at the Sample Project for the Assistant Template here or the Skills Template here to see what's changed. These sample projects are generated from the most recent templates, so you can use them to compare files, see what's changed, and merge in any of the changes you need.
Testing your bot is a critical part of development. During the workshop we only touched on the basics of testing. The following resources will drill down on the topic and provide a more advanced understanding.
- Testing and debugging guidelines, Trouble Shooting
- Bot emulator and Wiki
- Conversation Transcript
- Unit Testing and Load Testing Bots (coming soon)
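Until the dedicated unit-testing guidance lands, the core idea is simple: factor your bot's turn logic into a plain function so it can be asserted on directly, which is the same idea behind the Bot Framework's own test helpers (such as the C# DialogTestClient). A framework-free sketch, with a hypothetical `handle_turn` function standing in for a real turn handler:

```python
def handle_turn(utterance: str) -> str:
    """Simplified stand-in for a bot's turn handler: route an utterance to a reply."""
    text = utterance.strip().lower()
    if text in ("hi", "hello"):
        return "Hello! How can I help?"
    if text.startswith("add "):
        return f"Added '{utterance.strip()[4:]}' to your to-do list."
    return "Sorry, I didn't understand that."

# Turn-level unit tests: assert on replies for representative utterances.
assert handle_turn("Hello") == "Hello! How can I help?"
assert handle_turn("add buy milk") == "Added 'buy milk' to your to-do list."
assert handle_turn("???") == "Sorry, I didn't understand that."
print("all turn tests passed")
```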
Like the Test phase, we only touched on the basics of publishing a bot. In the very early stages of a bot's development (as is the case in this workshop) you will use Visual Studio to publish your bot, since it's quick, easy, and necessary to connect a bot's skill to the assistant. As development progresses, you'll want to add more discipline and rigor to your process and adopt Azure DevOps so deployments are fully automated and follow DevOps best practices. The links in this section will help you with both aspects of deployment.
Managing your bot will be an ongoing activity with several different aspects, and the links below will help you understand and master key aspects of this phase of the bot development master plan.
- Manage using Azure Portal and Configure Bot Setting
- Enable channels
- Integrate with other Cognitive Services
Like the Manage phase, the Learn phase is an ongoing activity that balances the competing priorities of progressing bot development and learning and applying new bot capabilities. Striking an effective balance between learning and doing is best achieved by "just-in-time" learning followed by immediate application, which is easier said than done. If you'd like to try this "just-in-time" approach, review the content from the links in this section to get a high-level understanding of the art of the possible with the Microsoft Bot Framework and the Azure Bot Service. You can then plan which sprints will drill into the details of this content and apply those learnings pragmatically as your bot development progresses.
- Read all our docs on Bot Framework Solutions, Azure Bot Service, LUIS, and QnA Maker
- Leverage Learning Journeys to drill into bot basics (the training does not use the VA Template)
- Have questions or need help? Use one or more of the options discussed here
- Report bugs or issues for the [core SDK here](https://github.com/microsoft/botframework-sdk#issues-and-feature-requests), Bot Builder tools here, Bot Framework Solutions here, and other options here.
Bot Framework Solutions Repository for Conversational Assistant
Publishing a Virtual Assistant using Visual Studio is slightly different from what most developers are used to with the Visual Studio Publishing Wizard, because the App Service has already been created. The flow changes so that the existing App Service is selected in the publishing wizard rather than created, as is normally the case. Here are the steps.
- In Visual Studio, right-click the Virtual Assistant or Skill in the Solution Explorer and choose Publish
- Make sure App Service is selected then choose Select Existing and then click the Advanced … link
- Select Debug for the Configuration setting, expand File Publish Options, and check Remove additional files at destination to ensure there are no leftover files after any future deployment, and then click Save.
- Now click Publish
- Select your Azure account and subscription, expand the App Service folder that contains the Virtual Assistant App Service you want to publish to, and then click OK to begin the publishing process. You can open the View | Output window to watch the progress and confirm that it completes successfully