Jan 17 Webinar: Integrating Bot Framework Skills with Power Virtual Agents

We are excited to announce that modular, reusable Bot Framework Skills can now be integrated with Power Virtual Agents.

On January 17th at 12:00pm PST, Senior Program Managers Pawan Taparia and Murali Kumanduri from the Power Virtual Agents team will host a live webinar walking through what Bot Framework Skills are, how to integrate Bot Framework Skills with Power Virtual Agents, and what was just updated with the SDK version 4.7 launch.

If you are not familiar with Bot Framework Skills, they are reusable conversational building blocks covering common conversational use cases, enabling you to add extensive functionality to a bot within minutes. Skills include language understanding (LUIS) models, dialogs, and integration code, and are delivered as source code so you can customize and extend them as required. They can be added to anything from a complex Virtual Assistant to an enterprise bot seeking to stitch together multiple bots within an organization.


 

When: January 17th 12PM PST

 

About our presenters:

Pawan Taparia

Senior Program Manager, Microsoft Corporation. Building intelligent conversational virtual agents using Power Virtual Agents. Creative and self-driven product manager with over 11 years of experience building and shipping consumer and enterprise technical solutions across retail, academia, healthcare, and technology.
* Experienced leader with a track record of mentoring and guiding teams across disciplines to run outcome-driven development, delivering software solutions end users and partners want
* Proven ability to innovate across all stacks of technology, from embedded hardware and mobile apps to cloud
* Strong execution and prioritization skills, delivering outcomes spanning product teams and divisions on time and with high quality
* Deep analytical skills with experience building data models to influence product direction


 

Getting your first Bot Running

 

I wanted to play with Power Virtual Agents Skills and needed to build a bot by hand. I must be the most unlucky person in the world, as I seemed to hit issue after issue getting a bone-stock, SDK-based Azure bot up and running with the Azure SDK.

The issues I ran into:

  1. The referenced solution on GitHub fails to compile with the error: "The current .NET SDK does not support targeting .NET Core 3.0"
  2. The bot solution on GitHub fails to run with the error 'Service EndPoint for CosmosDB is required. (Parameter 'CosmosDBEndpoint')'
  3. The bot solution on GitHub fails to run with an exception from the TranscriptLoggerMiddleware()
  4. Despite being able to run and interact from the VS debugger, the Bot Framework Emulator refuses to bind to the local bot
  5. The Bot Framework Emulator refuses to bind to the published bot


The issues, with the answers I found:

  1. The referenced solution on GitHub fails to compile with the error: "The current .NET SDK does not support targeting .NET Core 3.0"

Resolution: Upgrade to Visual Studio 2019.

  2. The bot solution on GitHub fails to run with the error 'Service EndPoint for CosmosDB is required. (Parameter 'CosmosDBEndpoint')'

Resolution: Download the sample from the Azure Portal rather than using the SDK.

Other samples do require this, and directions can be found here: https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-state-azure-cosmosdb?view=azure-bot-service-3.0

  3. The bot solution on GitHub fails to run with an exception from the TranscriptLoggerMiddleware()

Resolution: Download the sample from the Azure Portal rather than using the SDK.

  4. Despite being able to run and interact from the VS debugger, the Bot Framework Emulator refuses to bind to the local bot

Resolution: Supply the Application ID and password to the emulator from the bot's JSON configuration.

  5. The Bot Framework Emulator refuses to bind to the published bot

Resolution: In the emulator settings, set the path to the ngrok network tunneling software.

Power Platform Sessions at Community Summit Barcelona

 

Working on a list of Microsoft sessions for Community Summit in Barcelona… please note this is still a draft while I confirm the speakers!

 


 

From April 9-12, the Power Platform product team will be in Barcelona for Community Summit, which brings users and partners of the Microsoft Business Applications platform together to learn, collaborate, and connect around Dynamics 365 and Power Platform applications.

Free of sales and marketing gimmicks, this event is designed and curated by the community to share real learnings, actionable practices, and valuable lessons, delivered by experts in the industry, so you can put your new knowledge to use the very next day. Enjoy Barcelona in springtime and get ready for four days of value-packed learning, connecting with like-minded peers, lunches and a welcome reception on the house, and the most fun you've ever had at a Community Summit.

To register please see: https://www.summiteurope.com/

 

Power BI Sessions

 

Building enterprise-grade models with Power BI Premium

Power BI Premium enables you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will dive deep into exciting new and upcoming features, including aggregations for big data that unlock petabyte-scale datasets that were not possible before. We will uncover how the trillion-row demo was built in Power BI on top of HDI Spark. The session will focus on performance, scalability, and application lifecycle management (ALM). Learn how to use Power BI Premium to create semantic models that are reused throughout large enterprise organizations.

 


Kasper de Jonge is Principal Program Manager on the Power BI team at Microsoft. Over the past decade at Microsoft, he has developed features for Power BI, Power Pivot, and other Analysis Services products, such as the Tabular model. He is frequently a speaker at conferences such as Microsoft Data Insight Summit, Ignite, SQLPASS, and SQLSaturday, and he is the creator of https://www.kasperonbi.com, one of the leading Microsoft Power BI blogs. He lives in the Netherlands.

 

Microsoft Power BI: Delivering business value with AI

Learn about the latest AI capabilities in Power BI and the upcoming roadmap with a focus on the latest changes in Q&A, AI visualizations, and AutoML. Learn how these innovations can deliver business value.


Justyna Lucznik


is a Program Manager on the Power BI team focusing on AI features across the product and service, including the AI features announced at Microsoft Ignite 2018 in Power BI Desktop, the Power BI service, and Power Query.

 

Working with Data in the Power Platform

Data is critical for the success of every organization. The Microsoft Power Platform (Power BI, PowerApps, and Flow) provides a suite of tools to measure, act, and automate processes around data. This foundational session gives you a 360-degree view of how to connect to data of any source, shape, and size and get it ready to be used within all the Power Platform tools and experiences.

 


Miguel Llopis
works as a Program Manager in the Power Query team at Microsoft.
Power Query delivers market-leading Data Connectivity and Data Preparation capabilities for Power BI, Excel and Analysis Services.

 

Power Apps Sessions

 

What is new and exciting for Power Apps by Evan Chaki or Clay Wesener

This session will look at the large milestones just shipped and the future of the maker experience in the Business Applications platform, including details on how Flow and canvas PowerApps come together.

Alternatively: Marco Rocca, "PowerApps and Flow – Best Practices for Supporting Power Automate"

 


Evan Chaki

Leading organizations into profitable ventures through calculated strategies.
Expertise in strategic planning, client relationships, operational efficiency, project management and team achievement. Senior manager with a business and technology background.
Specialties: CRM, Business Process Automation, Architecture, Business Intelligence, Process Improvement

 

Introduction and Roadmap for AI Builder, the no-code AI experience of the Power Platform

AI Builder is the new artificial intelligence capability in PowerApps and Flow (Power Automate). This will be a session with numerous demos where you will learn how to add artificial intelligence to your applications and processes without needing any programming or data science background.

The session will be led by Joe Fernandez from Microsoft.


Joe Fernandez
Program Manager, Microsoft. Joe is an amazing community advocate who single-handedly answers most of the forum questions, and a Program Manager on the AI Builder team at Microsoft.

 

 

Introducing Power Apps portals for external users

Learn how Microsoft Power Apps customers can create websites over data stored in Common Data Service that can be accessed by external users with a wide variety of identities, including personal accounts, LinkedIn, and other Azure Active Directory organizations, as well as allowing anonymous browsing of content.

 

Dileep Singh

is a Senior Program Manager on Dynamics CRM. He has a wide scope and knowledge of business solutions and applications. In his current position at Microsoft, Dileep is responsible for CRM Customer Service and portals. In addition, he has worked in the past on a variety of areas such as SharePoint integration and application infrastructure. He holds a Bachelor of Technology degree from NIT Allahabad, India, and has been working at Microsoft for the last six years.

 

Power Automate Sessions

 

Introducing UI flows to Power Automate

This session will introduce you to the next level of application automation with UI flows. UI flows are the robotic process automation (RPA) feature within Microsoft Power Automate that helps you save time and effort, enabling anyone to automate manual business processes across all on-premises and cloud apps and services.

 


 

Stephen Siciliano

 is a Principal Group Program Manager at Microsoft. He believes that there is not a single User Interface that cannot be improved. His passion is to identify and execute on such improvements.  

Stephen enjoys being outdoors, hiking and running trails. He enjoys traveling and considers himself a transit enthusiast. 

 

Intelligent automation with Power Automate by Stephen Siciliano

 


 

Stephen Siciliano

 is a Principal Group Program Manager at Microsoft. He believes that there is not a single User Interface that cannot be improved. His passion is to identify and execute on such improvements.  

Stephen enjoys being outdoors, hiking and running trails. He enjoys traveling and considers himself a transit enthusiast. 

 

 

Power Virtual Agents Sessions

 

Getting Started with Power Virtual Agents by Charles Sterling

In this nearly all-demo session we walk through how to get started with Power Virtual Agents. If you haven't heard of Power Virtual Agents, it empowers teams to easily create powerful bots using a guided, no-code graphical interface, without the need for data scientists or developers. Power Virtual Agents addresses many of the major issues with bot building in the industry today. It eliminates the gap between the subject matter experts and the development teams building the bots, and the long latency between teams recognizing an issue and updating the bot to address it. It removes the complexity of exposing teams to the nuances of conversational AI and the need to write complex code.


Charles Sterling (Chuck) came to Microsoft from being a marine biologist working for the United States National Marine Fisheries, doing marine mammal research on the Bering Sea. He started out at Microsoft supporting Excel and moved through a couple of support teams to being an escalation engineer for Microsoft SQL Server. Taking his love for customers (and diving), Chuck moved to Australia as a product manager and developer evangelist for the .NET Framework. In 2008 he moved back to Redmond as a Visual Studio program manager, then joined the Power Platform group focusing on Power BI, and now continues his community passion, looking after the PowerApps influencers and MVPs.

 

 

 

Hands-On Labs

Creating your own bot with Power Virtual Agents

In this 2-hour hands-on lab we will build your very own bot! If you haven't heard of Power Virtual Agents, it empowers teams to easily create powerful bots using a guided, no-code graphical interface, without the need for data scientists or developers. Power Virtual Agents addresses many of the major issues with bot building in the industry today. It eliminates the gap between the subject matter experts and the development teams building the bots, and the long latency between teams recognizing an issue and updating the bot to address it. It removes the complexity of exposing teams to the nuances of conversational AI and the need to write complex code.

 


Charles Sterling (Chuck) came to Microsoft from being a marine biologist working for the United States National Marine Fisheries, doing marine mammal research on the Bering Sea. He started out at Microsoft supporting Excel and moved through a couple of support teams to being an escalation engineer for Microsoft SQL Server. Taking his love for customers (and diving), Chuck moved to Australia as a product manager and developer evangelist for the .NET Framework. In 2008 he moved back to Redmond as a Visual Studio program manager, then joined the Power Platform group focusing on Power BI, and now continues his community passion, looking after the PowerApps influencers and MVPs.

 

 

Power Apps AI Builder in two Hours

In this 2 hour hands on lab you will get an introduction to the four major AI model types and walk through how to get forms processing working in your very own Power Apps application!


Joe Fernandez
Program Manager, Microsoft. Joe is an amazing community advocate who single-handedly answers most of the forum questions, and a Program Manager on the AI Builder team at Microsoft.

 

 

Power BI & AI: Better Together

In this 2-hour hands-on lab, Justyna will walk the audience through how to easily take Power BI reports to the next level with the addition of AI.

 


Justyna Lucznik


is a Program Manager on the Power BI team focusing on AI features across the product and service, including the AI features announced at Microsoft Ignite 2018 in Power BI Desktop, the Power BI service, and Power Query.

 

AI Builder Object Detection Lab for Power Platform World Tour

Object detection

 

Object detection can be used to expedite or automate business processes in multiple industries. In the retail industry, it can expedite inventory management, allowing retail leaders to focus on on-site customer relationship building. In the manufacturing industry, technicians can use it to speed up the repair process by quickly pulling up the manual for a piece of machinery whose UPC/serial number isn't readily visible.

AI Builder object detection will allow companies of any size to add these capabilities for their own custom objects to their apps.

Object detection lets you count, locate, and identify selected objects within any image. You can use this model in PowerApps to extract information from pictures you take with the camera, or from images you load into an app.
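To make "count, locate, and identify" concrete, here is a minimal Python sketch of what consuming detection results can look like. The result shape (tag, confidence, bounding box) and the field names are illustrative assumptions, not the exact AI Builder output schema.

```python
from collections import Counter

# Hypothetical detection results; each entry is one detected object.
# Field names here are illustrative, not the exact AI Builder schema.
detections = [
    {"tag": "Green Tea Rose", "confidence": 0.94, "box": (10, 20, 80, 90)},
    {"tag": "Green Tea Mint", "confidence": 0.88, "box": (100, 18, 170, 92)},
    {"tag": "Green Tea Rose", "confidence": 0.91, "box": (200, 22, 270, 95)},
    {"tag": "Green Tea Cinnamon", "confidence": 0.31, "box": (5, 5, 20, 20)},
]

def count_objects(detections, min_confidence=0.5):
    """Count detected objects per tag, ignoring low-confidence detections."""
    return Counter(d["tag"] for d in detections if d["confidence"] >= min_confidence)

counts = count_objects(detections)
print(counts["Green Tea Rose"])  # 2
```

The confidence threshold mirrors what you will see in the Quick test step later: every match comes back with a level of confidence you can filter on.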

In this lab, we will build and train a detection model and build an app that uses the detection model to identify objects from available images.

Note: If you are building the first model in an environment, click on Explore Templates to get started.

 

Setup

Object detection maps objects to a Common Data Service Entity. To get started we need to create this entity.

Step 1. Log in to Power Apps.

Step 2. Navigate to Data, select Entities, then select New Entity.

 

Step 3. Create a new Entity.

Step 4. Add a field for inventory total named aib_inventorytotal, of type Whole Number.

 

Step 5. Navigate to Data to add our products

Step 6. Add our three products:

Green Tea Rose

Green Tea Cinnamon

Green Tea Mint

 

 

Step 7. Verify the data was entered into the entity

 

Exercise 1

In this exercise we will build and train the Object Detection model for three varieties of tea.

  1. In PowerApps maker, expand AI Builder and select Build. Select Object Detection.

  2. Name your model Green Tea Product Detection - Your Name and click Create.

  3. Your screen should now look like the image here.

  4. Notice the progress indicator on the left. Those are the steps we will follow now to build and train our model.

  5. We are now going to define the objects we are tracking. Click on Select object names.

  6. From the entity list, select Object Detection Product.

  7. Select the Name field and click Select field.

  8. Select the tea items and click Next.

  9. Notice the progress indicator has moved forward to the Add images step.

  10. Click Add images. Images can be found here.

  11. Select images from the set provided. You will need enough images to provide 15 samples for each type of tea we are tracking.

  12. Approve the upload of images. Click Upload images. After the upload completes, click Close.

  13. Click Next to begin tagging the images.

  14. Select the first image to begin tagging.

  15. Hover over the image, near an item you wish to tag. A dotted-lined box should appear around the item. It has been detected as a single item that can be tagged.

  16. Click on the item and select the matching object name.

  17. If the pre-defined selector is not accurate, as in the example below, you can drag the container to accurately tag the item.

  18. Do this for each item in the image and for each image in your set. When you have tagged all of the images you uploaded, click Done tagging in the top right of the screen.

  19. Once you have completed tagging, you will get a summary of the tags. If you haven't tagged enough for analysis, you will need to load and tag more examples.

  20. Once you have defined enough tags for training the model, you will be allowed to initiate the training. Click Next.

  21. Click Train.

  22. The training takes a few moments.

  23. Navigate to the saved model view and confirm your model has completed training.

  24. Select the model you just made.

  25. Select Quick test.

  26. Upload or drag and drop one of your test images to be analyzed.

  27. You will see the analysis and level of confidence for the match.

  28. Upload an image you know will not match. You will see the analysis and level of confidence for the match.

  29. Click Close.

  30. Publish your model.

 

Exercise 2

We will now create a canvas app you can use for detecting the items that have been trained in our model. The product will be detected from the image and you will be able to adjust on-hand inventory for the item.

  1. Navigate to Apps and select Create an app, then select Canvas. If asked, grant permission for the app to use your active CDS credentials.

  2. Select Blank app with Phone layout.

  3. On the maker canvas, select the Insert tab in the ribbon and expand AI Builder. Select Object detector to place this control on your app.

  4. Select the AI model you built.

  5. Resize the control to better use the space.

  6. Make sure to leave room for more items we will be placing soon.

  7. Play your app.

  8. Click on Detect.

  9. Choose one of your test images and click Open.

  10. The image will now be analyzed.

  11. Our model has detected each tea in the image.

  12. Exit the app player.

 

Bonus exercise: build out the data in your canvas app

 

  1. We will now select our data source. Select View from the ribbon and select Data Sources.

  2. Click + Add Data Source.

  3. Add the Common Data Service data source. Do not use Common Data Service (current environment).

  4. Select the Object Detection Products entity and click Connect.

  5. Close the Data pane.

  6. With Screen1 selected in the Tree view, navigate to the Insert ribbon tab, expand Gallery, and select Blank vertical gallery.

  7. Rename the gallery productGallery. You are renaming the gallery so you can reference it from your formulas.

  8. Resize and move the gallery control to fit the available space on the screen, leaving some space at the bottom to use later.

  9. Select the edit icon from the gallery.

  10. Add a label to the gallery.

  11. Click edit again and add a Text input box to the gallery. Resize and place it to line up with the label we've already placed. We will be updating inventory counts in this text box.

  12. Rename the Text input inventoryInput. You are renaming this control so you can reference it from your formulas.

  13. With focus on Screen1 in the Tree view, click Insert in the ribbon and select Button.

  14. Drag and move the button to the bottom of the screen, and double-click on it to edit the text. Rename it to Update.

  15. We will now add a user message to give the user confirmation their submission was accepted; we will define this logic later. With focus on Screen1, insert a label and drag it to the bottom of the screen.

  16. We will now add logic to the controls we've placed on the screen. Select the gallery and replace the Items formula with the following:

    'Object Detection Products'

  17. Select the label in your gallery. Replace the Text formula with the following:

    ThisItem.Name

  18. Select inventoryInput and replace the formula for Default with the following:

    LookUp('Object Detection Products', Name = ThisItem.Name).'Inventory Total'

  19. Select the other label (the one that shows at the bottom of the screen) and replace its Text with the following:

    usermessage

  20. You'll notice that area now looks blank. We will configure that message in our next step.

  21. Select the button control and replace the OnSelect with the following:

    ForAll(productGallery.AllItems, Patch('Object Detection Products', LookUp('Object Detection Products', Name = DisplayName), {'Inventory Total': Value(inventoryInput.Text)})); Set(usermessage, "Updated " & CountRows(productGallery.AllItems) & " items")

  22. Play the app again.

  23. Click Detect.

  24. Select an image to evaluate.

  25. Update the quantity for the correct product and click Update.

  26. The bottom label should now show a message.
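The button's OnSelect formula above can be mirrored in plain Python to show the update logic it performs: for every row in the gallery, look up the matching product by name, overwrite its inventory total, then report how many items were updated. The data shapes below are illustrative stand-ins for the 'Object Detection Products' entity and the gallery rows, not real CDS records.

```python
# Illustrative stand-ins for the 'Object Detection Products' entity rows
# and the gallery items; field names mirror the lab, not the real CDS schema.
products = [
    {"Name": "Green Tea Rose", "Inventory Total": 5},
    {"Name": "Green Tea Cinnamon", "Inventory Total": 3},
    {"Name": "Green Tea Mint", "Inventory Total": 7},
]

# Each gallery row pairs a product name with the text typed into inventoryInput.
gallery_items = [
    {"Name": "Green Tea Rose", "inventoryInput": "9"},
    {"Name": "Green Tea Mint", "inventoryInput": "4"},
]

def update_inventory(products, gallery_items):
    """Mirror ForAll(... Patch(...)): patch each product matched by name,
    then return the confirmation message that is set into usermessage."""
    by_name = {p["Name"]: p for p in products}
    for item in gallery_items:
        by_name[item["Name"]]["Inventory Total"] = int(item["inventoryInput"])
    return "Updated " + str(len(gallery_items)) + " items"

message = update_inventory(products, gallery_items)
print(message)  # Updated 2 items
```

The Value(inventoryInput.Text) conversion in the formula corresponds to the int(...) call here: the text box holds a string, and the entity field holds a whole number.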

AI Builder Forms Processing

This post was republished to Sterlings at 11:54:43 PM 11/21/2019


 

 


 

Form processing

Form processing identifies the structure of your documents based on examples you provide to extract text from any matching form. Examples might include tax forms or invoices.

In this lab we will build and train a model for recognizing invoices. Then we will build a tablet app to show the detection in action and digitize the content.

Note: If you are building the first model in an environment, click on Explore Templates to get started.

 

Exercise 1

  1. From the left navigation, expand AI Builder and select Build. Select Form Processing.

  2. Name your model. Because you are working in a shared environment, make sure to include your name as part of the model name. This will make it easier to find later. Click Create.

  3. Your screen should look like the following image. Select Add documents.

  4. Add the documents from the Train folder. You must have at least five documents to train the model.

  5. Confirm the selection and click Upload.

  6. Once your uploads are complete, select Analyze.

  7. Select the fields.

  8. Hover over the highlighted fields and confirm the fields that should be returned by the form when processing with our trained model.

  9. Once you have confirmed the fields, click Done.

  10. Train your model.

  11. Locate and open your saved model. If you need help finding it, type your name into the search box.

  12. Review the results of the trained model.

  13. Perform a test with the test invoice.

  14. Perform a test with another image or document.

  15. Publish the model.

 

 

Exercise 2

 

  1. Navigate to Apps and create a new Canvas App. Select Blank app with a tablet layout.

  2. Insert the Form processor control from AI Builder.

  3. Map it to your saved model.

  4. Drag and resize the control like the image below.

  5. Play your app.

  6. Click Analyze and add your test file.

  7. Your uploaded form will be analyzed.

  8. You can see the mapped fields are recognized.

  9. Close the app player.

  10. Let's take some of the data fields and place them on the screen for the user to review. Add three labels to the screen. Drag them to the right side of the screen and line them up like in the image below. Edit the text to "Invoice Number", "Due Date", and "Total".

  11. Add Text input fields for each row and place them as below.

  12. Now we will map data from the analyzed document. Edit the default values for each field as follows:

    Invoice Number: FormProcessor1.FormContent.Fields.INVOICE

    Due Date: FormProcessor1.FormContent.Fields.'Due Date'

    Total: FormProcessor1.FormContent.Fields.Total

  13. Play the app and add an invoice to be analyzed.

 


Key phrase extraction with AI Builder

Key phrase extraction

The key phrase extraction model identifies the main points in a text document. For example, given input text “The food was delicious and there were wonderful staff”, the service returns the main talking points: “food” and “wonderful staff”. This model can extract a list of key phrases from unstructured text documents.

As this is a pre-built model, there is no training or configuration to tend to. We can jump right into consuming it.

We will build a Flow to consume the text we provide, then extract out our key phrases and send an email notification with an HTML formatted list of those key phrases.

You can use this output in many ways using the Common Data Service, but for our limited lab purposes we will stick to the simple email scenario.

 

Exercise 1

  1. Navigate to https://make.powerapps.com/ and make sure you have the aibignite environment selected.

  2. Expand AI Builder and select Build.

  3. Select Solutions.

  4. Select the Default Solution. In a real project you wouldn't add items directly to the default solution; however, in the interest of time for our lab, we will use it for our purposes.

  5. While viewing the Default Solution, click + New and select Flow.

  6. Search for trigger and select Manually trigger a flow.

  7. You will now add two inputs: the first for My Text and the second for My Language. This is how we will supply the text and language to be analyzed. Text is limited to a maximum of 5,120 characters and the following languages: Chinese, English, French, German, Italian, Japanese, Korean, Portuguese, and Spanish. Click Add an input.

  8. Select Text.

  9. Enter My Text for the title and click Add an input again.

  10. Select Text again.

  11. Enter My Language for the title and click + New Step.

  12. Search for predict and select Predict Common Data Service (current environment).

  13. Select the KeyPhraseExtraction model, type {"text":" in the Request Payload field, and select My Text from the Dynamic Content pane.

  14. Type ", "language":" and select My Language from the Dynamic Content pane.

  15. Add "} and click + New Step.

  16. Search for parse and select Parse JSON.

  17. Click on the Content field and select Response Payload from the Dynamic Content pane.

  18. Copy the following JSON and paste it in the Schema field.

    {
      "type": "object",
      "properties": {
        "predictionOutput": {
          "type": "object",
          "properties": {
            "results": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "phrase": {
                    "type": "string"
                  }
                },
                "required": [
                  "phrase"
                ]
              }
            }
          }
        },
        "operationStatus": {
          "type": "string"
        },
        "error": {}
      }
    }

  19. Save your flow.
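The Request Payload assembled in the steps above, and the Parse JSON step that consumes the Response Payload, can be sketched in Python. The response below is a hypothetical example that matches the schema from the lab; the actual service output will differ.

```python
import json

# Build the Request Payload exactly as the flow concatenates it:
# {"text":"<My Text>", "language":"<My Language>"}
my_text = "The food was delicious and there were wonderful staff"
my_language = "en"
payload = json.dumps({"text": my_text, "language": my_language})

# A hypothetical Response Payload shaped like the Parse JSON schema above.
response_payload = json.dumps({
    "predictionOutput": {
        "results": [{"phrase": "food"}, {"phrase": "wonderful staff"}]
    },
    "operationStatus": "Success",
    "error": {},
})

# Pull out the phrases the way the later flow steps consume `results`.
parsed = json.loads(response_payload)
phrases = [r["phrase"] for r in parsed["predictionOutput"]["results"]]
print(phrases)  # ['food', 'wonderful staff']
```

One caution the sketch makes visible: concatenating raw text into the payload (as the flow does) breaks if My Text itself contains a double quote, whereas json.dumps escapes it automatically.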

 

Exercise 2

That’s all we need to build to use the model. Let’s now take the information produced and send it in an email notification.

  1. Click + New Step.

  2. Search for create and select Create HTML table.

  3. Click on the From field, select results from the Dynamic Content pane, and click Show advanced options.

  4. Select Automatic and click + New Step.

  5. Search for send an email and select Send an email (V2).

  6. Enter the email of your lab user for To.

  7. Enter Key phrase for Subject.

  8. Click on the Body field and select My Text from the Dynamic Content pane.

  9. Hit the [ENTER] key twice and select My Language from the Dynamic Content pane.

  10. Hit the [ENTER] key twice and select Output from the Dynamic Content pane.

  11. Click Save.

  12. Click Test.

  13. Select I'll perform the trigger action and click Save & Test.

  14. Click Continue.

  15. Enter the following text and click Run flow.
    Text: More than 2 hours after my arrival with a pain scale of 10, i was never examined. i explained to the e.r. nurse and was told all I have to do is get up and leave if i can't wait. so i did. very unprofessional and inhumane.
    Language: en

  16. Click Done.

  17. Confirm the successful flow run.

  18. Navigate to https://outlook.office365.com

  19. Check your email for the results. You should see our email subject (1), the input (2), and the phrases that were extracted and formatted into our HTML table (3).

  20. Try more phrases. Make your own or try our examples:
    1. What can I say, I got into the hospital super sick and after a great care experience I am now fully recovered. I want to highlight the great human care provided by the doctors and nurses, they made me feel not like any other patient but like a unique human being.
    2. More than 2 hours after my arrival with a pain scale of 10, i was never examined. i explained to the e.r. nurse and was told all i have to do is get up and leave if i can't wait. so i did. very unprofessional and inhumane.
    3. I went to this hospital today because I was suffering from a fever. I arrived at 9:30 am and I left at 9:30 pm. During that time, they gave me medication that you're supposed to take with food. In about 20 minutes, my stomach hurt. I asked three people for food, one being my doctor, to no avail.
    4. Excellent care from Maternity staff – including the consultant (and team), the surgical staff who delivered both our sons via c-section, and all nursing/support staff who helped with our stay in hospital.
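The Create HTML table step in Exercise 2 (with Automatic columns) can be approximated in Python to show what lands in the email body: one column per key in the results array and one row per extracted phrase. The sample phrases are illustrative.

```python
import html

# Phrases as returned by the key phrase model (illustrative values).
results = [{"phrase": "food"}, {"phrase": "wonderful staff"}]

def to_html_table(rows):
    """Approximate the 'Create HTML table' action with Automatic columns:
    one header cell per key, one row per item."""
    headers = list(rows[0].keys())
    head = "".join("<th>" + html.escape(h) + "</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join("<td>" + html.escape(str(r[h])) + "</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return "<table><thead><tr>" + head + "</tr></thead><tbody>" + body + "</tbody></table>"

table = to_html_table(results)
print(table)
```

With Automatic columns the action infers the header ("phrase") from the array's keys, which is why the email shows a single-column table of extracted phrases.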