AI Builder Object Detection Lab for Power Platform World Tour

Object detection

 

Object detection can be used to expedite or automate business processes in multiple industries. In the retail industry, it can expedite inventory management, allowing retail leaders to focus on on-site customer relationship building. In the manufacturing industry, technicians can use it to speed up repairs by quickly pulling up the manual for a piece of machinery whose UPC or serial number isn't readily visible.

AI Builder object detection will allow companies of any size to add these capabilities for their own custom objects to their apps.

Object detection lets you count, locate, and identify selected objects within any image. You can use this model in PowerApps to extract information from pictures that you take with the camera or that you load into an app.

In this lab, we will build and train an object detection model, then build an app that uses that model to identify objects in images.

Note: If you are building the first model in an environment, click on Explore Templates to get started.

 

Setup

Object detection maps objects to a Common Data Service entity. To get started, we need to create this entity.

Step 1. Log in to Power Apps.

Step 2. Navigate to Data, then select Entities and New Entity.

 

Step 3. Create a new Entity.

Step 4. Add a field for the inventory total named aib_inventorytotal, with type Whole Number.

 

Step 5. Navigate to Data to add our products.

Step 6. Add our three products:

Green Tea Rose

Green Tea Cinnamon

Green Tea Mint

 

 

Step 7. Verify the data was entered into the entity

 

Exercise 1

In this exercise we will build and train the Object Detection model for three varieties of tea.

  1. In the PowerApps maker portal, expand AI Builder and select Build. Select Object Detection.

 

  2. Name your model Green Tea Product Detection followed by your name, and click Create.

  3. Your screen should now look like the image here.

  4. Notice the progress indicator on the left. Those are the steps we will follow now to build and train our model.

  5. We are now going to define the objects we are tracking. Click Select object names.

 

  6. From the entity list, select Object Detection Product.

 

  7. Select the Name field and click Select field.

  8. Select the tea items and click Next.

 

  9. Notice the progress indicator has moved forward to the Add images step.

  10. Click Add images.

     

    Images can be found here

  11. Select images from the set provided. You will need enough images to provide 15 samples for each type of tea we are tracking.

  12. Approve the upload of images. Click Upload images. After the upload completes, click Close.

 

  13. Click Next to begin tagging the images.

  14. Select the first image to begin tagging.

  15. Hover over the image, near an item you wish to tag. A dotted-line box should appear around the item; it has been detected as a single item that can be tagged.

  16. Click on the item and select the matching object name.

  17. If the pre-drawn selector is not accurate, as in the example below, you can drag to redraw the container so it tags the item accurately.

  18. Do this for each item in the image and for each image in your set. When you have tagged all of the images you uploaded, click Done Tagging in the top right of the screen.

  19. Once you have completed tagging, you will get a summary of the tags. If you haven't tagged enough for analysis, you will need to load and tag more examples.

 

  20. Once you have defined enough tags for training the model, you will be allowed to initiate the training. Click Next.

  21. Click Train.

  22. The training takes a few moments.

  23. Navigate to the saved model view and confirm your model has completed training.

  24. Select the model you just made.

  25. Select Quick test.

 

  26. Upload or drag and drop one of your test images to be analyzed.

 

  27. You will see the analysis and level of confidence for the match.

 

  28. Upload an image you know will not match. You will see the analysis and level of confidence for the match.

  29. Click Close.

  30. Publish your model.

 

Exercise 2

We will now create a canvas app you can use for detecting the items our model has been trained on. The product will be detected from the image, and you will be able to adjust on-hand inventory for the item.

  1. Navigate to Apps, select Create an app, then select Canvas. If asked, grant permission for the app to use your active CDS credentials.

  2. Select Blank app with Phone layout.

  3. On the maker canvas, select the Insert tab in the ribbon and expand AI Builder. Select Object detector to place this control on your app.

  4. Select the AI model you built.

  5. Resize the control to better use the space.

  6. Make sure to leave room for more items we will be placing soon.

  7. Play your app.

  8. Click on Detect.

  9. Choose one of your test images and click Open.

  10. The image will now be analyzed.

  11. Our model has detected each tea in the image.

  12. Exit the app player.

 

Bonus exercise: build out the data in your canvas app

 

  1. We will now select our data source. Select View from the ribbon and select Data Sources.

  2. Click + Add Data Source.

  3. Add the Common Data Service data source. Do not use Common Data Service (current environment).

  4. Select the Object Detection Products entity and click Connect.

  5. Close the Data pane.

  6. With Screen1 selected in the Tree view, navigate to the Insert ribbon tab, expand Gallery, and select Blank vertical gallery.

  7. Rename the gallery productGallery. You are renaming the gallery so you can reference it from your formulas.

  8. Resize and move the gallery control to fit the available space on the screen, leaving some space at the bottom for later use.

  9. Select the edit icon on the gallery.

  10. Add a label to the gallery.

  11. Click edit again and add a Text input box to the gallery. Resize and place it to line up with the label we've already placed. We will be updating inventory counts in this text box.

  12. Rename the Text input inventoryInput. You are renaming this control so you can reference it from your formulas.

  13. With focus on Screen1 in the Tree view, click Insert in the ribbon and select Button.

  14. Drag the button to the bottom of the screen and double-click it to edit the text. Change the text to Update.

  15. We will now add a label that gives the user confirmation that their submission was accepted; we will define this logic later. With focus on Screen1, insert a label and drag it to the bottom of the screen.

 

  16. We will now add logic to the controls we've placed on the screen. Select the gallery and replace the Items formula with the following:

    'Object Detection Products'

  17. Select the label in your gallery. Replace the Text formula with the following:

    ThisItem.Name

  18. Select inventoryInput and replace the formula for Default with the following:

    LookUp('Object Detection Products', Name = ThisItem.Name).'Inventory Total'

  19. Select the other label (the one that shows at the bottom of the screen) and replace its Text with the following:

    usermessage

  20. You'll notice that area now looks blank. We will configure that message in our next step.

  21. Select the button control and replace its OnSelect with the following (a commented walkthrough of this formula appears after these steps):

    ForAll(productGallery.AllItems, Patch('Object Detection Products', LookUp('Object Detection Products', Name = DisplayName), {'Inventory Total': Value(inventoryInput.Text)})); Set(usermessage, "Updated " & CountRows(productGallery.AllItems) & " items")

  22. Play the app again.

  23. Click Detect.

  24. Select an image to evaluate.

  25. Update the quantity for the correct product and click Update.

  26. The bottom of the screen should now show the confirmation message.
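
For reference, here is the Update button's OnSelect logic from step 21 again, reformatted with comments so each part of the pattern is easier to follow. This is the same formula, not new behavior; productGallery, inventoryInput, usermessage, and 'Object Detection Products' are the names created in this lab.

    // For every row currently shown in the gallery...
    ForAll(
        productGallery.AllItems,
        // ...find the matching product record and overwrite its
        // Inventory Total with the value typed into that row's text input.
        Patch(
            'Object Detection Products',
            LookUp('Object Detection Products', Name = DisplayName),
            { 'Inventory Total': Value(inventoryInput.Text) }
        )
    );
    // Then store a confirmation message for the label at the bottom.
    Set(usermessage, "Updated " & CountRows(productGallery.AllItems) & " items")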

AI Builder Forms Processing


Form processing

Form processing identifies the structure of your documents based on examples you provide to extract text from any matching form. Examples might include tax forms or invoices.

In this lab we will build and train a model for recognizing invoices. Then we will build a tablet app to show the detection in action and digitize the content.

Note: If you are building the first model in an environment, click on Explore Templates to get started.

 

Exercise 1

  1. From the left navigation, expand AI Builder and select Build. Select Form Processing.

  2. Name your model. Because you are working in a shared environment, make sure to include your name as part of the model name. This will make it easier to find later. Click Create.

  3. Your screen should look like the following image. Select Add documents.

 

  4. Add the documents from the Train folder. You must have at least five documents to train the model.

  5. Confirm the selection and click Upload.

  6. Once your uploads are complete, select Analyze.

 

  7. Select the fields.

  8. Hover over the highlighted fields and confirm the fields that should be returned by the trained model when processing a form.

  9. Once you have confirmed the fields, click Done.

  10. Train your model.

  11. Locate and open your saved model. If you need help finding it, type your name into the search box.

  12. Review the results of the trained model.

  13. Perform a test with the test invoice.

  14. Perform a test with another image or document.

  15. Publish the model.

 

 

Exercise 2

 

  1. Navigate to Apps and create a new Canvas app. Select Blank app with a tablet layout.

  2. Insert the Form processor control from AI Builder.

  3. Map it to your saved model.

  4. Drag and resize the control like the image below.

  5. Play your app.

  6. Click Analyze and add your test file.

  7. Your uploaded form will be analyzed.

 

  8. You can see the mapped fields are recognized.

  9. Close the app player.

  10. Let's take some of the data fields and place them on the screen for the user to review. Add three labels to the screen. Drag them to the right side of the screen and line them up like in the image below. Edit the text to "Invoice Number", "Due Date", and "Total".

  11. Add Text input fields for each row and place them as below.

  12. Now we will map data from the analyzed document. Edit the default values for each field as follows:

     

    Invoice Number:  FormProcessor1.FormContent.Fields.INVOICE

    Due Date:        FormProcessor1.FormContent.Fields.'Due Date'

    Total:           FormProcessor1.FormContent.Fields.Total
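
Note: FormContent.Fields exposes each field you confirmed during training by its name, and names containing spaces must be wrapped in single quotes, as in 'Due Date' above. The exact field names (such as INVOICE here) depend on what your model detected in the sample invoices, so adjust these expressions to match your own model.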

 

  13. Play the app and add an invoice to be analyzed.

 


Key phrase extraction with AI Builder

Key phrase extraction

The key phrase extraction model identifies the main points in a text document. For example, given input text “The food was delicious and there were wonderful staff”, the service returns the main talking points: “food” and “wonderful staff”. This model can extract a list of key phrases from unstructured text documents.

As this is a pre-built model, there is no training or configuration to tend to. We can jump right into consuming it.

We will build a Flow that consumes the text we provide, extracts the key phrases, and sends an email notification with an HTML-formatted list of those phrases.

You can use this output in many ways using the Common Data Service, but for our limited lab purposes we will stick to the simple email scenario.

 

Exercise 1

  1. Navigate to https://make.powerapps.com/ and make sure you have the aibignite environment selected.

  2. Expand AI Builder and select Build.

  3. Select Solutions.

  4. Select the Default Solution. In a real project you wouldn't add items directly to the default solution; however, in the interest of time for our lab, we will use it for our purposes.

  5. While viewing the Default Solution, click + New and select Flow.

  6. Search for trigger and select Manually trigger a flow.

  7. You will now add two inputs: the first for My Text and the second for My Language. This is how we will supply the text and language to be analyzed. Text is limited to a maximum of 5,120 characters and the following languages: Chinese, English, French, German, Italian, Japanese, Korean, Portuguese, and Spanish. Click Add an input.

  8. Select Text.

  9. Enter My Text for the title and click Add an input again.

  10. Select Text again.

  11. Enter My Language for the title and click + New Step.

  12. Search for predict and select Predict Common Data Service (current environment).

  13. Select the KeyPhraseExtraction model, type {"text":" in the Request Payload field, and select My Text from the Dynamic Content pane.

  14. Type ", "language":" and select My Language from the Dynamic Content pane.

  15. Add "} and click + New Step.
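
    For reference, the assembled Request Payload should read as follows, with the My Text and My Language dynamic content tokens standing in for the placeholder values (this is the same payload format shown in the sentiment walkthrough later in this document):

    {"text":"My Text", "language":"My Language"}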

  16. Search for parse and select Parse JSON.

  17. Click on the Content field and select Response Payload from the Dynamic Content pane.

  18. Copy the following JSON and paste it into the Schema field.

    {
      "type": "object",
      "properties": {
        "predictionOutput": {
          "type": "object",
          "properties": {
            "results": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "phrase": {
                    "type": "string"
                  }
                },
                "required": [
                  "phrase"
                ]
              }
            }
          }
        },
        "operationStatus": {
          "type": "string"
        },
        "error": {}
      }
    }
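
    For orientation, a Response Payload matching this schema would have roughly the following shape (the phrase values here are illustrative, echoing the example from the model description above):

    {
      "predictionOutput": {
        "results": [
          { "phrase": "food" },
          { "phrase": "wonderful staff" }
        ]
      },
      "operationStatus": "Success"
    }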

  19. Save your flow.

 

Exercise 2

That’s all we need to build to use the model. Let’s now take the information produced and send it in an email notification.

  1. Click + New Step.

  2. Search for create and select Create HTML table.

  3. Click on the From field, select results from the Dynamic Content pane, and click Show Advanced Options.

  4. Select Automatic and click + New Step.

  5. Search for send an email and select Send an Email (V2).

  6. Enter the email of your lab user for To.

  7. Enter Key phrase for Subject.

  8. Click on the Body field and select My Text from the Dynamic Content pane.

  9. Press Enter twice and select My Language from the Dynamic Content pane.

  10. Press Enter twice and select Output from the Dynamic Content pane.

  11. Click Save.

  12. Click Test.

  13. Select I'll perform the trigger action and click Save & Test.

  14. Click Continue.

  15. Enter the following text and click Run flow.
    Text: More than 2 hours after my arrival with a pain scale of 10, i was never examined. i explained to the e.r. nurse and was told all I have to do is get up and leave if i can't wait. so i did. very unprofessional and inhumane.
    Language: en

  16. Click Done.

  17. Confirm the successful flow run.

  18. Navigate to https://outlook.office365.com

  19. Check your email for the results. You should see our email subject (1), the input (2), and the phrases that were extracted and formatted into our HTML table (3).

  20. Try more phrases. Make your own or try our examples:
    1. What can I say, I got into the hospital super sick and after a great care experience I am now fully recovered. I want to highlight the great human care provided by the doctors and nurses, they made me feel not like any other patient but like a unique human being.
    2. More than 2 hours after my arrival with a pain scale of 10, i was never examined. i explained to the e.r. nurse and was told all i have to do is get up and leave if i can't wait. so i did. very unprofessional and inhumane.
    3. I went to this hospital today because I was suffering from a fever. I arrived at 9:30 am and I left 9:30 pm. During that time, they gave me medication that you're supposed to take with food. In about 20 minutes, my stomach hurts. I asked three people for food, one being my doctor, to no avail.
    4. Excellent care from Maternity staff – including consultant (and team), surgical staff who delivered both our sons via c-section and all nursing/support staff who helped with our stay in hospital


Creating a Power Virtual Agent Bot

If you haven’t seen the news, the Power Platform has a new member of the family: Power Virtual Agents!

Power Virtual Agents provides exceptional support to customers and employees with AI-driven virtual agents, and lets you easily create and maintain bots with a no-code interface.

If you aren’t familiar with bots, a bot is a computer program that conducts a text conversation with your customers to direct them to what they need quickly without requiring your human agents to intervene. Bots are a great way to answer simple, repetitive questions from your customers and to help them do repeatable tasks like find out how to return or exchange an item, join your rewards program, or cancel an order (which you’ll learn how to do in this training). Bots save your agents time (and your company money) by freeing agents to focus on more complex problem-solving and handle more valuable customer interactions.

This blog post is a very introductory walk through on how to get started creating your own bot.

Step 1: Navigate to https://powervirtualagents.microsoft.com/en-us/ and select “Try Preview”

Step 2. Log in to the tenant you want to create your bot in.

This presumes you already have an environment created in your Office tenant. If you don't already have a tenant or environment, directions can be found in the App in Two Hours Power Apps training: https://aka.ms/powerappstraining

Step 3. Create a new bot.

Since there is no bot in your tenant yet, you will automatically be prompted to create one.

Step 4. Set up the bot options

If you don’t want to create your bot in the default environment, you can set this under “More Options”

Step 5. That's it! Your bot is built, and it is time to test it!

To do this, turn on tracing.

Step 6. Add some text that falls into your greeting phrase.

Step 7. Watch the processing and workflow

Step 8. Customize your bot

Now that you have seen your bot in action, let’s customize it!

The easiest place to start is changing the greeting. If you toggle tracing back off, you will see this option.

Step 9. Change your greeting

Step 10. Verify your Updates!

Simply rerun your bot and greet it; the response should now come back with your new greeting!

Congratulations, you have just created your own bot with no code and simple configuration changes.

In the next post we will show you how to deploy it!

For more information check out the forum at https://aka.ms/virtualagentforum

Using the Sentiment Analysis Action from PowerApps

 

AI Builder has some amazing features. This walkthrough will get you started using its sentiment analysis from PowerApps.

  1. Log in to PowerApps


  2. Navigate to Solutions

     


 

  3. Create a new Solution

  4. Open your Solution

  5. Add a new Flow

  6. Set the trigger to PowerApps. Note: I also named the flow at this step: "Sentiment from PowerApps"

  7. Add the "Predict" action to the Flow.

    Note: if you don't see the "Predict" action, you are likely not in a solution. Working inside a solution is required.

  8. Set the model to "SentimentAnalysis Model"

    Note: the list also shows the other AI Builder models I had created that are available to the Flow.

  9. Insert the following text into the Request Payload:

    {"text":"My Text", "language":"My Language"}

     

    This is from https://docs.microsoft.com/en-us/ai-builder/flow-sentiment-analysis (watch those evil smart quotes!)

     

 

 

  10. Replace the "My Text" argument with Ask in PowerApps by clicking the "Ask in PowerApps" shape at the bottom of the action

  11. Replace the "My Language" argument with Ask in PowerApps by clicking the "Ask in PowerApps" shape at the bottom of the action

 

  12. Add a new step and add the Parse JSON action

  13. Specify the Content as the Response Payload. Specify the Schema as the JSON below.

JSON for the Schema:

 

 

    {
      "type": "object",
      "properties": {
        "predictionOutput": {
          "type": "object",
          "properties": {
            "result": {
              "type": "object",
              "properties": {
                "sentiment": {
                  "type": "string",
                  "title": "documentSentiment"
                },
                "documentScores": {
                  "type": "object",
                  "properties": {
                    "positive": { "type": "number" },
                    "neutral": { "type": "number" },
                    "negative": { "type": "number" }
                  }
                },
                "sentences": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "sentiment": { "type": "string" },
                      "sentenceScores": {
                        "type": "object",
                        "properties": {
                          "positive": { "type": "number" },
                          "neutral": { "type": "number" },
                          "negative": { "type": "number" }
                        }
                      },
                      "offset": { "type": "integer" },
                      "length": { "type": "integer" }
                    },
                    "required": [
                      "sentiment",
                      "sentenceScores",
                      "offset",
                      "length"
                    ]
                  }
                }
              }
            }
          }
        },
        "operationStatus": {
          "type": "string"
        },
        "error": {}
      }
    }
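
For orientation, a Response Payload matching this schema would have roughly the following shape (values are illustrative only):

    {
      "predictionOutput": {
        "result": {
          "sentiment": "negative",
          "documentScores": { "positive": 0.01, "neutral": 0.04, "negative": 0.95 },
          "sentences": [
            {
              "sentiment": "negative",
              "sentenceScores": { "positive": 0.01, "neutral": 0.04, "negative": 0.95 },
              "offset": 0,
              "length": 42
            }
          ]
        }
      },
      "operationStatus": "Success"
    }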

  14. Add a PowerApps Response action to the Flow

  15. Set the PowerApps Response action to return a text output whose value is the documentSentiment token from the Parse JSON action.
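
    If you prefer the expression editor over the dynamic content pane, the documentSentiment token corresponds to an expression along these lines (a sketch, assuming the Parse JSON action kept its default name):

    body('Parse_JSON')?['predictionOutput']?['result']?['sentiment']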

  16. Save the flow
  17. Go back to Solutions

  18. Add a new Canvas app. (This one uses a phone form factor, but the form factor isn't really important.)

  19. Add a Text input control and a Button control

 

  20. Add a label control and set its Text equal to mysentiment.sentiment

    PowerApps will complain about this; ignore it for now.

  21. Select the button you added above, open the "Actions" menu, select Flows, and then select the Flow you created above.

NOTE: This is currently not working, as Flows cannot be referenced from a PowerApp inside a Solution. It is working in our staging environment (where the screenshots were taken) and should be working soon!
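
For reference, once the flow is attached, the button's OnSelect typically ends up along these lines (a sketch, assuming the flow shows up as SentimentfromPowerApps, your text input is named TextInput1, and the flow's response output is named sentiment):

    // Run the flow with the entered text and a language code, and
    // store the response record in the mysentiment variable.
    Set(mysentiment, SentimentfromPowerApps.Run(TextInput1.Text, "en"))

The Set call is what makes the label's mysentiment.sentiment reference resolve once the flow has run.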

  22. And here it is running!

Note: the AI action returns many sentiment details, such as the scores for each sentiment type.