Natural Language Tutorial

Introduction

This tutorial shows the developer how to add to the core VANTIQ natural language processing capabilities. Natural language processing enables customers to interact with VANTIQ applications using language with which they are familiar, rather than having to learn the language of the application. The VANTIQ system provides a set of capabilities for interacting with the VANTIQ system itself, and is extensible so that application developers can add language appropriate to their application and/or user base.

In this tutorial, we will learn how to add natural language processing to a VANTIQ application, and how to extend the base system to include language specific to the application. Our approach in this tutorial is to move forward by incrementally building a working system. We will first make the VANTIQ-supplied natural language capability operational, then add the extensions specific to this application.

Throughout this tutorial, we demonstrate the working system by interacting with VANTIQ using Slack. Any client that can interoperate with a VANTIQ chatbot source is fine; here, we have chosen to use Slack.

The application in this tutorial is a simple hospital application. This application will allow hospital staff to query the VANTIQ system for patients based on condition and/or location. This tutorial imagines a hospital application where patients are admitted and their various test results cataloged. The natural language system is added to allow queries based on the patients’ conditions and their current location (room) in the hospital.

Thus, rather than use query-like language to find this information (e.g. list patients where glucose > 100), the application user will be able to use a more people-literate (medical) form (e.g. who’s in ER with diabetes).

All lessons assume the developer has a working knowledge of VANTIQ Modelo. It is recommended that a new developer completes the lessons in the Introductory Tutorial before starting the lessons in this tutorial. In addition, please see the Natural Language Guide for more information about the VANTIQ natural language processing system.

Overview

The overall structure of the VANTIQ natural language system is depicted in the following diagram.

VANTIQ Architecture Diagram

Architecturally, our LUIS application is responsible for interpreting these utterances, analyzing them, and returning the appropriate intent and associated entities. The VANTIQ application is responsible for understanding these intents and entities, and executing them to provide the user with the appropriate response.

In this tutorial, we will make use of the chatbot, a compatible chat client, and the LUIS application; we will set up each of these in the sections that follow.

We will start with the LUIS application.

Set Up the LUIS Application

In the subsequent sections, we will walk through the steps to create and modify our LUIS application. Please see Learn about Language Understanding Intelligent Service (LUIS) for a more detailed explanation and walkthrough.

Create A Cognitive Services Account

LUIS applications run in Microsoft’s Azure cloud, and are part of a wider offering known as Cognitive Services. Information about setting up accounts and creating a service can be found at the LUIS home site. Before proceeding, we will need to have an account set up for our LUIS application.

Create the account now.

Import the VANTIQ Natural Language Subset

Once the account is set up, we will be able to log in and find the My Apps page.

My Apps Before Import

(In this case, this is a new account. If we have already imported or created other applications, some applications may be listed here.)

The next step is to import the VANTIQ natural language subset (for details, see the Natural Language Guide). The VANTIQ natural language subset contains various intents and entities that allow for simple queries against the VANTIQ system. These include intents such as system.list and system.showActive, which support commands like list people whose name is fred or show active collaborations since the eighth of september.

A zip file containing the VANTIQ NLS can be found here. Please download this file, keeping track of its location.

Once downloaded and unzipped, return to the My Apps page depicted above. Press the Import App button, which will bring up an import dialog.

Import Dialog

Press the Choose File button, and select the downloaded LUISVantiqBase.json file. Set the name to “NatLangTutorial” (it will default to VANTIQ Base), and press Import. This should present us with a new application on our My Apps list.

My Apps After Import

Here, we see the new LUIS application NatLangTutorial listed.

Now, please click on the LUIS Application Name (in our example, NatLangTutorial). We will now be looking at the dashboard for our LUIS Application.

LUIS Application Dashboard

Train and Publish

Our next step is to make the core VANTIQ capabilities available without alteration. To do this, we need to train and publish our LUIS application.

So far, we have imported the natural language subset that can understand some commands about the VANTIQ system. However, to make this available, we must train the application and publish it.

Training Your LUIS Application

Training is a step where the definition of the natural language subset is processed by the LUIS system, and the resulting training data made ready to publish. The natural language subset contains a set of entities, intents, and utterances. (These are outlined in the LUIS documentation and in the VANTIQ Natural Language Guide.) The training process maps the utterances to intents and entities so that similar utterances presented by LUIS application users can be transformed into the appropriate intents and entities.

To train the LUIS application, we select the Train & Test menu from the LUIS Application Menu.

LUIS Application Menu

This will place us on the Training Page.

LUIS Application Training

Press the Train Application button. When this is done, we will see informational messages near the top of the browser window that give our status. Once we see that training is complete, we can test our application.

LUIS Application Testing

We do this by typing various phrases that our application should understand into the testing area (seen in the previous image with the phrase Type a test utterance & press Enter). Once we provide an utterance, we will see the result parsed below. We verify that it is correct (that the correct intent was selected, that the correct entities were identified, etc.).

For example, if we test our LUIS application with list people whose name is fred, we will see a result like the following.

LUIS Test People

In this image, we can see the Top Scoring Intent on the right (system.list with a score of 1, meaning LUIS is very sure this is the correct intent), and we see the utterance we tested broken down into the different entities of interest.

Similarly, if we test our LUIS application with show active collaborations since the eighth of september, we see the following.

LUIS Test Collaborations

This has similar results – a top scoring intent (system.showActive) with a score of 0.98, and the entities relevant to our statement.
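For reference, once the application is published, the same interpretation is returned by the LUIS endpoint as a JSON document. As a rough, illustrative sketch only (the field names follow the LUIS v2.0 response format as we understand it, and the entity details are elided; consult the LUIS documentation for the authoritative format), the response to our first test might look like this:

{
    "query": "list people whose name is fred",
    "topScoringIntent": { "intent": "system.list", "score": 1.0 },
    "entities": [ ... ]
}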

(Note: Once you begin developing your extensions, you will spend a good deal of time on this page. This is where you “debug” your LUIS application. If and when you get incorrect results, you will refine your language by adding utterances, and return to this page. We will see more of this in a subsequent section.)

Publishing Your LUIS Application

Once we are satisfied with the training, we will publish the application. The act of publishing the application makes it available for use outside the LUIS Console with which we have been working.

To publish the application, we select Publish App from the LUIS Application Menu. This will present us with the Publish App page.

Publish App Before Publication

For testing purposes, we select Staging for the Endpoint Slot, and press Publish.

Publish App After Publication

Here, we must take special note of the Endpoint url (parts of which are obscured in the image above). Specifically, we take note of the base URL (https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/665...) and the subscription key (the part after subscription-key= – in this case, the 0639... up to, but not including, the next ampersand). We will need these to create the source in VANTIQ.

At this point, we should have a functioning and available LUIS application. Now, we will connect this to VANTIQ.

Create a VANTIQ Application

Our next step is to connect the VANTIQ system to the LUIS application we have just created.

Create a LUIS Source

(For more detailed information about source creation, please see the Source Tutorial and/or the Remote Source Guide.)

To create a source in an existing Modelo project for communication with our LUIS application, use the Add button to select Source…, and, from there, use New Source.

LUIS Source in VANTIQ

We provide the Server URI saved from the publish step: the URI up to but not including the ampersand (&). It includes the LUIS application ID, and will look something like https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/665..., where the 665... is the application ID.

The query parameters and headers are provided as JSON documents, so we provide those as shown above. If the LUIS application was published in the Staging Endpoint slot, set the staging parameter to true in the Query Parameters section of the Request Default Properties section.

LUIS on Azure uses the Ocp-Apim-Subscription-Key header to provide the subscription key, so we set that under the Headers section. The subscription key value was saved during the publish step.
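For reference, a minimal sketch of these two JSON documents follows; the values shown are placeholders (substitute the subscription key saved from the publish step).

Query Parameters:

{ "staging": true }

Headers:

{ "Ocp-Apim-Subscription-Key": "0639..." }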

The remaining fields can be left to their defaults. As required, they will be provided at runtime by the VANTIQ system.

Save the LUIS Source. The source is now ready to use with our published LUIS application.

Note that the VANTIQ system interacts with the published LUIS application using the POST method (as opposed to GET).

Create a Chatbot

To interact with our VANTIQ application, we will need a chatbot defined. The details regarding how to create a chatbot can be found in the Chatbot Guide. If a chatbot already exists in the VANTIQ namespace, we do not require another.

Once created, we will want to connect our chatbot to a channel. Information regarding how to do that is available at https://dev.botframework.com. As noted above, examples in this document use Slack.

It is worth noting here that VANTIQ’s direct interaction with a chatroom (via the VANTIQ Mobile App) requires the chatbot to have a Direct Line Secret Key. When you set up the chatbot, make sure that you add that channel to your chatbot. Information about adding the Direct Line channel to the bot can be found in the Bot Framework documentation.

For the purposes of this example, the chatbot source we create in VANTIQ is named theChatbot. You are free to name yours whatever you desire, but be aware of any usages and substitute accordingly.

Create a Service to Process Utterances

At this point, we have the following parts: a published LUIS application, a VANTIQ source (the LUIS source) through which to reach it, and a chatbot connected to our chat channel.

Referring to the overall structure diagram provided above and described in the Natural Language Guide, the basic flow is as follows: an utterance arrives from the chat client via the chatbot, the VANTIQ application sends it to the LUIS application (via the LUIS source) for interpretation, and the intent and entities returned are then executed by the VANTIQ application, which publishes the response back to the user through the chatbot.

In this step, we will create a simple procedure that will be used by our rule.

By convention, we are collecting the procedures used in this tutorial in a service named NatLangTutorial. You are, of course, free to use any service name you wish; be aware of the name substitution as required.

Our simple procedure is called processUtterance, and is shown below.

PROCEDURE NatLangTutorial.processUtterance(utterance String, languageService String)

// Let's figure out if we can translate these into actions...
var response = "Default Response Value"

try {
    var interpretation = NaturalLanguageCore.interpretNLQuery(utterance, languageService)

    log.debug("ProcessUtterance() interpretation: {}", [interpretation.stringify(true)])

    if (interpretation.errorMsg != null) {
        // Then, we had some error.  Let's just dump that as the response and move on
        log.debug("ProcessUtterance(): Found Error: {} -- {}", [interpretation.errorMsg, languageService])
        response = interpretation.errorMsg
    } else if (interpretation.response.intent.startsWith("system.")) {
        log.debug("ProcessUtterance():  Attempting interpretation of intent: {}", [interpretation.response.intent])
        var interpretedString = NaturalLanguageCore.executeSystemIntent(interpretation.response)
        response = interpretedString.response
    } else {
       response = "I don't know how to execute intent: " + interpretation.response.intent
    }
}
catch (error) {
    log.error("ProcessUtterance(): Error Encountered: " + stringify(error))
    response = error.message
}

return response

The utterance and the name of the LUIS source are passed to this procedure. The procedure makes use of some of the procedures described in the Natural Language Guide, and operates as follows: it interprets the utterance using NaturalLanguageCore.interpretNLQuery(), returns any error message found in the interpretation, and, for intents whose names start with system., executes them via NaturalLanguageCore.executeSystemIntent(), using the result as the response.

The else clause returns a simple “I don’t know how…” message if given a non-system intent. We will address this below.
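If desired, the procedure can be exercised directly from another procedure before it is wired to the chatbot. The following hypothetical snippet assumes the LUIS source created earlier is named NatLangTutorialLUISSource (the name used in the rule that follows):

// Hypothetical direct invocation of the utterance processor, for testing
var reply = NatLangTutorial.processUtterance("list people whose name is fred", "NatLangTutorialLUISSource")
log.info("processUtterance reply: {}", [reply])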

Create Rule to Process Natural Language Requests

With the procedure defined, create a simple rule to link the receipt of the utterance with that procedure, completing the flow of information from the utterance to its execution. Once this is done, we will have a working VANTIQ application that makes use of our LUIS application.

To create a rule in an existing Modelo project, use the Add button to select Rule…, and, from there, use New Rule. Create a rule named ConverseViaRules, with the contents shown below. Our rule makes use of a few of the natural language procedures defined in the VANTIQ Resources section of the Natural Language Guide.

RULE ConverseViaRules
WHEN MESSAGE ARRIVES FROM theChatbot AS message where message.channelId != "directline"
// We'll let the chatroom do its own thing -- that's why we're ignoring 'directline' stuff here.

log.debug("ConverseViaRules: message in: {}", [message.stringify(true)])

// Remove formatting or encoding that may have come from the chat channel

var preppedText = NaturalLanguageUtils.prepareText(message.channelId, true, message.text)

// Now, let's go perform the work

var response = NatLangTutorial.processUtterance(preppedText, "NatLangTutorialLUISSource")

// the following "prepare" is not necessary as publishReponse handles it for you
// message.text = NaturalLanguageUtils.prepareText(message.channelId, false, response)

message.text = response

log.debug("ConverseViaRules: message out: {}", [message.stringify(true)])

NaturalLanguageCore.publishResponse(message, message.text, "theChatbot")

This rule operates as follows: when a message arrives from theChatbot (ignoring directline traffic, which the chatroom handles itself), the rule prepares the message text for processing, passes it to NatLangTutorial.processUtterance() along with the name of the LUIS source, and publishes the returned response back to the user via NaturalLanguageCore.publishResponse().

Save the rule, and our VANTIQ Application is operable.

Verify the Overall Flow

At this point, before extending anything, we can verify that our application works as expected. As outlined, the VANTIQ natural language set includes a variety of commands. We can use those to test our overall system. The following image shows examples from that set, including hi, please describe projects, and show active collaborations since the first of september. In these examples, the user utters something (types in an utterance), and the system responds with the results (i.e., it interprets the utterance, executes the intent using the entities derived from the utterance, and returns the result).

slack Application with VANTIQ Utterances

In the image above, the lines labeled …_local_bot are the responses from VANTIQ to the Slack user.

Queries that let medical personnel find patients by their conditions are already available via the VANTIQ intent set. (Note that in our tutorial, these are not runnable yet, as we have not yet added the Patients type and instances. If you wish to run these now, please see the Add Sample Data section.)

slack List Patients

and, with some specifics

slack query Patients

However, this requires our medical personnel to think like application people. That is, it requires (VANTIQ) application literacy.

Our goal is to add some people-literate capabilities to our application. In this case, we want the application to present itself in terms that medical people will understand.

Extend the LUIS Application

Once we establish the complete flow, we are ready to enhance our natural language capabilities to accept more hospital-friendly language. We will start with the LUIS application.

(Note that we are not representing that this is the best medical interface. It is but an example.)

To enhance the LUIS application, we must consider the intent set we want to add. For this tutorial, we would like to be able to ask about our hospital’s patient population in terms that are appropriate for medical personnel. Specifically, we would like to support utterances like who’s in the ER with diabetes, who has tachycardia, and who’s in room 1234 (the full set used is listed in the Appendix).

Recall that architecturally, our LUIS application is responsible for interpreting these utterances, analyzing them, and returning the appropriate intent and associated entities. The VANTIQ application is responsible for understanding these intents and entities, and executing them to provide the user with the appropriate response.

Language Design

In this case, all of these different phraseologies can be handled with a single intent. The intent will query the hospital population for the conditions and locations outlined above. The entities of interest are the conditions involved and the location queried. In the natural language to be developed, the user can specify either the patient condition, the patient location, neither, or both.

Add Entities

Although somewhat counterintuitive, we recommend defining the entities first. The entities, as outlined above, refer to the condition and location queried.

In order to maintain some structure, we will name our intents and entities using the prefix health. Recall that the intents and entities provided by the system are prefixed with system.. We saw this convention in use in our NatLangTutorial.processUtterance() procedure.

To create an entity, we go back to the LUIS console and select Entities from the LUIS application menu.

LUIS Application Menu

From there, we will see something like this.

LUIS Entities

To create an entity, select Add custom entity. This will present us with an add entity dialogue.

LUIS Add Hospital Room

Our entity will be named health.room_special, and its type is List. Type in the name, and select List for the entity type. When we press Save, we will be presented with a screen on which to provide the list of values. This value list has notions of canonical values and synonyms; which values you put in canonical vs. synonyms depends upon your usage.

List entities specify a set of values that are identified with that entity. If there is a known list of values, this makes understanding the entities when the IntentSpecification is returned much easier. However, it works only when there is a known set of values.

LUIS Add Hospital Room Values

Our second set of entities will also be List entities.

For purposes of this tutorial, we will pretend that there are only 3 medical conditions of interest: diabetes, tachycardia, and hypertension. Thus, we create 3 (+ 1 for the general case) List entities.

To create List entities, select Add custom entity, add the name, and select List as the entity type. For names, we will use health.condition_ as the general prefix for this collection of condition lists, so they are easily recognized as health conditions.

LUIS Add List Entity

When we press Save, we will again be presented with a screen on which to provide the lists. For our case, we will simply add diabetes and some synonyms.

LUIS Diabetes List

Similarly, we will define these entities for other conditions – first with Add custom entity, then filling in the lists as shown below.

LUIS Cardiovascular List

LUIS Hypertension List

We will also add a general case for folks who are just generally sick (meaning that the query is not condition specific).

LUIS Sick List

We will see the differences when we look at the processing required to handle these two types of entities.

With that, our entity list is complete.

Add intents

We now move on to adding our intent. As noted above, we can get by here with a single intent, one that we will call health.patientsByCondition.

To add an intent, we select Intents from the LUIS Application Menu. This will present us with something like the following.

LUIS Intents custom intents

To add an intent, select Add Intent. This will present us with a simple dialogue; provide the intent name health.patientsByCondition, and press Save.

LUIS Add Intent

Once we save this intent, we will be placed in the intent where we can begin to add utterances.

Thus far, we have defined the structure of our intent set with the entity and intent definitions. Adding the utterances is where this structure is used to interpret the language. As we add utterances, LUIS will attempt to categorize things and offer its interpretation, as shown in the following images. In these windows, we can correct that interpretation as required.

Using the set of utterances in Appendix Utterances, add these utterances now. Our first example is a simple one. We will add the utterance who is sick.

LUIS Add Utterances

Once added, we see it interpreted.

LUIS Add Utterance Done

(The interpretation of is here as a system.comparator_eq is due to that List entity in the VANTIQ natural language subset. Here, it will cause no problems so it can be ignored.)

For each utterance added, check that LUIS assigned the correct entity. Entities here are shown in square brackets, preceded with a dollar sign (e.g. [$health.condition_diabetes]).

In the next image, we add the utterance who’s in the ER with diabetes.

LUIS ER Diabetes

In the result, we can see that er was assigned to the health.room_special entity, and diabetes was assigned to health.condition_diabetes. List entities handle this work for us.

LUIS ER Diabetes Done

Now, repeat this process with other utterances from the appendix, some examples of which are shown below.

Utterances with only a location – who’s in the ER.

LUIS Add Simple ER

And the result.

LUIS Add Simple ER Result

Another with only a location – who’s in icu.

LUIS Add Simple ICU

Again, completely interpreted.

LUIS Add Simple ICU Result

Now, try only a health condition – who has tachycardia.

LUIS Add Simple Tachycardia

Resulting in the following.

LUIS Add Simple Tachycardia Result

Now, we will try a location that is a simple room number, not a “special hospital room.”

LUIS Add Room Number

When that is added, the 1234 is turned into a builtin entity in LUIS, specifically builtin.number.

LUIS Add Room Number Result

This appears in the LUIS console as [$number], but will appear in our entity list, when presented to our VANTIQ system, as builtin.number. We will see more about this when we look at Entity Processing.

Continue adding the utterances in the Appendix. When this is complete, we will have all 32 utterances entered. Press the Save button to associate these utterances with the intent. We should then see something like the following. (Note that the order of the utterances is not important.)

LUIS partial utterances

When testing our LUIS application reveals weaknesses, we generally add more utterances to more fully train it.

In building this tutorial, there were 32 utterances defined before testing was reasonably successful.

Train and Publish

Once we have our language defined, we train and publish as shown after our original import.

As part of training, we test that our new utterances are processed as expected. For example, who’s in ICU with diabetes

LUIS Test ICU

Here, we see that our Top Scoring Intent is correct (health.patientsByCondition), and the entities are properly identified.

In practice, we will end up back here often as we perfect our LUIS application.

We should note that the set of utterances provided is not the complete set understood. LUIS uses these to construct its understanding of the intent and will interpret other utterances as it understands them. For example, our collection of utterances includes who’s in the ER but does not include who’s in the ER with hypertension. LUIS will make that generalization. Sometimes this is good, sometimes not. This is why we must continue to test our LUIS application and refine its understanding.

Once training, testing, and publishing is done, our LUIS application is complete. We can now move on to extending our VANTIQ application to understand and use these new entities and intents.

Add Sample Data

To run our health queries, we need some test data. We produce it with a simple procedure.

The Patients List

For this simple example, we have a very simple data structure. (To create a type by hand in an existing Modelo project, use the Add button to select Type…, and, from there, use New Type; here, the procedure below creates the type for us.) We will define a single type, Patients, that contains the following information: the patient’s name, room, age, blood pressure (bpSystolic and bpDiastolic), heart rate (heartRate), and glucose level (glucose).

The following procedure creates that type and adds some sample data.

PROCEDURE NatLangTutorial.addPatients()

// First, let's see if the type already exists.  If so, we're done.
// Note here that we do not validate that this is the type for this tutorial.
// So if there's already a type with that name but different properties,
// confusion may result.

var ptType = SELECT ONE name from types where name == "Patients"
var result = "notCreated"

if (ptType == null ) {
    log.info("Creating type patients for Natural Language Tutorial")
    CREATE TYPE Patients (  name String,
                            room String,
                            age Integer,
                            bpSystolic Integer,
                            bpDiastolic Integer,
                            heartRate Integer,
                            glucose Integer)
        INDEX UNIQUE name
        WITH naturalKey = ["name"]

    result = "created"
}

var patientSet = []
patientSet.push({name: "fred", room: "ER", age: 50, bpSystolic: 120, bpDiastolic: 79, heartRate: 102, glucose: 99})
    // Fred has tachycardia
patientSet.push({name: "wilma", room: "ICU", age: 49, bpSystolic: 142, bpDiastolic: 95, heartRate: 77, glucose: 82})
    // Wilma has hypertension
patientSet.push({name: "barney", room: "102", age: 50, bpSystolic: 150, bpDiastolic: 102, heartRate: 88, glucose: 129})
    // barney has hypertension and diabetes
patientSet.push({name: "betty", room: "admissions", age: 52, bpSystolic: 120, bpDiastolic: 79, heartRate: 80, glucose: 85})
    // betty is healthy (according to these measurements)
patientSet.push({name: "kermit", room: "2204", age: 62, bpSystolic: 150, bpDiastolic: 110, heartRate: 92, glucose: 76})
    // Kermit has hypertension
patientSet.push({name: "cookie monster", room: "ICU", age: 25, bpSystolic: 95, bpDiastolic: 75, heartRate: 88, glucose: 150})
    // cookie monster has diabetes
patientSet.push({name: "animal", room: "ER", age: 43, bpSystolic: 120, bpDiastolic: 79, heartRate: 150, glucose: 99})
    // Animal has tachycardia

for (p in patientSet) {    
    upsert Patients(name = p.name, room = p.room, age = p.age, bpSystolic = p.bpSystolic,
                bpDiastolic = p.bpDiastolic, heartRate = p.heartRate, glucose = p.glucose)
}

return result

This procedure is designed so it can be run repeatedly to reset data if necessary. Please execute this procedure now so that we have a context and some data within which to work.
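To confirm that the data loaded, a quick check such as the following hypothetical helper can be used; it simply counts the rows in the Patients type.

PROCEDURE NatLangTutorial.countPatients()

// Hypothetical helper: count the sample patients loaded by addPatients()
var count = 0
SELECT * FROM Patients as row {
    count += 1
}
return count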

Now that we have some data and a structure, we can look at how to process it in the natural language context.

Extend the VANTIQ Application

From here, extending our VANTIQ application is not too difficult.

Entity Processing

The natural language interpreter returns the intent and the list of entities and their associated values. To understand the context required for intent execution, our VANTIQ application will have to interpret the entities in the context of the intent. See the intent specification section of the Natural Language Guide for more details.
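As an illustration of what our entity-processing procedures will receive, the interpretation for an utterance like who’s in the ER with diabetes carries the intent plus a list of name/value entity pairs, roughly as sketched below. (This is a simplified sketch with placeholder values; the authoritative IntentSpecification format is described in the Natural Language Guide.)

{
    "intent": "health.patientsByCondition",
    "query": "who's in the er with diabetes",
    "entities": [
        { "name": "health.room_special", "value": "er" },
        { "name": "health.condition_diabetes", "value": "diabetes" }
    ]
}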

Extend the Application: Health Conditions

Disclaimer

Any condition or interpretation thereof is not to be interpreted as medical advice, diagnosis, or anything remotely related.

Any condition interpretation is provided solely for the purpose of this tutorial.

In VANTIQ terms, we are going to turn the various health.condition entities into query conditions; that is, we will look at each condition and turn it into a where clause.

We define the procedure determineConditionClause (still in the NatLangTutorial service).

PROCEDURE NatLangTutorial.determineConditionClause(interpretation Object)

var conditionEntity = null
var conditionClause = null
var foundError = null
var result

// First, search the entity list for a condition.  Not present is fine

for (e in interpretation.entities until (conditionEntity != null)) {
    if (e.name.startsWith("health.condition_")) {
        conditionEntity = e.name    // In this case, we don't need the details
    }
}

if (conditionEntity != null) {
    if (conditionEntity == "health.condition_diabetes") {
        conditionClause = {}
        var comparison = {}
        comparison["$gt"] = 99
        conditionClause.glucose = comparison
    } else if (conditionEntity == "health.condition_cardiovascular") {
        conditionClause = {}
        var comparison = {}
        comparison["$gt"] = 100
        conditionClause.heartRate = comparison
    } else if (conditionEntity == "health.condition_hypertension") {
        conditionClause = {}
        var comparisonSys = {}
        // systolic pressure > 139
        comparisonSys["$gt"] = 139
        var comparisonDiastolic = {}
        // diastolic pressure > 89
        comparisonDiastolic["$gt"] = 89
        var systolic = {}
        systolic.bpSystolic = comparisonSys
        var diastolic = {}
        diastolic.bpDiastolic = comparisonDiastolic
        // If either systolic or diastolic are over their limits...
        conditionClause["$or"] = [systolic, diastolic]
    } else if (conditionEntity == "health.condition_general") {
        // In this case, we do nothing -- no condition.  Leaving "if" statement in for documentation
    } else {
        foundError = "Condition " + conditionEntity + " was not recognized\n"
    }
}

result = { error: foundError, clause: conditionClause }

return result

This procedure looks for the various health.condition entities, constructing an appropriate query condition. If no such entity is present, no condition is constructed; this produces the same result as the health.condition_general entity. We just get a list of the patients in our hospital.
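For example, when the health.condition_hypertension entity is found, the clause returned is equivalent to the following document, matching patients whose systolic or diastolic pressure exceeds the stated limit:

{ "$or": [ { "bpSystolic": { "$gt": 139 } }, { "bpDiastolic": { "$gt": 89 } } ] }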

The value of using the List entities here is that as new terms are needed for, say, hypertension, they can be added to the LUIS application. The VANTIQ application remains unaware. Similarly, if this intent set is defined for a different culture (e.g. French, Chinese), those will result in defining the same entities. The underlying natural language mapping is completely different, but the VANTIQ application need not change.

Extend the Application: Hospital Locations

This section involves adding the understanding of hospital locations to our VANTIQ application. We allow these locations to be specified as one of a set of known locations (ER, ICU, admitting office) or by room number.

From the natural language processor, a location may appear in one of two ways: as a health.room_special entity (a named location such as the ER, ICU, or admitting office) or as a builtin.number entity (a room number).

Our procedure is defined as follows.

PROCEDURE NatLangTutorial.determineLocationClause(interpretation Object)

log.debug("determineLocationResults() from entities {}", [interpretation.entities])
var foundError = null
var result

// First, search the entity list for a location.  Not present is fine

var locationClause = null
var locationEntity = null
var locationValue = null
for (e in interpretation.entities until (locationEntity != null)) {
    if (e.name == "health.room_special") {
        locationEntity = e.name
        if (["emergency", "emergency room", "er"].contains(e.value.toLowerCase())) {
            locationValue = "ER"
        } else if (["icu", "intensive care", "critical care", "critical care ward"].contains(e.value.toLowerCase())) {
            locationValue = "ICU"
        } else if (["admissions", "admitting office"].contains(e.value.toLowerCase())) {
            locationValue = "admissions"
        } else {
            foundError = "Unrecognized hospital location: " + e.value
        }
    } else if (e.name == "builtin.number") {
        locationEntity = e.name
        locationValue = e.value
    }

}

if (foundError == null) {
    if (locationEntity != null) {
        // Here, we could have either a room number or a health.room_special.  As it turns out, we treat them the same here.
        locationClause = {}
        locationClause.room = locationValue
    }
}
result = { error: foundError, clause: locationClause }

log.debug("determineLocationClause(): result {}, locationEntity: {}, locationValue{}", [result, locationEntity, locationValue])
return result

Here, if the builtin.number entity is present, we use the value as our query. If, however, the health.room_special entity is present, this procedure must examine the value of the entity, and construct a query condition based on the canonical value of that term.

In some cases this may be valuable; in others, it may be easier to provide a number of List entities as was done for the health.condition cases.

In any event, the query is constructed and returned.
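For instance, given the utterance who’s in icu, the clause returned is equivalent to:

{ "room": "ICU" }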

Procedures for Custom Intents

To handle our new intent, we will add a new procedure that uses the entity evaluations above to provide our results.

PROCEDURE NatLangTutorial.executeCustomIntent(interpretation Object)

// Here, we'll need to determine which intent we got & interpret the entities appropriately.
// For our simple tutorial, we have only one intent, so we'll just deal with it here.

var response = ""
if (interpretation.intent == "health.patientsByCondition") {   
   // To determine our subfunction, we'll need to look at the entities returned.
   //
   // NOTE: These are samples for the tutorial.  THIS MUST NOT BE INTERPRETED AS MEDICAL ADVICE
   // 
   // In particular:
   //   health.condition_general -- list all Patients
   //   health.condition_diabetes -- list Patients with glucose > 99
   //   health.condition_cardiovascular -- list Patients with a heart rate > 100
   //   health.condition_hypertension -- list Patients with bpSystolic > 139 OR bpDiastolic > 89

   var foundError = null
   var conditionResults = NatLangTutorial.determineConditionClause(interpretation)
   var locationResults  = NatLangTutorial.determineLocationClause(interpretation)
   var qryCondition = null

   if (conditionResults.error) {
       foundError = conditionResults.error
   } else if (locationResults.error) {
       foundError = locationResults.error
   }

    if (foundError == null) {
       // Now, we combine the various conditions we may have

       if (locationResults.clause && conditionResults.clause) {
           qryCondition = {}
           qryCondition["$and"] = [locationResults.clause, conditionResults.clause]
       } else if (locationResults.clause) {
           qryCondition = locationResults.clause
       } else if (conditionResults.clause) {
           qryCondition = conditionResults.clause
       }

       var rowCount = 0
       SELECT * FROM Patients as row WHERE qryCondition {
           // the \u2022 character in the format statement below is a bullet character
           var thisRow = 
              format("\u2022 Name: {0}, Age: {1}, Room: {2}, BP: {3}/{4}, Pulse: {5}, Glucose: {6}\n",
                row.name, row.age, row.room,
                row.bpSystolic, row.bpDiastolic, row.heartRate,
                row.glucose)
           response += thisRow
           rowCount += 1
       }
       response += "\nTotal: " + rowCount + " Patients"
    } else {
         // Had some error interpreting the conditions.  Return that
       response = foundError
    }
} else {
    response = "I don't know how to perform \"" + interpretation.query + " (intent: " + 
                            interpretation.intent + ")"
}

interpretation.response = response
return interpretation

This procedure calls the entity interpreters in the previous sections. Once it has the query conditions, it merges them together as appropriate. That is, if the utterance was who’s got diabetes in the ER, then we want a query that looks for a condition of diabetes AND a location of the ER. If the utterance was who’s got diabetes, there is no location specified.
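Concretely, for who’s got diabetes in the ER, the merged query condition handed to the SELECT is equivalent to:

{ "$and": [ { "room": "ER" }, { "glucose": { "$gt": 99 } } ] }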

Once the condition is determined, we run a query over our Patients type, and return the results.

Extend our Utterance Processing

Once we have our custom intent procedure ready, we simply include it in the processUtterance() procedure, replacing the else clause with a call to our executeCustomIntent() procedure. The completed procedure is shown below.

PROCEDURE NatLangTutorial.processUtterance(utterance String, languageService String)

// Let's figure out if we can translate these into actions...
var response = "Default Response Value"

try {
    var interpretation = NaturalLanguageCore.interpretNLQuery(utterance, languageService)
    log.debug("ProcessUtterance() interpretation: {}", [interpretation.stringify(true)])
    if (interpretation.errorMsg != null) {
        // Then, we had some error.  Let's just dump that as the response and move on
        log.debug("ProcessUtterance(): Found Error: {} -- {}", [interpretation.errorMsg, languageService])
        response = interpretation.errorMsg
    } else if (interpretation.response.intent.startsWith("system.")) {
        log.debug("ProcessUtterance():  Attempting interpretation of intent: {}", [interpretation.response.intent])
        var interpretedString = NaturalLanguageCore.executeSystemIntent(interpretation.response)
        response = interpretedString.response
    } else {
      log.debug("ProcessUtterance():  Attempting interpretation of custom intent: {}", [interpretation.response.intent])
        var interpretedString = NatLangTutorial.executeCustomIntent(interpretation.response)
        response = interpretedString.response
    }
}
catch (error) {
    log.error("ProcessUtterance(): Error Encountered: " + stringify(error))
    response = error.message
}

return response

With that procedure altered, both our LUIS application and our VANTIQ application are complete.

Verify, Test, Retrain, Republish, Repeat

Now that our application is complete, we can verify that it works as expected.

We verify our complete application using our external chat channel. Enter some of the expected commands, and verify that the correct results are returned.

For example,

Our Health Application

As language issues (LUIS Application) are found, the solution is, usually, to go back to the LUIS Console and add or amend utterances (see Add Intents or add further terminology to the List entities). Once that is done, train and publish again.

As operational issues are found, return to the VANTIQ system and make the appropriate corrections.

Once these steps are finished, our natural language addition to our application is complete.

Appendix: Utterances

This section lists the utterances used (at the time of this writing) to train our LUIS application – specifically the intent outlined in this tutorial. The basic function is the ability to ask questions about the hospital population in terms of location and condition.

We support the “official terms” as well as slang for conditions (e.g. tachycardia vs. heart trouble) and for locations (e.g. emergency room vs. ER). Similarly, there are terms in both noun and adjectival forms (e.g. diabetes mellitus vs. diabetic). Generally, for conditions, these are covered in the various health.condition_ entities outlined in Add Entities.

Note that there are quite a few utterances here. This may seem like a lot for such a simple case. It is, but this is largely what it takes to make a natural language understanding project (in this case, a LUIS application) successful. These are, generally, data-hungry systems: the more utterances provided, and the more variations of similar phrasings, the more likely the system is to recognize the desired intents and to generalize that understanding to similar phraseologies. Fundamentally, the more examples, the better.

  1. who’s diabetic
  2. who has diabetes
  3. who’s in room 1234
  4. who’s in the er
  5. who’s hypertensive
  6. cardiac patients in icu?
  7. who has hbp
  8. who has cardiovascular issues
  9. who has hypertension
  10. whos got heart trouble
  11. whos hypertensive
  12. whos diabetic
  13. who is diabetic
  14. who is hypertensive
  15. admit hbp er
  16. admits heart condition icu
  17. admit hypertension
  18. admit tachycardia
  19. admit diabetes
  20. admit er
  21. admitted critical care
  22. admitted er
  23. admits hbp
  24. admits er
  25. admits
  26. admitted sick
  27. admit sick
  28. admits with tachycardia
  29. admits with hypertension
  30. patients with diabetes
  31. patients in er
  32. who’s sick