Thursday, May 24, 2018

Google Maps API


Summary

In this post I'll show a very simple use case of the Google Maps JavaScript API: displaying a map with markers on a simple web page.

Code

Below is a simple JavaScript array of markers to be added to the map.  This array is stored in a file named markers.js.

'use strict';
/* jshint esversion: 6 */
const sites = [ 
  { lat: 39.016683491757995, lng: -106.31219892539934, label: 'Site 1' },
  { lat: 39.01841939481699, lng: -106.31966973052391, label: 'Site 2' },
  { lat: 38.816564618651974, lng: -106.243267861874, label: 'Site 3' },
  { lat: 38.970910463727286, lng: -106.40428097049143, label: 'Site 4' }
];

Below is simple HTML + JavaScript code to display the map and markers.

<!DOCTYPE html>
<html>
<head>
 <meta charset="UTF-8">
    <style>
      #map {
        height: 800px;
        width: 100%;
       }
    </style>
 <title>Fabulous Sites</title>
 <script type="text/javascript" src="markers.js"></script>
</head>
<body>
 <div id="map"></div>
 <script>
  function markMap() {
    const center = {lat: 38.8, lng: -106.24};
    const map = new google.maps.Map(document.getElementById('map'), {
      zoom: 10,
      center: center
    });
    //Drop a labeled marker on the map for each site in markers.js
    for (let i = 0; i < sites.length; i++) {
      new google.maps.Marker({
        position: {lat: sites[i].lat, lng: sites[i].lng},
        label: sites[i].label,
        map: map
      });
    }
  }
 </script>
 <script async defer
     src="https://maps.googleapis.com/maps/api/js?key=yourkey&callback=markMap">
    </script>
</body>
</html>
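If you'd rather not hard-code the center and zoom, the viewport can be derived from the markers themselves.  Below is a minimal variation of markMap using the standard google.maps.LatLngBounds class:

  //Variation: fit the viewport to the markers instead of hard-coding center/zoom
  function markMap() {
    const map = new google.maps.Map(document.getElementById('map'));
    const bounds = new google.maps.LatLngBounds();
    for (const site of sites) {
      new google.maps.Marker({
        position: {lat: site.lat, lng: site.lng},
        label: site.label,
        map: map
      });
      bounds.extend({lat: site.lat, lng: site.lng});
    }
    map.fitBounds(bounds);  //pans/zooms so all markers are visible
  }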

Results



Google Sentiment Analytics


Summary

In this post, I'll demonstrate the sentiment feature of the Google Natural Language API.  The scenario: a caller's interaction with a contact center agent is recorded, run through Google Speech-to-Text, and the resulting transcript is then run through sentiment analysis.

Implementation

The diagram below depicts the overall implementation.  Calls enter an ACD/recording platform that has API access in and out.  Recordings are then sent into the Google APIs for processing.



Below is a detailed flow of the interactions between the ACD platform and the Google APIs.



Code Snippet

App Server

Below is a simple Node.js server representing the App Server component.
app.post(properties.path, jsonParser, (request, response) => {
 //send a response back to ACD immediately to release script-side resources
 response.status(200).end();
 
 const contactId = request.body.contactId;
 const fileName = request.body.fileName;
 let audio;
 
 logger.info(`contactId: ${contactId} webserver - fileName:${fileName}`);
 admin.get(contactId, fileName) //Fetch the audio file (Base64-encoded) from ACD
 .then((json) => {
  audio = json.file;
  return sentiment.process(contactId, audio);  //Get transcript and sentiment of audio bytes
 })
 .then((json) => { //Upload the audio, transcript, and sentiment to Google Cloud Storage
  return storage.upload(contactId, Buffer.from(audio, 'base64'), json.transcript, JSON.stringify(json.sentiment));
 })
 .then(() => {
  admin.remove(contactId, fileName); //Delete the audio file from ACD
 })
 .catch((err) => {
  logger.error(`contactId:${contactId} webserver - ${err}`);
 });
});

app.listen(properties.listenPort);
logger.info(`webserver - started on port ${properties.listenPort}`);
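
The sentiment module isn't shown above.  Below is a minimal sketch of what sentiment.process() might look like, assuming the @google-cloud/speech and @google-cloud/language Node.js client libraries and 8 kHz LINEAR16 call recordings (both are assumptions):

'use strict';
//sentiment.js - hypothetical sketch; the encoding/sample-rate values are assumptions
const speech = require('@google-cloud/speech');
const language = require('@google-cloud/language');

const speechClient = new speech.SpeechClient();
const languageClient = new language.LanguageServiceClient();

function process(contactId, audio) {
 const request = {
  audio: {content: audio},  //Base64-encoded audio fetched from the ACD
  config: {encoding: 'LINEAR16', sampleRateHertz: 8000, languageCode: 'en-US'}
 };
 return speechClient.recognize(request)
 .then(([response]) => {
  //Concatenate the transcript segments returned by Speech-to-Text
  const transcript = response.results
   .map((result) => result.alternatives[0].transcript)
   .join('\n');
  //Run the transcript through sentiment analysis
  return languageClient.analyzeSentiment({document: {content: transcript, type: 'PLAIN_TEXT'}})
  .then(([result]) => {
   return {transcript: transcript, sentiment: result.documentSentiment};
  });
 });
}

module.exports = {process: process};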

Sunday, April 15, 2018

Voice Interactions on AWS Lex + Google Dialogflow


Summary

In this post I'll discuss the audio capabilities of the bot frameworks in AWS and Google.  They have different approaches currently, though I think that's changing.  AWS Lex is fully capable of processing voice/audio in a single API call.  Google Dialogflow currently has a separation of concerns: it takes three API calls to process a voice input and provide a voice response.  Interestingly enough, execution time on both platforms is roughly the same.

Voice Interaction Flow - AWS Lex

Below is a diagram of what it looks like to process a voice interaction on Lex.  It's really simple.  A single API call (PostContent) can take audio as input and provide an audio bot response.  Lex buries the speech-to-text and text-to-speech details so the developer doesn't have to deal with them.  It's nice.


Code Snippet - AWS Lex

Below is a simple function for submitting audio in and receiving audio out.  The PostContent API call can process text or audio.

 send(userId, request) {
   let params = {
     botAlias: '$LATEST',
     botName: BOT_NAME,
     userId: userId,
     inputStream: request
   };

   //Set content types based on whether the request is text or audio
   switch (typeof request) {
     case 'string':
       params.contentType = 'text/plain; charset=utf-8';
       params.accept = 'text/plain; charset=utf-8';
       break;
     case 'object':
       params.contentType = 'audio/x-l16; sample-rate=16000';
       params.accept = 'audio/mpeg';
       break;
   }

   return new Promise((resolve, reject) => {
     this.runtime.postContent(params, (err, data) => {
       if (err) {
         reject(err);
       }
       else if (data) {
         let response = {'text': data.message};
         switch (typeof request) {
           case 'string':
             response.audio = '';
             break;
           case 'object':
             //Lex returns an audio stream; hand back Base64 for transport
             response.audio = Buffer.from(data.audioStream).toString('base64');
             break;
         }
         resolve(response);
       }
     });
   });
 }
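
Hypothetical usage below - 'bot' is an instance of the wrapper class that holds this.runtime (an AWS.LexRuntime client); the names here are assumptions:

//Text in/text out
bot.send('user-1234', 'I would like to order firewood')
.then((response) => console.log(response.text));

//Audio in/audio out: pass a Buffer of 16 kHz linear PCM; response.audio is Base64 MP3
bot.send('user-1234', pcmBuffer)
.then((response) => console.log(response.text));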

Voice Interaction Flow - Google Dialogflow

Below is a diagram of the current state of affairs with Dialogflow and voice processing.  Each function (speech-to-text, bot, text-to-speech) requires a separate API call.  At least that's the way it is in the V1 Dialogflow API.  From what I can tell, V2 (beta) will allow for audio inputs.


Code Snippet - Google Dialogflow

Coding this up is more complicated than Lex, but nothing cosmic.  I wrote some wrapper functions around JavaScript fetch calls and then cascaded them via Promises as you see below.
 send(request) {
   return new Promise((resolve, reject) => {
     switch (typeof request) {
       case 'string':
         //Text in/text out: a single Dialogflow query
         this._sendText(request)
         .then(text => {
           let response = {};
           response.text = text;
           response.audio = '';
           resolve(response);
         })
         .catch(err => {
           console.error(err.message);
           reject(err);
         });
         break;
       case 'object':
         //Audio in/audio out: speech-to-text, then Dialogflow, then text-to-speech
         let response = {};
         this._stt(request)
         .then((text) => {
           return this._sendText(text);
         })
         .then((text) => {
           response.text = text;
           return this._tts(text);
         })
         .then((audio) => {
           response.audio = audio;
           resolve(response);
         })
         .catch(err => {
           console.error(err.message);
           reject(err);
         });
         break;
     }
   });
 }
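
The _stt, _sendText, and _tts wrappers aren't shown above.  As an example, below is a minimal sketch of _tts against the Cloud Text-to-Speech REST endpoint (the this.apiKey property and the voice settings are assumptions):

 //Hypothetical sketch of _tts(); this.apiKey is an assumption
 _tts(text) {
   const body = {
     input: {text: text},
     voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
     audioConfig: {audioEncoding: 'MP3'}
   };
   return fetch('https://texttospeech.googleapis.com/v1/text:synthesize?key=' + this.apiKey, {
     method: 'POST',
     body: JSON.stringify(body),
     headers: {'Content-Type': 'application/json'}
   })
   .then(response => {
     if (response.ok) {
       return response.json();
     }
     throw new Error('response status: ' + response.status);
   })
   .then(json => json.audioContent);  //Base64-encoded MP3 bytes
 }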

Results

I didn't expect this, but both platforms performed about equally even though multiple calls are necessary on Dialogflow.  For my simple bot example, I saw ~2-second execution times for audio in/out from both Lex and Dialogflow.

Saturday, April 7, 2018

Dialogflow & InContact Chat Integration


Summary

In this post I'll discuss how to integrate a chat session that starts with a bot in Google Dialogflow.  The user is unable to complete the transaction with the bot and requests a human agent for assistance.  The application then connects the user with an agent on InContact's cloud platform.  The bot and web interfaces I built here are crude/non-production quality; the emphasis is on API usage and integration.

This is the third of three posts discussing chat with InContact and Dialogflow.


Architecture

Below is a diagram of the overall architecture for the scenario discussed above.


Application Architecture

The application layer is a simple HTML page with the interface driven by a single JavaScript file - chat.js.  I built wrapper classes for the Dialogflow and InContact REST APIs: dflow.js and incontactchat.js respectively.  The chat.js code invokes API calls via those classes.





Application Flow

The diagram below depicts the steps in this example scenario.  


Steps 1, 2 Code Snippet - dflow.js

This is the main code body of dflow.js.  It sends text (a string) to Dialogflow via a POST to the API and returns a Promise.

 send(text) {
   const body = {
     'contexts': this.contexts,
     'query': text,
     'lang': 'en',
     'sessionId': this.sessionId
   };

   return fetch(this.url, {
     method: 'POST',
     body: JSON.stringify(body),
     headers: {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + this.token},
     cache: 'no-store',
     mode: 'cors'
   })
   .then(response => {
     if (response.ok) {
       return response.json();
     }
     else {
       const msg = 'response status: ' + response.status;
       throw new Error(msg);
     }
   })
   .then(json => {
     //Guard against missing fields before consuming the result
     if (json.result &&
         json.result.contexts &&
         json.result.fulfillment &&
         json.result.fulfillment.speech &&
         json.result.metadata &&
         json.result.metadata.intentName) {
       this.contexts = json.result.contexts;
       return {
         'intent': json.result.metadata.intentName,
         'speech': json.result.fulfillment.speech
       };
     }
     else {
       const msg = 'invalid/missing result value';
       throw new Error(msg);
     }
   })
   .catch(err => {
     console.error(err.message);
     throw err;
   });
 }

Steps 3, 4 Code Snippet - chat.js

function _dflowReceive(resp) {
   console.log('dialogflow response: ' + JSON.stringify(resp));
   displayText('Bot: ' + resp.speech);
   if (resp.intent === AGENT_INTENT) {
      _incontactStart();
   }
}

function _incontactStart() {
   _mode = 'incontact';
   const from = _firstName + ' ' + _lastName;
   _ict = new IncontactChat(INCONTACT_APP, INCONTACT_VENDOR, INCONTACT_KEY, INCONTACT_POC, from);
   _ict.start()
   .then(() => {
      _incontactReceive();
      _incontactSend(_getTranscript(), from);
   })
   .catch((err) => {
      console.error(err.message);
      _errorEnd();
      _incontactEnd();
   });
}
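
The _getTranscript helper isn't shown above.  Below is a hypothetical sketch; it assumes displayText() renders the bot conversation into the 'chat' div, and it hands that text to the agent for context on transfer:

//Hypothetical sketch: scrape the bot conversation so far from the chat element
function _getTranscript() {
   const chat = document.getElementById('chat');
   return 'Bot transcript:\n' + chat.innerText;
}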

Steps 5, 6 Screen-shots




Tuesday, April 3, 2018

Google Dialogflow - Input Validation


Summary

This post concerns the task of validating user-entered input for a Google Dialogflow-driven chat agent.  My particular scenario is quite a simple/crude transactional flow, but I found input validation (slots) to be particularly cumbersome in Dialogflow.  Based on what I've seen in various forums, I'm not alone in that opinion.  Below are my thoughts on one way to handle input validation in Dialogflow.


Architecture

Below is a high-level depiction of the Dialogflow architecture I utilized for my simple agent.  This particular agent is a repeat of something I did with AWS Lex (explanation here).  It's a firewood ordering agent.  The bot prompts for the various items (number of cords, delivery address, etc.) necessary to fulfill an order for firewood.  Really simple.



Below is my interpretation of the agent bot model in Dialogflow.

Validation Steps

For this simple, transactional agent I had various input items (slots) that needed to be provided by the end-user.  To validate those slots, I used two intents per item.  One intent is the main one that gathers the user's input.  That intent uses an input context to restrict access per the transactional flow.  The input to the intent is then sent to a Google Cloud Function (GCF) for validation.  If it's valid, a prompt is sent back to the user for the next input slot.  If it's invalid, the GCF function triggers a follow-up intent to requery for that particular input item.  The user is trapped in that loop until they provide valid input.

Below is a diagram of the overall validation flow.


Below are screenshots of the Intent and requery-Intent for the 'number of cords' input item.  That item must be an integer between 1 and 3 for this simple scenario.



Code

Below is a depiction of the overall app architecture I used here.  All of the input validation happens in a Node.js function on GCF.


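For orientation, the GCF HTTP entry point fronting the validation function might look like the sketch below.  The export name and response handling are assumptions; the body.result shape matches the Dialogflow V1 webhook format.

//Hypothetical GCF entry point; req.body.result carries the Dialogflow V1
//webhook payload (intent metadata, parameters, contexts) consumed by validate()
exports.firewoodWebhook = (req, res) => {
 validate(req.body.result)
 .then((output) => {
  res.setHeader('Content-Type', 'application/json');
  res.status(200).send(output);
 })
 .catch((err) => {
  console.error(err);
  res.status(500).end();
 });
};
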
Validation function (firewoodWebhook.js)

The meaty parts of that function are below:
function validate(data) { 
 console.log('validate: data.intentName - ' + data.metadata.intentName);
 switch (data.metadata.intentName) {
  case '3.0_getNumberCords':
   const cords = data.parameters.numberCords;
   if (cords && cords > 0 && cords < 4) {
    return new Promise((resolve, reject) => {
     const msg = 'We deliver within the 80863 zip code.  What is your street address?';
     const output = JSON.stringify({"speech": msg, "displayText": msg});
     resolve(output);
    });
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerynumbercords", "data":{}}});
     resolve(output);
    });
   }
   break;
  case '4.0_getStreet':
   const street = data.parameters.deliveryStreet;
   if (street) {
    return callStreetApi(street);
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerystreet", "data":{}}});
     resolve(output);
    });
   }
   break;
  case '5.0_getDeliveryTime':
   const dt = new Date(Date.parse(data.parameters.deliveryTime));
   const now = new Date();
   const tomorrow = new Date(now.getFullYear(), now.getMonth(), now.getDate()+1);
   const monthFromNow = new Date(now.getFullYear(), now.getMonth()+1, now.getDate());
   if (dt && dt.getUTCHours() >= 9 && dt.getUTCHours() <= 17 && dt >= tomorrow && dt <= monthFromNow) {
    return new Promise((resolve, reject) => {
     const contexts = data.contexts;
     let context = {};
     for (let i=0; i < contexts.length; i++){
      if (contexts[i].name === 'ordercontext') {
       context = contexts[i];
       break;
      }
     }
     const price = '$' + PRICE_PER_CORD[context.parameters.firewoodType] * context.parameters.numberCords;
     const msg = 'Thanks, your order for ' + context.parameters.numberCords + ' cords of ' + context.parameters.firewoodType + ' firewood ' + 
        'has been placed and will be delivered to ' + context.parameters.deliveryStreet + ' at ' + context.parameters.deliveryTime + '.  ' + 
        'We will need to collect a payment of ' + price + ' upon arrival.';
     const output = JSON.stringify({"speech": msg, "displayText": msg});
     resolve(output);
    });
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerydeliverytime", "data":{}}});
     resolve(output);   
    });
   }
   break;
  default:  //should never get here
   return new Promise((resolve, reject) => {
    const output = JSON.stringify ({"followupEvent" : {"name":"requestagent", "data":{}}});
    resolve(output);  
   });
 }
}
Focusing only on the number of cords validation -
Lines 6-11:  Check if the user input is between 1 and 3 cords.  If so, return a Promise object with the next prompt for input.
Lines 13-17:  Input is invalid.  Return a Promise object with a followupEvent to trigger the requery intent for this input item.
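
The callStreetApi helper referenced in the '4.0_getStreet' case isn't shown.  Below is a hypothetical sketch using the Google Maps Geocoding REST API; GEO_KEY, the node-fetch dependency, and the zip-code check are assumptions:

//Hypothetical sketch of callStreetApi(); validates that the street address
//geocodes into the 80863 delivery zip code
const fetch = require('node-fetch');

function callStreetApi(street) {
 const url = 'https://maps.googleapis.com/maps/api/geocode/json?address=' +
  encodeURIComponent(street + ', 80863') + '&key=' + GEO_KEY;
 return fetch(url)
 .then((response) => response.json())
 .then((json) => {
  const inZip = json.status === 'OK' &&
   json.results[0].address_components.some((c) =>
    c.types.includes('postal_code') && c.long_name === '80863');
  if (inZip) {
   const msg = 'What day and time would you like delivery?';
   return JSON.stringify({"speech": msg, "displayText": msg});
  }
  return JSON.stringify({"followupEvent": {"name": "requerystreet", "data": {}}});
 });
}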

Client-side.  Dialogflow wrapper (dflow.js)

The meaty section of that is below.  This is the 'send' function that submits user input to Dialogflow for analysis and response.
 send(text) {
   const body = {'contexts': this.contexts,
                 'query': text,
                 'lang': 'en',
                 'sessionId': this.sessionId
   };

   return fetch(this.url, {
     method: 'POST',
     body: JSON.stringify(body),
     headers: {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + this.token},
     cache: 'no-store',
     mode: 'cors'
   })
   .then(response => response.json())
   .then(data => {
     console.log(data);
     if (data.status && data.status.code == 200) {
       this.contexts = data.result.contexts;
       return data.result.fulfillment.speech;
     }
     else {
       throw data.status.errorDetails;
     }
   })
   .catch(err => {
     console.error(err);
     return 'We are experiencing technical difficulties.  Please contact an agent.';
   });
 }

Lines 8-29:  The main code consists of a REST API call to Dialogflow with the user input.  On success, the returned Promise resolves with the bot's next prompt.  Otherwise, it resolves with an error message for the user.

Client-side.  User interface.

    function Chat(mode) {
        var _mode = mode;
        var _self = this;
        var _firstName;
        var _lastName;
        var _dflow;

        this.start = function(firstName, lastName) {
            _firstName = firstName;
            _lastName = lastName;
            if (!_firstName || !_lastName) {
                alert('Please enter a first and last name');
                return;
            }

            _dflow = new DFlow("yourid");
            hide(getId('start'));
            show(getId('started'));
            getId('sendButton').disabled = false;
            getId('phrase').focus();
        };

        this.leave = function() {
            switch (_mode) {
                case 'dflow':
                    break;
            }
            getId('chat').innerHTML = '';
            show(getId('start'));
            hide(getId('started'));
            getId('firstName').focus();
        };

        this.send = function() {
            var phrase = getId('phrase');
            var text = phrase.value.trim();
            phrase.value = '';

            if (text && text.length > 0) {
                var fromUser = _firstName + ' ' + _lastName + ':';
                displayText(fromUser, text);

                switch (_mode) {
                    case 'dflow':
                        _dflow.send(text).then(resp => displayText('Bot:', resp));
                        break;
                }
            }
        };
    }
Line 16:  Instantiate the Dialogflow wrapper object with your API token.
Line 45:  Call the 'send' function of the wrapper object, then display the text that the returned Promise resolves with.

Source Code


Tuesday, March 13, 2018

InContact Chat

Summary

I'll be discussing the basics of getting a chat implementation built on InContact.  InContact is a cloud contact center provider.  All contact center functionality is provisioned and operates from their cloud platform.

Chat Model

Below is a diagram of how the various InContact chat configuration objects relate to each other.  The primary object is the Point of Contact (POC).  That object builds a relation between the chat routing scripts and a GUID that is used in the URL to initiate a chat session with an InContact Agent.

Chat Configuration

Below are screen shots of some very basic configuration of the objects mentioned above.



Below is a screen shot of the basic InContact chat routing script I used for this example.  The functionality is fairly straightforward with the annotations I included.


Chat Flow

Below is a diagram of the interaction flow using the out-of-box chat web client that InContact provides.  The option also exists to write your own web client with InContact's REST APIs.

Code

Below is a very crude web client implementation.  It simply provides a button that will instantiate the InContact client in a separate window.  As mentioned previously, the GUID assigned to your POC relates the web endpoint to your script.
<!DOCTYPE html>
<html>
 <head>
  <title>Chat</title>
  <script type="text/javascript">
   function popupChat() {
    //The POC GUID and business unit ID relate this URL to your chat script
    const url = "https://home-c7.incontact.com/inContact/ChatClient/ChatClient.aspx?poc=yourGUID&bu=yourBUID" +
     "&P1=FirstName&P2=LastName&P3=first.last@company.com&P4=555-555-5555";
    window.open(url, "ChatWin", "location=no,height=630,menubar=no,status=no,width=410");
   }
  </script>
 </head>
 <body>
  <h1>InContact Chat Demo</h1>
  <input id="StartChat" type="button" value="Start Chat" onclick="popupChat()">
 </body>
</html>

Execution

Screen-shots of the resulting web client and Agent Desktop in a live chat session.




Monday, March 5, 2018

Dual ISP - Bandwidth Reporting

Summary

This post is a continuation of the last one on router configuration with two ISPs.  In this one, I'll show how to configure bandwidth reporting with a third-party package - MRTG.  MRTG is a really nice open-source, graphical reporting package that can interrogate router statistics via SNMP.

Configuration

MRTG setup is fairly simple.  It runs under an HTTP server (Apache) and has a single config file - mrtg.cfg.  Configuration consists of setting a few options for each of the interfaces that you want monitored over SNMP.

HtmlDir: /var/www/mrtg
ImageDir: /var/www/mrtg
LogDir: /var/www/mrtg
ThreshDir: /var/lib/mrtg
Target[wisp]: \GigabitEthernet0/1:public@<yourRouterIp>
MaxBytes[wisp]: 12500000
Title[wisp]: Traffic Analysis
PageTop[wisp]: <H1>WISP Bandwidth Usage</H1>
Options[wisp]: bits

Target[dsl]: \Dialer1:public@<yourRouterIp>
MaxBytes[dsl]: 12500000
Title[dsl]: Traffic Analysis
PageTop[dsl]: <H1>DSL Bandwidth Usage</H1>
Options[dsl]: bits
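
With the config in place, generate the summary index page and poll the router on a schedule.  The commands below assume a typical Linux install with the config at /etc/mrtg/mrtg.cfg:

# Build the HTML index page for the targets defined in mrtg.cfg
indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg

# Cron entry to poll the router via SNMP every 5 minutes
*/5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg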

Results

MRTG provides daily, weekly, monthly and yearly statistics in a nice graphical format.  Below are screenshots of the graphs for the two interfaces configured above.



Another open-source reporting package called MRTG Traffic Utilization provides an easy-to-read aggregation of the bandwidth stats via the MRTG logs.  Below is a screenshot of mrtgtu.