Tuesday, March 13, 2018

InContact Chat


I'll be discussing the basics of getting a chat implementation built on InContact.  InContact is a cloud contact center provider; all contact center functionality is provisioned and operates from their cloud platform.

Chat Model

Below is a diagram of how the various InContact chat configuration objects relate to each other.  The primary object is the Point of Contact (POC).  That object builds a relation between the chat routing scripts and a GUID that is used in the URL to initiate a chat session with an InContact Agent.
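The relation is easy to see in the chat URL itself: the POC GUID (along with your business unit number) is just a query parameter on InContact's hosted chat client endpoint.  A quick sketch of building that URL in Python (the GUID and BU values are placeholders, exactly as in the web client example later in this post):

```python
from urllib.parse import urlencode

def chat_url(poc_guid, bu_id):
    """Build the InContact hosted chat client URL for a given POC GUID."""
    base = 'https://home-c7.incontact.com/inContact/ChatClient/ChatClient.aspx'
    return base + '?' + urlencode({'poc': poc_guid, 'bu': bu_id})

print(chat_url('yourGUID', 'yourBUID'))
```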

Chat Configuration

Below are screen shots of some very basic configuration of the objects mentioned above.

Below is a screen shot of the basic InContact chat routing script I used for this example.  The functionality is fairly straightforward with the annotations I included.

Chat Flow

Below is a diagram of the interaction flow using the out-of-box chat web client that InContact provides.  The option also exists to write your own web client with InContact's REST APIs.


Below is a very crude web client implementation.  It simply provides a button that will instantiate the InContact client in a separate window.  As mentioned previously, the GUID assigned to your POC relates the web endpoint to your script.
<!DOCTYPE html>
<html>
<body>
<script type="text/javascript">
function popupChat() {
    var url = "https://home-c7.incontact.com/inContact/ChatClient/ChatClient.aspx?poc=yourGUID&bu=yourBUID";
    window.open(url, "ChatWin", "location=no,height=630,menubar=no,status=no,width=410");
}
</script>
<h1>InContact Chat Demo</h1>
<input id="StartChat" type="button" value="Start Chat" onclick="popupChat()">
</body>
</html>


Below are screen shots of the resulting web client and Agent Desktop in a live chat session.

Monday, March 5, 2018

Dual ISP - Bandwidth Reporting


This post is a continuation of the last one on router configuration with two ISPs.  In this one, I'll show how to configure bandwidth reporting with a 3rd-party package - MRTG.  MRTG is a really nice open-source, graphical reporting package that can interrogate router statistics via SNMP.


MRTG setup is fairly simple.  It runs under an HTTP server (Apache) and has a single config file - mrtg.cfg.  Configuration consists of setting a few options for each of the interfaces you want monitored over SNMP.

HtmlDir: /var/www/mrtg
ImageDir: /var/www/mrtg
LogDir: /var/www/mrtg
ThreshDir: /var/lib/mrtg
Target[wisp]: \GigabitEthernet0/1:public@<yourRouterIp>
MaxBytes[wisp]: 12500000
Title[wisp]: Traffic Analysis
PageTop[wisp]: <H1>WISP Bandwidth Usage</H1>
Options[wisp]: bits

Target[dsl]: \Dialer1:public@<yourRouterIp>
MaxBytes[dsl]: 12500000
Title[dsl]: Traffic Analysis
PageTop[dsl]: <H1>DSL Bandwidth Usage</H1>
Options[dsl]: bits
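One gotcha worth noting: MaxBytes is expressed in bytes per second, not bits.  The 12500000 value above corresponds to a 100 Mbit/s ceiling.  A quick conversion helper in Python:

```python
def mrtg_max_bytes(mbits_per_sec):
    """Convert a link ceiling in Mbit/s to MRTG's MaxBytes value (bytes/sec)."""
    return mbits_per_sec * 1_000_000 // 8

print(mrtg_max_bytes(100))  # 12500000 - the value used in the config above
```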


MRTG will provide daily, weekly, monthly, and yearly statistics in a nice graphical format.  Below are the screenshots of the graphs for the two interfaces configured above.

Another open-source reporting package, MRTG Traffic Utilization (mrtgtu), provides an easy-to-read aggregation of the bandwidth stats via the MRTG logs.  Below is a screenshot of mrtgtu.

Sunday, March 4, 2018

Cisco Performance Routing - Dual ISP's, Single Router


In this post I'll discuss how to set up dual ISP links in a sample scenario using a single Cisco router with Performance Routing (PfR).  Traditionally, dual links could be set up with Policy-Based Routing (PBR) and IP SLA; an example of that is here.  The combination of those two yields fail-over functionality upon loss of one of the two links, but it does not provide load-balancing across the links.  PfR provides both.


Below is a diagram of the example dual ISP scenario.  One connection is to a DSL provider; the other to a wireless ISP (WISP).  Available bandwidth on the links is grossly imbalanced, by a factor of 8.  A dialer interface (PPPoE) over an ATM interface connects to the DSL ISP.  There is a Gigabit Ethernet connection to the WISP.  Behind the router/firewall are clients on private-range IP addresses.  Two internet-facing web servers are segregated into a DMZ.

Interface Configurations

ISP Link 1 - DSL ISP - Dialer

The PfR-important items are highlighted below.  You need to set an accurate figure for the expected bandwidth on the link and set the load statistics interval to the lowest setting (30 sec).  Also note that both interfaces are designated as 'outside' for NAT.
 interface Dialer1
 bandwidth 8000
 ip address negotiated
 ip access-group fwacl in
 ip mtu 1492
 ip nat outside
 ip inspect outside_outCBAC out
 ip virtual-reassembly in
 encapsulation ppp
 ip tcp adjust-mss 1452
 load-interval 30
 dialer pool 1
 dialer-group 1
 ppp authentication pap callin
 ppp chap refuse
 ppp pap sent-username yourUsername password yourPwd
 no cdp enable

ISP Link 2 - Wireless ISP - GigE

interface GigabitEthernet0/1
 bandwidth 64000
 ip address
 ip access-group fwacl in
 ip nat outside
 ip inspect outside_outCBAC out
 ip virtual-reassembly in
 load-interval 30
 duplex auto
 speed auto
 no cdp enable

Internal Link - GigE

The link to the LAN is configured as NAT inside.
interface GigabitEthernet1/0
 ip address
 ip nat inside
 ip virtual-reassembly in
 load-interval 30

Routing Configuration

Routing for this scenario is very simple: just two static default routes to the next hop on the respective ISPs.
ip route
ip route

NAT Configuration

The item of interest is the 'oer' keyword on the NAT inside statements.  This alleviates a potential issue with unicast reverse-path forwarding.  It's discussed in detail here.

route-map wispnat_routemap permit 1
 match ip address nat_acl
 match interface GigabitEthernet0/1

route-map dslnat_routemap permit 2
 match ip address nat_acl
 match interface Dialer1

ip nat inside source route-map dslnat_routemap interface Dialer1 overload oer
ip nat inside source route-map wispnat_routemap interface GigabitEthernet0/1 overload oer

ip nat inside source static tcp 80 interface Dialer1 80
ip nat inside source static tcp 443 interface Dialer1 443
ip nat inside source static tcp 80 interface GigabitEthernet0/1 80
ip nat inside source static tcp 443 interface GigabitEthernet0/1 443

PfR Configuration

Loopback Interface + Key Chain

These are used for communication between the Master and Border elements of PfR.
interface Loopback0
 ip address

key chain pfr
 key 0
  key-string 7 071F275E450C00

PfR Border Router Config

Really simple config for the border component.
pfr border
 local Loopback0
 master key-chain pfr

PfR Master Router Config

Configuring the Master is easy as well.  In fact, just defining the key-chain and the internal + external interfaces is enough to enable basic load-balancing + fail-over.  PfR will aggregate routes on IP address prefixes and balance traffic across those routes on the two ISP links.  The extra commands below specify to keep the utilization of the two links within 10 percent of each other, use delay as the route learning parameter, evaluate policies every 3 minutes, and make delay the top priority for policy.
pfr master
 max-range-utilization percent 10
 border key-chain pfr
  interface GigabitEthernet1/0 internal
  interface Dialer1 external
  interface GigabitEthernet0/1 external
 periodic 180
 resolve delay priority 1 variance 10

Wednesday, February 28, 2018

AWS Lex Bot & Genesys Chat Integration


This post is the culmination of the posts below on Lex + Genesys chat builds.  In this one, I'll discuss how to build a web client interface that allows integration of the two chat implementations.  The client will start out in a bot session with Lex and then allow for an escalation to a Genesys Agent when the end-user makes the request for an agent.

Genesys Chat build:  http://joeywhelan.blogspot.com/2018/01/genesys-chat-85-installation-notes.html
AWS Lex Chat build: http://joeywhelan.blogspot.com/2018/02/aws-lex-chatbot-programmatic.html
Reverse Proxy to support GMS: http://joeywhelan.blogspot.com/2018/02/nodejs-reverse-proxy.html

Architecture Layer

Below is a diagram depicting the overall architecture.  AWS Lex + Lambda is utilized for chat bot functionality; Genesys for human chat interactions.  A reverse proxy is used to provide access to Genesys Mobility Services (GMS).  GMS is a web app server allowing programmatic access to the Genesys agent routing framework.

Transport Layer

Secure transport is used throughout the architecture.  HTTPS is used for SDK calls to Lex.  The web client code itself is served up via an HTTPS server.  Communications between the web client and GMS are proxied and then tunneled through WSS via Cometd to provide support for asynchronous communications between the web client and Genesys agent.

Application Layer

I used the 'vanilla demo' included with the Cometd distro to build the web interface.  All the functionality of interest is contained in the chat.js file.  Integration with Lex is via the AWS Lex SDK.  Integration with Genesys is via publish/subscribe across Cometd to the GMS server.  GMS supports Cometd natively for asynchronous communications.

Application Flow

Below are the steps for an example scenario:  the user starts a chat session with a Lex bot, attempts to complete an interaction with Lex, encounters difficulties and asks for a human agent, and finally the chat session is transferred to an agent along with the chat transcript.

Step 1 Code Snippets

        // Initialize the Amazon Cognito credentials provider
        AWS.config.region = 'us-east-1'; // Region
        AWS.config.credentials = new AWS.CognitoIdentityCredentials({
            IdentityPoolId: 'us-east-1:yourId',
        });
        var _lexruntime = new AWS.LexRuntime();

        function _lexSend(text) {
            console.log('sending text to lex');
            var fromUser = _firstName + _lastName + ':';
            _displayText(fromUser, text);
            var params = {
                botAlias: '$LATEST',
                botName: 'OrderFirewoodBot',
                inputText: text,
                userId: _firstName + _lastName
            };
            _lexruntime.postText(params, _lexReceive);
        }
The Javascript above sets up the AWS SDK.  An AWS Cognito pool must be created with an identity that has permissions for the Lex postText call.

Step 2 Code Snippets

RequestAgent Intent

An intent to capture the request for an agent needs to be added to the Lex Bot.  Below is a JSON-formatted intent object that can be programmatically built in Lex.
{
    "name": "RequestAgent",
    "description": "Intent for transfer to agent",
    "slots": [],
    "sampleUtterances": [
       "Please transfer me to an agent",
       "Transfer me to an agent",
       "Transfer to agent"
    ],
    "confirmationPrompt": {
        "maxAttempts": 2,
        "messages": [
            {
                "content": "Would you like to be transferred to an agent?",
                "contentType": "PlainText"
            }
        ]
    },
    "rejectionStatement": {
        "messages": [
            {
                "content": "OK, no transfer.",
                "contentType": "PlainText"
            }
        ]
    },
    "fulfillmentActivity": {
        "type": "CodeHook",
        "codeHook": {
            "uri": "arn:aws:lambda:us-east-1:yourId:function:firewoodLambda",
            "messageVersion": "1.0"
        }
    }
}

Lambda Codehook

Python code below was added to the codehook described in my previous Lex post.  It adds an attribute/flag to the session attributes that can be interrogated on the client side to determine if a transfer to agent has been requested.
    def __agentTransfer(self):
        if self.source == 'FulfillmentCodeHook':
            if self.sessionAttributes:
                self.sessionAttributes['Agent'] = 'True'
            else:
                self.sessionAttributes = {'Agent': 'True'}
            msg = 'Transferring you to an agent now.'
            resp = {
                    'sessionAttributes': self.sessionAttributes,
                    'dialogAction': {
                                        'type': 'Close',
                                        'fulfillmentState': 'Fulfilled',
                                        'message': {
                                            'contentType': 'PlainText',
                                            'content': msg
                                        }
                                    }
            }
            return resp

Step 3 Code Snippets

Receiving the Lex response with the agent request

The function below interrogates the session attributes returned by Lex and then sets up the agent transfer, if necessary.
        function _lexReceive(err, data) {
            console.log('receiving lex message');
            if (err) {
                console.log(err, err.stack);
            }
            if (data) {
                console.log('message: ' + data.message);
                var sessionAttributes = data.sessionAttributes;
                _displayText('Bot:', data.message);
                if (data.sessionAttributes && 'Agent' in data.sessionAttributes) {
                    _mode = 'genesys';
                }
            }
        }

Genesys connection.

Genesys-side configuration is necessary to set up the hook between the GMS API calls and the Genesys routing framework.  'Enable-notification-mode' must be set to true to allow Cometd connections to GMS.  A service/endpoint must be created that corresponds to an endpoint definition in the Genesys Chat Server configuration.  That chat endpoint is a pointer to a Genesys routing strategy.

If a Cometd connection doesn't already exist, create one and perform the handshake to determine connection type.  Websocket is the preferred method, but if that fails, Cometd will fall back to a polling-type async connection.  The request to connect to Genesys is then sent across that Cometd (websocket) connection via the publish command.

        var _genesysChannel = '/service/chatV2/v2Test';

        function _metaHandshake(message) {
            console.log('cometd handshake msg: ' + JSON.stringify(message, null, 4));
            if (message.successful === true) {
                _genesysReqChat();  // handshake complete - request a chat session
            }
        }

        function _genesysReqChat() {
            var reqChat = {
                'operation' : 'requestChat',
                'nickname' : _firstName + _lastName
            };
            _cometd.batch(function() {
                _genesysSubscription = _cometd.subscribe(_genesysChannel, _genesysReceive);
                _cometd.publish(_genesysChannel, reqChat);
            });
        }

        function _genesysConnect() {
            console.log('connecting to genesys');
            if (!_connected) {
                _cometd.configure({
                    url: 'https://' + location.host + '/genesys/cometd',
                    logLevel: 'debug'
                });
                _cometd.addListener('/meta/handshake', _metaHandshake);
                _cometd.addListener('/meta/connect', _metaConnect);
                _cometd.addListener('/meta/disconnect', _metaDisconnect);
                _cometd.handshake();
            }
            else {
                _genesysReqChat();
            }
        }

Step 4 Code Snippets

In the previous step, the web client subscribed to a Cometd channel corresponding to a Genesys chat end point.  When the message arrives that this client is 'joined', publish the existing chat transcript (between the user and Lex) to that Cometd channel.
    function _getTranscript() {
        var chat = _id('chat');
        var text;
        if (chat.hasChildNodes()) {
            text = '***Transcript Start***' + '\n';
            var nodes = chat.childNodes;
            for (var i = 0; i < nodes.length; i++) {
                text += nodes[i].textContent + '\n';
            }
            text += '***Transcript End***';
        }
        return text;
    }

    function _genesysReceive(res) {
        console.log('receiving genesys message: ' + JSON.stringify(res, null, 4));
        if (res && res.data && res.data.messages) {
            res.data.messages.forEach(function(message) {
                if (message.index > _genesysIndex) {
                    _genesysIndex = message.index;
                    switch (message.type) {
                        case 'ParticipantJoined':
                            var nickname = _firstName + _lastName;
                            if (!_genesysSecureKey && message.from.nickname === nickname) {
                                _genesysSecureKey = res.data.secureKey;
                                console.log('genesys secure key reset to: ' + _genesysSecureKey);
                                var transcript = _getTranscript();
                                if (transcript) {
                                    _genesysSend(transcript, true);
                                }
                            }
                            break;
                    }
                }
            });
        }
    }

Step 5 Screen Shots

Web Client

Agent Desktop (Genesys Workspace)

Full Source

Tuesday, February 20, 2018

Node.js Reverse Proxy


In this post I'll show how to create a simple reverse proxy server in Node.js.


The scenario here is front-ending an app server (in this case, Genesys Mobility Services (GMS)) with a proxy that only forwards application-specific REST API requests to GMS over HTTPS.  The proxy acts as a general web server as well - also over HTTPS.


var path = require('path');
var fs = require('fs');
var gms = 'https://svr2:3443';

var express = require('express');
var app = express();
var privateKey = fs.readFileSync('./key.pem');
var certificate = fs.readFileSync('./cert.pem');
var credentials = {key: privateKey, cert: certificate};
var https = require('https');
var httpsServer = https.createServer(credentials, app);

var httpProxy = require('http-proxy');
var proxy = httpProxy.createProxyServer({
 secure : false,
 target : gms
});

httpsServer.on('upgrade', function (req, socket, head) {
 proxy.ws(req, socket, head);
});

proxy.on('error', function (err, req, res) {
 try {
  res.writeHead(500, {
   'Content-Type': 'text/plain'
  });
  res.end('Error: ' + err.message);
 } catch(err) {
  console.error(err);
 }
});

app.use(express.static(path.join(__dirname, 'public')));

app.all("/genesys/*", function(req, res) {
 proxy.web(req, res);
});

httpsServer.listen(443);

The first block sets up an HTTPS server with Express; the proxy target is specified in the 'gms' variable.
Next, the proxy itself is created with the http-proxy package.  I'm using a self-signed certificate on Svr 2, so 'secure' is set to false to support that.
The 'upgrade' listener configures the HTTPS server to use the proxy for websockets.
The express.static middleware serves up static content (HTML, CSS, Javascript) from the 'public' directory for general requests to this server.
Finally, any requests that are specifically to the GMS REST API - both HTTPS and WSS traffic - are proxied.

Source:  https://github.com/joeywhelan/Revproxy/

Sunday, February 18, 2018

AWS Lex Chatbot - Programmatic Provisioning


AWS Lex has a full SDK for model building and run-time execution.  In this post, I'll demonstrate use of that SDK in Python: how to create a simple/toy chatbot, integrate with a Lambda validation function, do a real-time test, and finally delete the bot.  Use of the AWS console will not be necessary at all; all provisioning will be done in code.

AWS Lex Architecture

Below is a diagram of the overall architecture.  The AWS Python SDK is used here for provisioning.  Lambda is used for real time validation of data.  In this particular bot, I'm using a 3rd party (SmartyStreets) for validating street addresses.  That consists of a web service call from Lambda itself.

Bot-specific Architecture

Below is a diagram of how Lex bots are constructed.  A Lex bot is a goal-oriented sort of chatbot.  Goals are called 'Intents'.  Items necessary to fulfill an Intent are called 'Slots'.  A bot consists of a bot definition in which Intent definitions are referenced.  Intents can include references to custom Slot Type definitions.  Intents are also where Lambda function calls for validation of slot input and overall fulfillment are specified.
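To make the object model concrete, here's a skeletal (hypothetical, heavily trimmed) set of definitions showing how the references chain together - the bot names its intents, and an intent names its slot types and its Lambda code hook.  Real Lex definitions carry many more fields than shown:

```python
# Hypothetical, trimmed-down definitions - illustration of the reference chain only.
slot_type = {'name': 'FirewoodType',
             'enumerationValues': [{'value': 'split'}, {'value': 'rounds'}]}

intent = {'name': 'OrderFirewood',
          'slots': [{'name': 'type',                # slot referencing the custom type above
                     'slotType': 'FirewoodType',
                     'slotConstraint': 'Required'}],
          'fulfillmentActivity': {'type': 'CodeHook'}}  # Lambda hook is referenced here

bot = {'name': 'OrderFirewoodBot',
       'intents': [{'intentName': 'OrderFirewood',  # bot referencing the intent
                    'intentVersion': '$LATEST'}]}

print(bot['intents'][0]['intentName'], '->', intent['slots'][0]['slotType'])
```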

Bot Provisioning Architecture

Below is an architectural diagram of my particular provisioning application.  The Python application itself is composed of generic AWS SDK function calls.  All of the bot-specific provisioning configuration exists in JSON files.

Lex Provisioning Code

My Lex bot provisioning code consists of a single Python class.  That class gets its configuration info from an external config file.
if __name__ == '__main__':
    bot = AWSBot('awsbot.cfg')
    bot.build()
    bot.test('I want to order 2 cords of split firewood to be delivered at 1 pm on tomorrow to 900 Tamarac Pkwy 80863')
    bot.destroy()
The AWSBot class exposes a simple interface to build, test, and destroy a bot on Lex.

As mentioned, all configuration is driven by a single config file and multiple JSON files.
class AWSBot(object):  
    def __init__(self, config):
        self.bot, self.slots, self.intents, self._lambda, self.permission = self.__loadResources(config)
        self.buildClient = boto3.client('lex-models')
        self.testClient = boto3.client('lex-runtime')
        self.lambdaClient = boto3.client('lambda')

    def __loadResources(self, config):
        cfgParser = configparser.ConfigParser()
        cfgParser.optionxform = str
        cfgParser.read(config)
        filename = cfgParser.get('AWSBot', 'botJsonFile')
        with open(filename, 'r') as file:
            bot = json.load(file)
        slotsDir = cfgParser.get('AWSBot', 'slotsDir')
        slots = []
        for root,_,filenames in os.walk(slotsDir):
            for filename in filenames:
                with open(os.path.join(root,filename), 'r') as file:
                    jobj = json.load(file)
                    logger.debug(json.dumps(jobj, indent=4, sort_keys=True))
                    slots.append(jobj)
        intentsDir = cfgParser.get('AWSBot', 'intentsDir')
        intents = []
        for root,_,filenames in os.walk(intentsDir):
            for filename in filenames:
                with open(os.path.join(root,filename), 'r') as file:
                    jobj = json.load(file)
                    logger.debug(json.dumps(jobj, indent=4, sort_keys=True))
                    intents.append(jobj)
        filename = cfgParser.get('AWSBot', 'lambdaJsonFile')
        dirname = os.path.dirname(filename)
        with open(filename, 'r') as file:
            _lambda = json.load(file)
        with open(os.path.join(dirname,_lambda['Code']['ZipFile']), 'rb') as zipFile:
            zipBytes = zipFile.read()
        _lambda['Code']['ZipFile'] = zipBytes    
        filename = cfgParser.get('AWSBot', 'permissionJsonFile')
        with open(filename, 'r') as file:
            permission = json.load(file)
        return bot, slots, intents, _lambda, permission
The constructor loads dict objects with the Lex config and instantiates the AWS SDK clients.
The __loadResources method first reads a config file that holds directory paths to the JSON files used to provision Lex.
It then loads a dict object with a JSON Lex Bot definition file.
Next, dict objects are loaded with the custom slot type JSON definitions, followed by the Intent JSON definitions.
A dict object is then loaded with the Lambda code hook definition; the bytes of a zip file containing the Python code hook (along with all non-AWS-standard libraries it references) are read in as the function code.
Finally, a dict object is loaded with the attributes necessary to add permission for the Lambda code hook to be called from Lex.

The public build interface consists of calls to private methods to build the various Lex-related objects: Lambda code hook, slot types, intents, and finally the bot itself.
    def build(self):
        self.__buildLambda()
        self.__buildSlotTypes()
        self.__buildIntents()
        self.__buildBot()
Below is the code for the private build methods:
    def __buildLambda(self):
        resp = self.lambdaClient.create_function(**self._lambda)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        resp = self.lambdaClient.add_permission(**self.permission)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))

    def __buildSlotTypes(self):
        for slot in self.slots:
            resp = self.buildClient.put_slot_type(**slot)
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))

    def __buildIntents(self):
        for intent in self.intents:
            resp = self.buildClient.put_intent(**intent)
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))

    def __buildBot(self):
        resp = self.buildClient.put_bot(**self.bot)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        complete = False
        for _ in range(20):
            time.sleep(20)  # check AWS's build progress every 20 sec
            resp = self.buildClient.get_bot(name=self.bot['name'], versionOrAlias='$LATEST')
            if resp['status'] == 'FAILED':
                logger.debug('***Bot Build Failed***')
                logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
                complete = True
                break
            elif resp['status'] == 'READY':
                logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
                complete = True
                break
        if not complete:
            logger.debug('***Bot Build Timed Out***')
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer)) 
__buildLambda calls the AWS Lambda SDK client to create the function with the previously-loaded JSON definition, then adds the permission for the Intent to call that Lambda function.
__buildSlotTypes loops through the slot type JSON definitions and builds each via an AWS SDK call.
__buildIntents does the same thing with the Intents.
__buildBot builds the bot with its JSON definition.  Although this SDK call is synchronous and returns almost immediately, the bot will not be complete upon return from the call; it takes around 1-2 minutes.  The loop checks AWS's progress on the bot build every 20 sec.

After the Bot is complete, the Lex runtime SDK can be used to test it with a sample utterance.
    def test(self, msg):
        params = {
                    'botAlias': '$LATEST',
                    'botName': self.bot['name'],
                    'inputText': msg,
                    'userId': 'fred'
                 }
        resp = self.testClient.post_text(**params)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer)) 
Deleting/cleaning-up the Lex objects on AWS is a simple matter of making the corresponding delete function calls from the SDK.  Similar to the build process, deleting an object takes time on AWS, even though the function may return immediately.  You can mitigate the delays associated with deletion and the corresponding dependency issues by putting artificial delays in the code, such as below.
    def __destroyBot(self):
        try:
            resp = self.buildClient.delete_bot(name=self.bot['name'])
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        except Exception as err:
            logger.error(err)
        time.sleep(5) #artificial delay to allow the operation to be completed on AWS
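The remaining destroy methods follow the same shape.  One design note: the deletes have to run in dependency order (bot first, then intents, then slot types, then the Lambda function), pausing between each so AWS can finish the prior operation.  A generic sketch of that pattern - the method/function names in the usage comment are hypothetical, and the delay value is an arbitrary guess:

```python
import time

def destroy_in_order(deleters, delay=5):
    """Run delete callables in dependency order, pausing between each so the
    previous AWS-side deletion can complete (delay value is an arbitrary guess)."""
    results = []
    for name, fn in deleters:
        try:
            fn()
            results.append((name, 'deleted'))
        except Exception as err:
            results.append((name, 'error: {}'.format(err)))
        time.sleep(delay)
    return results

# Hypothetical usage mirroring the class above:
# destroy_in_order([('bot', delete_bot), ('intents', delete_intents),
#                   ('slot types', delete_slot_types), ('lambda', delete_function)])
```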

Lambda Code Hook

If you require any logic for data validation or fulfillment (and you will for any real bot implementation), there is no choice but to use AWS Lambda for that function.  That Lambda function needs a single entry point where the Lex event (dialog validation and/or fulfillment) is passed.  Below is the entry point for the function I developed.  All the validation logic is contained in a single Python class - LexHandler.
def lambda_handler(event, context):
    handler = LexHandler(event)
    os.environ['TZ'] = 'America/Denver'
    return handler.respond()
I'm not going to post all the code for the handler class as it's pretty straightforward (full source will be on github as well), but here's one snippet of the address validation.  It makes a call to an external web service (SmartyStreets) to perform the validation.
    def __isValidDeliveryStreet(self, deliveryStreet, deliveryZip):  
        if deliveryStreet and deliveryZip:
            credentials = StaticCredentials(AUTH_ID, AUTH_TOKEN)
            client = ClientBuilder(credentials).build_us_street_api_client()
            lookup = Lookup()
            lookup.street = deliveryStreet
            lookup.zipcode = deliveryZip
            try:
                client.send_lookup(lookup)
            except exceptions.SmartyException:
                return False
            if lookup.result:
                return True
            else:
                return False
        else:
            return False

Full source here:  https://github.com/joeywhelan/AWSBot

Thursday, January 25, 2018

Genesys Chat 8.5 Installation Notes


This post will cover some of the highlights of a recent install of the latest/greatest Genesys Chat architecture.  I'm not attempting to recreate the Genesys installation documentation (there's an ample amount of that already), just areas that I noted were troublesome and/or not documented as clearly as I would like.


Below is a diagram of this particular lab environment.  For clarity, the diagram does not depict all the actual processes and inter-connections.  This is a lab/all-in-one-box type deployment.  Genesys Mobility Services (GMS) is used for the API interface to Chat Server, whereas WebAPI was used in the previous 8.1 architecture.


Below is an excerpt of a Genesys license file with the line items necessary for chat highlighted.  Ten seats of chat are enabled.
FEATURE 3GP07263ACAA genesys.d 7.1 1-oct-2018 10 5379D90C1653 \
 vendor_info="v7.1 - Genesys Agent Desktop" NOTICE="Lab" \
FEATURE 3GP08393ACAA genesys.d 8.0 1-oct-2018 10 3EB730F543A4 \
 vendor_info="v8.0 - SIP Server" NOTICE="Lab" SIGN=FAC58E606844
FEATURE 3GP08693ACAA genesys.d 8.0 1-oct-2018 1 02D33156B657 \
 vendor_info="v8.0 - Genesys Chat - Lab" NOTICE="Lab" \
FEATURE ics_multi_media_agent_seat genesys.d 8.0 1-oct-2018 10 \
 63B03A7C3207 NOTICE="Lab" SIGN=1366ECF80B66
FEATURE ics_live_web_channel genesys.d 8.0 1-oct-2018 10 B606C4B7F496 \
 NOTICE="Lab" \
FEATURE DESKTOP_SUPERVISOR genesys.d 7.0 1-oct-2018 1 BE6E85DE569E \
 NOTICE="Lab" \
Additionally, the options below need to be set in Interaction Server to check out licenses from the FlexLM daemon:

Interaction Server

  1. This particular Genesys server needs a Genesys DB Server + Data Access Point (DAP) to integrate with a database (unlike the Config layer, which can utilize the native client - Oracle dbclient - via a DAP alone).
  2. There are a dozen or so SQL scripts included in the install directory.  The two that you need for a fresh install (assuming an Oracle DB) are isdb_oracle.sql and eldb_oracle.sql.
    $ pwd
    $ ls
    eldb_oracle_7.6.1-8.0.1.sql  eldb_oracle.sql          isdb_oracle_7.2-7.5.sql      isdb_oracle_7.6-7.6.1.sql  isdb_oracle.sql
    eldb_oracle_drop.sql         isdb_oracle_7.0-7.1.sql  isdb_oracle_7.5-7.6.sql      isdb_oracle_drop.sql
    eldb_oracle_nvc.sql          isdb_oracle_7.1-7.2.sql  isdb_oracle_7.6.1-8.0.1.sql  isdb_oracle_nvc.sql
  3. Interop with ORS. Below is an excerpt from the current ORS Deployment guide regarding interop with eServices. 

    Starting with ORS 8.1.400.27, you create the Interaction Server Application(s) using only the Interaction Server Application template. There is no need to create an Interaction Server Application using a T- Server Application Template for the second Application object. For backward compatibility, both methods of deployment are supported.

    Based on my experience, that's simply not true.  Not configuring a multimedia switch and corresponding TServer results in the following errors in the ORS log and no routing from ORS:
    10:23:27.752 Std 20010 Configuration error. Class [ConfigDirectory] : Switch is not assinged to the tenant of Interaction Server 'ixnsvr'
    10:23:27.752 Std 23009 ORS WARNING Connection to Interaction Server configured as T-Server required, eServices functionality not enabled.
    Those messages are pretty clear to me:  ORS still demands the legacy configuration.  Solution: create a switching office of type Multimedia, a switch, and associate an Interaction Server with it by using a TServer application template.  Screen shots below of what that looks like:
  4. There's an undocumented health monitoring interface (HTTP/SOAP).  If you set up an HTTP 'health' port and options, you can access it via a browser as depicted below:


Ensure the connection to Chat Server is on its webapi port (http).  GMS will attempt to install its own Cassandra instance, but you can instead specify that it use an existing/external instance.


The deployment guide covers the configurations necessary to get ORS to function with eServices fairly well.  A couple items of note:
  1. Turn on mcr-pull-by-this-node
  2. Turn up full debug on logs if you're troubleshooting.  Setting 'debug' as the log level alone won't get you log messaging down to the SCXML processing level; you need to set the x-server-trace-level option as well.


By default, Workspace will try to log an agent in to all media types.  If you don't have the licensing to support that, you'll get annoying errors on start-up of Workspace.  To eliminate those, turn on role-based security, create a role with privileges corresponding to your licensing (in this case, voice and chat only), and assign the role to the agent(s).  Role-based security is disabled by default; you do a double-negative to turn it on (the option is 'disable', set it to false).

Capacity Rule

By default, agents have no capacity for any eServices-type interactions (chat, email, etc).  If you don't configure a Capacity Rule with chat, for instance, and assign it to an agent, no routing of a chat will occur.

Creating a rule requires deployment of GAX.  Below are some screenshots of a simple rule that allows for 1 voice and 3 chat interactions simultaneously.

Routing - GMS Chat API Version 1

There are two Chat APIs within GMS.  Below are the steps to get a V1 chat interaction routed to an agent.

Develop the Composer Routing Script

Screen shots below of the dev cycle for an extremely simplistic chat routing script.  It simply sends an inbound request to an Agent Group.

File, New, Other

Name the project, Select Route project type, Next, Finish.

Connect to Configuration Server.

Open up the default.workflow and add a single Route Interaction block.  For Targets, choose an Agent Group you've previously created in Administrator.

Open the default.ixnprocess view and add an Interaction Queue object.  Go to the properties of that object and add a View.  Connect the Interaction Queue to the Workflow object.

Left click on the project in Project Explorer, choose Generate All.  Select Deploy Project and Publish data to Configuration Server.  This step will build/validate the code, deploy the resulting WAR file on the Tomcat instance included with Composer, and finally build all the necessary Script objects in Configuration Management.

After completing this step, the project will be on Tomcat and objects below are constructed in Configuration.

Those four objects have linkages to each other.  The 'defaultWorkflow' object has the URI to the actual Composer-generated SCXML on Tomcat.

We use the InteractionQueue object in a Chat endpoint.  Create an Endpoint in Options and add the reference to the Queue as its value.

Using the web GUI to GMS, provision a 'request-chat' service and add the Chat endpoint you just defined to it.

In theory, all the provisioning is complete now.  GMS provides a sample Chat V1 API client, accessible from the main page via the 'Sample' link.

Select the 'Request-Chat' scenario and click the 'Connect' button.

The GMS sample client will then initiate a chat session against the endpoint defined in GMS + Chat Server.  Interaction Server will trigger ORS to fetch the Composer-generated SCXML file on Tomcat.  The SCXML strategy will route the chat to the agent in the defined Agent Group.