Sunday, October 14, 2018

Thoughts on SOWs

Summary

This post is a departure from the normal technical content of this blog.  I'm going to share my thoughts on Statement of Work (SOW) development in the domain of IT services.  The content of this post is strictly my opinion, as I'm not an attorney nor do I have any legal training whatsoever.  That opinion is based on ~20 years in Professional Services, actively writing and implementing SOWs for technology services.  In many cases, the lessons described here are written in blood/tears from mistakes I and others have made in the past.

SOW Components

A SOW is a contract, pure and simple.  It's the sort of contract that is the standard for describing services in the technology space.  Below are the typical sections in a SOW.

Header with Parties

This is generally a boilerplate section with a definition of the parties involved in the SOW.  There are typically two:  
  • Service Provider (SP):  This is the organization delivering the services.
  • Customer:  This is the recipient of those professional services.
Lines can be blurred in some instances.  Example:  An SP could be both a provider and recipient in an engagement if they are subcontracting some or all of the services being delivered to a Customer under a SOW.

Scope

The 'Scope' is (or should be, more on that later) the main content of the SOW.  This is where the tasks and deliverables of the included services are described.  This area is the domain of the SP's SOW author and typically gets the most attention from both the SP and Customer.  Rightfully so, as this area dictates what the Customer will actually receive for their money and what the SP is beholden to deliver.

Terms, Conditions, and Assumptions

Typically a section of the SOW is carved out for boilerplate legal language and/or clarifications of what is/is not included in the SOW scope.  This is usually a list of bullets.  The SOW author may put some content in this area, but it is usually the domain of both sides' Legal/Contracting organizations.

Pricing

This section covers just what you'd expect - the cost of the services described in the Scope.  The structure of the content here is dependent on the SOW design pattern being employed.  Those models are described next.

SOW Design Patterns

In my opinion, there are only two SOW models:  Fixed-bid and Time and Materials.  I'll discuss another model later that I consider an anti-pattern.

Fixed-Bid

This model can be employed when both sides (SP, Customer) clearly understand the services to be delivered.  This type of SOW is harder for the author to construct as more work has to be put into describing the scope.  It's also typically more difficult for the SP to execute profitably as the cost in fixed-bid engagements is just that - fixed.  Costs remain static if the corresponding scope also remains static.  The SP charges exactly what the Pricing section dictates - whether it took 1 or 1 million hours to deliver the tasks + deliverables of the Scope.  

All that being said, this is the SOW model most Customers feel most comfortable with.  They have a budget for a project and want assurance that the project will be completed within that budget.  Fixed-bid can provide that assurance - if requirements remain fixed.

Pricing for this model is best structured around billing milestones tied to a tangible activity or deliverable.  Each milestone represents a percentage of the total services cost of the SOW.  Milestones must be well-bounded as to what 'done' means for that milestone as billing is triggered when 'done' is achieved.  

Example milestone:  Acceptance of Design Document.  The 'Design Document' and the concept of 'Acceptance' must both be rigorously described in the Scope section of the SOW such that both parties agree to what completion of that milestone entails.

Time and Materials (T&M)

This model is typically employed when it is not possible to clearly define the scope of the given project.  That can be for a variety of reasons:  scope simply isn't known at the time of SOW development, requirements are fluid, or the Customer simply wants access to resources with certain skill sets (i.e., staff augmentation) without specific deliverables.  This is in contrast to a project-based engagement.

As the name suggests, the Customer pays for exactly the amount of services delivered by the SP - typically, metered by the hour.  The SOW will state the number of hours included in the price and an hourly rate.  A good T&M SOW practice is to include a 'circuit-breaker' clause that states the Customer will be notified when the expended hours have reached some critical mass.  Example:  20% remaining.  The Customer can then decide whether they want to increase funding to extend the SOW.  In any case, the services end when the funded hours are expended regardless of the state of the project.

A really bad practice in T&M is to state that 'deliverables' are included (bad from the SP perspective, but probably considered fabulous from the Customer side).  Those two concepts are antithetical.  A 'deliverable' implies something that's guaranteed to be produced under the SOW.  The duration of a T&M SOW is by definition limited by the funded hours.  Scenario:  A 'deliverable' is incomplete but the funded hours have all been expended.  Now what?  

An anti-pattern variant of T&M is 'Capped T&M'.  This nonsensical model is an attempt at a hybrid of T&M and Fixed-bid:  the hours are capped at a level but the Customer only pays for the actual amount expended.  The real mess comes when the SOW author includes deliverables.  The Customer gets the fabulous deal of guaranteed 'deliverables' but only pays for actual hours expended up to a fixed maximum.  The SP gets the short end of that stick.  They didn't understand their scope well enough for a fixed-bid SOW but now have to deliver at what is, in essence, a fixed cost.  Net, this is an imbalanced situation.

Summary graphic below.


SOW Authors

I'm going to speak to this from the SP perspective as they are typically the party that generates the SOW.  It's their services and the SOW represents their quote for those services.  There are occasions when the Customer will generate the first draft of a SOW but those seem to be less common in my experience.

On the SP side, SOW development is almost always a team effort.  There is a main author of the scope content and then various other parties that provide review and/or auxiliary content.  Those other parties are invariably Contracting/Sourcing, Legal, and the management of the Services arm that will be responsible for implementing the tasks/deliverables of the SOW.

As far as that 'main author' - who should that be?  In my experience, the best results come from individuals who have real-world, hands-on experience in the technology.  These are almost always people who were/are in the Professional Services practice or have independently taken an active role in keeping themselves technically relevant.  It's these subject matter experts (SMEs) that are best suited to follow the sale from the beginning with the SP Sales team.  They hear the Customer explain the requirements first hand and understand any constraints that the Customer has expressed during the sales cycle.

I've seen a number of SPs that do not utilize SMEs for SOW development.  Instead, they'll use weak/generic scope statements or create a separate SOW group altogether that is detached from the Sales process.  Below is a listing of the issues I've seen over and over again with this model:
  • Authors that have no historical context on the engagement.  They weren't involved in the sales cycle so they didn't hear all the conversations with the Customer.  They come in cold to the sale.  As such, it's almost impossible for them to write a coherent fixed-bid SOW.  
  • Authors that have little to no technical or operational background.  They simply can't go to the level of detail necessary to capture the scope because they don't have a firm understanding of those details.
  • Authors that have too much technical background (and no sales background) and produce SOWs that go off the deep-end in technical content.  These SOWs wind up looking like technical design documents instead of a contract that a business leader is going to have to decipher and agree with prior to signing.
In most cases where the SP doesn't use their SMEs for SOW development, that SP is only comfortable with T&M engagements.  That's for good reason, as their risks skyrocket in fixed-bid SOWs developed on incomplete information.

SOW Do's/Don'ts

  • #1 - Create a balanced first draft of the SOW and strive to keep it that way.  By 'balanced', I mean fair to both sides: Customer and SP.  Imbalanced SOWs just lead to protracted negotiations and in some cases - no sale.
  • Do write detailed scope content.
  • Don't write scope content that resembles a technical design document.
  • Don't put deliverables in T&M SOWs.
  • Don't put labor hour breakouts in Fixed-bid SOWs.  That's an unnecessary artifact that potentially leads to confusion and protracted negotiations.  By definition, the fixed-bid engagement will be delivered at a static cost.  The hours to do so are irrelevant.  Labor breakouts in T&M are fine and expected.
  • For Fixed-bid SOWs, do tie billing milestones to tangible results.
  • Avoid putting fixed delivery dates in any SOW.  Sometimes this cannot be avoided due to the Customer's requirements.  In those cases, something akin to a formal project plan is going to have to be developed prior.  That means the Services arm will have to be pulled in for a detailed analysis of the requirements and project plan development.  In most cases, that will be a non-billable exercise as it's prior to the SOW being executed.
  • Don't attempt to write an exclusion for every item you can think of that's out of scope.  By definition, anything that's not explicitly listed as in-scope is out of scope.  The universe of 'out of scope' is infinite.  You can't hope to capture infinity.
  • In the same vein, if the SOW has more content around exclusions than it does around scope definition - that SOW is likely of poor quality.  This is a sign that the author does not understand the engagement or technology. 

Copyright ©1993-2024 Joey E Whelan, All rights reserved.


Thursday, May 24, 2018

Google Maps API


Summary

In this post I'll show a very simple use-case of the Maps API.  I'll display a map with markers on a simple web page.

Code

Below is a simple JavaScript array of markers to be added to the map.  This array is stored in a file named markers.js.

'use strict';
/* jshint esversion: 6 */
const sites = [ 
  { lat: 39.016683491757995, lng: -106.31219892539934, label: 'Site 1' },
  { lat: 39.01841939481699, lng: -106.31966973052391, label: 'Site 2' },
  { lat: 38.816564618651974, lng: -106.243267861874, label: 'Site 3' },
  { lat: 38.970910463727286, lng: -106.40428097049143, label: 'Site 4' }
];

Below is simple HTML + JavaScript code to display the map and markers.

<!DOCTYPE html>
<html>
<head>
 <meta charset="UTF-8">
    <style>
      #map {
        height: 800px;
        width: 100%;
       }
    </style>
 <title>Fabulous Sites</title>
 <script type="text/javascript" src="markers.js"></script>
</head>
<body>
 <div id="map"></div>
 <script>
  function markMap() {
         const center = {lat: 38.8, lng: -106.24};
         const map = new google.maps.Map(document.getElementById('map'), {
           zoom: 10,
           center: center
         });
          for (let i = 0; i < sites.length; i++) {
            new google.maps.Marker({
              position: {lat: sites[i].lat, lng: sites[i].lng},
              label: sites[i].label,
              map: map
            });
          }
  }
 </script>
 <script async defer
     src="https://maps.googleapis.com/maps/api/js?key=yourkey&callback=markMap">
    </script>
</body>
</html>

Results


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Google Sentiment Analytics


Summary

In this post, I'll demonstrate use of a Google Natural Language API - Sentiment.  The scenario:  a caller's interaction with a contact center agent is recorded and then sent through Google Speech-to-Text followed by Sentiment analysis.

Implementation

The diagram below depicts the overall implementation.  Calls enter an ACD/recording platform that has API access in and out.  Recordings are then sent to the Google APIs for processing.



Below is a detailed flow of the interactions between the ACD platform and the Google APIs.



Code Snippet

App Server

Simple Node.js server below representing the App Server component.
app.post(properties.path, jsonParser, (request, response) => {
 //send a response back to ACD immediately to release script-side resources
 response.status(200).end();
 
 const contactId = request.body.contactId;
 const fileName = request.body.fileName;
 let audio;
 
 logger.info(`contactId: ${contactId} webserver - fileName:${fileName}`);
 admin.get(contactId, fileName) //Fetch the audio file (Base64-encoded) from ACD
 .then((json) => {
  audio = json.file;
  return sentiment.process(contactId, audio);  //Get transcript and sentiment of audio bytes
 })
 .then((json) => { //Upload the audio, transcript, and sentiment to Google Cloud Storage
  return storage.upload(contactId, Buffer.from(audio, 'base64'), json.transcript, JSON.stringify(json.sentiment));
 })
 .then(() => {
  admin.remove(contactId, fileName); //Delete the audio file from ACD
 })
 .catch((err) => {
  logger.error(`contactId:${contactId} webserver - ${err}`);
 });
});

app.listen(properties.listenPort);
logger.info(`webserver - started on port ${properties.listenPort}`);
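
The sentiment wrapper called above (sentiment.process) lives in the github repo linked below.  For context, here is a minimal sketch of what that module might look like using the @google-cloud/speech and @google-cloud/language Node.js clients; the LINEAR16/8kHz recording format and the async/await style are assumptions here, not the repo code.

'use strict';
// sentiment.js - sketch only; assumes the @google-cloud/speech and @google-cloud/language packages
const speech = require('@google-cloud/speech');
const language = require('@google-cloud/language');

const speechClient = new speech.SpeechClient();
const languageClient = new language.LanguageServiceClient();

async function process(contactId, audio) {
 // Speech-to-Text: 'audio' arrives Base64-encoded from the ACD
 const [sttResponse] = await speechClient.recognize({
  config: {encoding: 'LINEAR16', sampleRateHertz: 8000, languageCode: 'en-US'},
  audio: {content: audio}
 });
 const transcript = sttResponse.results
  .map(result => result.alternatives[0].transcript)
  .join(' ');

 // Sentiment analysis on the resulting transcript
 const [sentimentResponse] = await languageClient.analyzeSentiment({
  document: {content: transcript, type: 'PLAIN_TEXT'}
 });

 return {transcript: transcript, sentiment: sentimentResponse.documentSentiment};
}

module.exports = {process};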


Source: https://github.com/joeywhelan/sentiment

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, April 15, 2018

Voice Interactions on AWS Lex + Google Dialogflow


Summary

In this post I'll discuss the audio capabilities of the bot frameworks in AWS and Google.  They have different approaches currently, though I think that's changing.  AWS Lex is fully capable of processing voice/audio in a single API call.  Google Dialogflow currently has a separation of concerns:  it takes three API calls to process a voice input and provide a voice response.  Interestingly enough, execution time on both platforms is roughly the same.

Voice Interaction Flow - AWS Lex

Diagram below of what things look like on Lex to process a voice interaction.  It's really simple.  A single API call (PostContent) can take audio as input and provide an audio bot response.  Lex buries the speech-to-text and text-to-speech details such that the developer doesn't have to deal with them.  It's nice.


Code Snippet - AWS Lex

Simple function for submitting audio in and receiving audio out below.  The PostContent API call can process text or audio.

 send(userId, request) {
  let params = {
          botAlias: '$LATEST',
    botName: BOT_NAME,
    userId: userId,
    inputStream: request
  };
  
  switch (typeof request) {
   case 'string':
    params.contentType = 'text/plain; charset=utf-8';
    params.accept = 'text/plain; charset=utf-8';
    break;   
   case 'object':
    params.contentType = 'audio/x-l16; sample-rate=16000';
    params.accept = 'audio/mpeg';
    break;
  }
  return new Promise((resolve, reject) => {
   this.runtime.postContent(params, (err, data) => {
    if (err) {
     reject(err);
    }
    else if (data) {
     let response = {'text' : data.message};
     switch (typeof request) {
      case 'string':
       response.audio = '';
       break;
      case 'object':
       response.audio = Buffer.from(data.audioStream).toString('base64');
       break;
     }
     resolve(response);
    }
   });
  });
 }
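
For reference, a hypothetical call to that function might look like the following; the 'lexBot' instance name, the 'audioBuffer' variable, and where the decoded audio goes are all made up for illustration.

 // Hypothetical usage of the send() wrapper above
 lexBot.send('user123', 'I want to order firewood')     // text in, text out
  .then(resp => console.log(resp.text));

 lexBot.send('user123', audioBuffer)                    // raw L16 audio Buffer in, MP3 audio out
  .then(resp => {
   const mp3 = Buffer.from(resp.audio, 'base64');       // decode the Base64 audio response
   // hand 'mp3' off to whatever is playing audio on your side
  });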

Voice Interaction Flow - Google Dialogflow

Diagram of what the current state of affairs looks like with Dialogflow and voice processing.  Each function (speech-to-text, bot, text-to-speech) requires a separate API call.  At least that's the way it is in the V1 Dialogflow API.  From what I can tell, V2 (beta) will allow for audio inputs.


Code Snippet - Google Dialogflow

Coding this up is more complicated than Lex, but nothing cosmic.  I wrote some wrapper functions around Javascript Fetch commands and then cascaded them via Promises as you see below.
 send(request) {
  return new Promise((resolve, reject) => {
   switch (typeof request) {
    case 'string':
     this._sendText(request)
     .then(text => {
      let response = {};
      response.text = text;
      response.audio = '';
      resolve(response);
     })
     .catch(err => { 
      console.error(err.message);
      reject(err);
     });  
     break;
    case 'object':
     let response = {};
     this._stt(request)
     .then((text) => {
      return this._sendText(text);
     })
     .then((text) => {
      response.text = text;
      return this._tts(text);
     })
     .then((audio) => {
      response.audio = audio;
      resolve(response);
     })
     .catch(err => { 
      console.error(err.message);
      reject(err);
     });  
   }
  });
 }
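
The _stt and _tts wrappers called above aren't shown here.  As a rough sketch (not the actual implementation), they might look like the following against the Cloud Speech and Cloud Text-to-Speech v1 REST endpoints; the API-key query parameter, the this.sttUrl/this.ttsUrl members, and the audio encodings are all assumptions.

 // Sketch only.  Assumed endpoint members, e.g.:
 //  this.sttUrl = 'https://speech.googleapis.com/v1/speech:recognize?key=yourkey';
 //  this.ttsUrl = 'https://texttospeech.googleapis.com/v1/text:synthesize?key=yourkey';
 _stt(audio) {   // audio: Base64-encoded LINEAR16 @ 16kHz (assumed format)
  const body = {
   config: {encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US'},
   audio: {content: audio}
  };
  return fetch(this.sttUrl, {
   method: 'POST',
   body: JSON.stringify(body),
   headers: {'Content-Type': 'application/json'}
  })
  .then(response => response.json())
  .then(data => data.results[0].alternatives[0].transcript);
 }

 _tts(text) {
  const body = {
   input: {text: text},
   voice: {languageCode: 'en-US', ssmlGender: 'FEMALE'},
   audioConfig: {audioEncoding: 'MP3'}
  };
  return fetch(this.ttsUrl, {
   method: 'POST',
   body: JSON.stringify(body),
   headers: {'Content-Type': 'application/json'}
  })
  .then(response => response.json())
  .then(data => data.audioContent);   // Base64-encoded MP3
 }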

Results

I didn't expect this, but both platforms performed fairly equally even though multiple calls are necessary on Dialogflow.  For my simple bot example, I saw ~ 2 second execution times for audio in/out from both Lex and Dialogflow.  

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, April 7, 2018

Dialogflow & InContact Chat Integration


Summary

In this post I'll discuss how to integrate a chat session that starts with a bot in Google Dialogflow.  When the user isn't able to complete the transaction with the bot, they can request a human agent for assistance.  The application then connects the user with an agent on InContact's cloud platform.  The bot and web interfaces I built here are crude/non-production quality; the emphasis is on API usage and integration.

This is the third post of three discussing chat with InContact and Dialogflow.


Architecture

Below is a diagram of the overall architecture for the scenario discussed above.


Application Architecture

The application layer is a simple HTML page with the interface driven by a single Javascript file - chat.js.  I built wrapper classes for the Dialogflow and InContact REST APIs:  dflow.js and incontactchat.js, respectively.  The chat.js code invokes API calls via those classes, roughly as sketched below.
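
The sketch below is only an illustration of that layering - the instantiation arguments, the InContact wrapper's method names, and the escalation check are assumptions, not the actual chat.js source.

// Sketch only - 'DFlow' and 'InContactChat' are the wrapper classes described above
const dflow = new DFlow('yourToken');
const incontact = new InContactChat('yourPocGuid');
let mode = 'bot';                        // flips to 'agent' on escalation

function send(text) {
 displayText('Me:', text);               // assumed UI helper
 if (mode === 'bot') {
  dflow.send(text).then(resp => {
   displayText('Bot:', resp);
   if (agentRequested(resp)) {           // assumed helper that detects the user's request for an agent
    mode = 'agent';
    incontact.start(getTranscript());    // assumed: open the InContact session, passing the transcript
   }
  });
 }
 else {
  incontact.send(text);                  // assumed: post the text to the InContact session
 }
}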





Application Flow

The diagram below depicts the steps in this example scenario.  




Steps 5, 6 Screen-shots



Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Tuesday, April 3, 2018

Google Dialogflow - Input Validation


Summary

This post concerns the task of validating user-entered input for a Google Dialogflow-driven chat agent.  My particular scenario is quite a simple/crude transactional flow, but I found input validation (slots) to be particularly cumbersome in Dialogflow.  Based on what I've seen in various forums, I'm not alone in that opinion.  Below are my thoughts on one way to handle input validation in Dialogflow.


Architecture

Below is a high-level depiction of the Dialogflow architecture I utilized for my simple agent.  This particular agent is a repeat of something I did with AWS Lex (explanation here).  It's a firewood ordering agent.  The bot prompts for the various items (number of cords, delivery address, etc.) necessary to fulfill an order for firewood.  Really simple.



Below is my interpretation of the agent bot model in Dialogflow.

Validation Steps

For this simple, transactional agent I had various input items (slots) that needed to be provided by the end-user.  To validate those slots, I used two intents per item.  One intent was the main one that gathers the user's input.  That intent uses an input context to restrict access per the transactional flow.  The input to the intent is then sent to a Google Cloud Function (GCF) for validation.  If it's valid, a prompt is sent back to the user for the next input slot.  If it's invalid, the GCF function triggers a follow-up intent to requery for that particular input item.  The user is trapped in that loop until they provide valid input.

Below is a diagram of the overall validation flow.


Below are screenshots of the Intent and requery-Intent for the 'number of cords' input item.  That item must be an integer between 1 and 3 for this simple scenario.



Code

Below is a depiction of the overall app architecture I used here.  All of the input validation is happening in a node.js function on GCF.


Validation function (firewoodWebhook.js)

The meaty parts of that function below:
function validate(data) { 
 console.log('validate: data.intentName - ' + data.metadata.intentName);
 switch (data.metadata.intentName) {
  case '3.0_getNumberCords':
   const cords = data.parameters.numberCords;
   if (cords && cords > 0 && cords < 4) {
    return new Promise((resolve, reject) => {
     const msg = 'We deliver within the 80863 zip code.  What is your street address?';
     const output = JSON.stringify({"speech": msg, "displayText": msg});
     resolve(output);
    });
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerynumbercords", "data":{}}});
     resolve(output);
    });
   }
   break;
  case '4.0_getStreet':
   const street = data.parameters.deliveryStreet;
   if (street) {
    return callStreetApi(street);
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerystreet", "data":{}}});
     resolve(output);
    });
   }
   break;
  case '5.0_getDeliveryTime':
   const dt = new Date(Date.parse(data.parameters.deliveryTime));
   const now = new Date();
   const tomorrow = new Date(now.getFullYear(), now.getMonth(), now.getDate()+1);
   const monthFromNow = new Date(now.getFullYear(), now.getMonth()+1, now.getDate());
   if (dt && dt.getUTCHours() >= 9 && dt.getUTCHours() <= 17 && dt >= tomorrow && dt <= monthFromNow) {
    return new Promise((resolve, reject) => {
     const contexts = data.contexts;
     let context = {};
     for (let i=0; i < contexts.length; i++){
      if (contexts[i].name === 'ordercontext') {
       context = contexts[i];
       break;
      }
     }
     const price = '$' + PRICE_PER_CORD[context.parameters.firewoodType] * context.parameters.numberCords;
     const msg = 'Thanks, your order for ' + context.parameters.numberCords + ' cords of ' + context.parameters.firewoodType + ' firewood ' + 
        'has been placed and will be delivered to ' + context.parameters.deliveryStreet + ' at ' + context.parameters.deliveryTime + '.  ' + 
        'We will need to collect a payment of ' + price + ' upon arrival.';
     const output = JSON.stringify({"speech": msg, "displayText": msg});
     resolve(output);
    });
   }
   else {
    return new Promise((resolve, reject) => {
     const output = JSON.stringify ({"followupEvent" : {"name":"requerydeliverytime", "data":{}}});
     resolve(output);   
    });
   }
   break;
  default:  //should never get here
   return new Promise((resolve, reject) => {
    const output = JSON.stringify ({"followupEvent" : {"name":"requestagent", "data":{}}});
    resolve(output);  
   });
 }
}
Focusing only on the number of cords validation -
If the user input is between 1 and 3 cords, the first branch returns a Promise that resolves to the next prompt for input (the street address question).
Otherwise, the input is invalid and the else branch returns a Promise that resolves to a followupEvent ('requerynumbercords') to trigger the requery intent for this input item.

Client-side.  Dialogflow wrapper (dflow.js)

The meaty section of that is below.  This is the 'send' function that submits user input to Dialogflow for analysis and response.
 send(text) {
  const body = {'contexts': this.contexts,
      'query': text,
      'lang': 'en',
      'sessionId': this.sessionId
  };
  
  return fetch(this.url, {
   method: 'POST',
   body: JSON.stringify(body),
   headers: {'Content-Type' : 'application/json','Authorization' : 'Bearer ' + this.token},
   cache: 'no-store',
   mode: 'cors'
  })
  .then(response => response.json())
  .then(data => {
   console.log(data);
   if (data.status && data.status.code == 200) {
    this.contexts = data.result.contexts;
    return data.result.fulfillment.speech;
   }
   else {
    throw data.status.errorDetails;
   }
  })
  .catch(err => { 
   console.error(err);
   return 'We are experiencing technical difficulties.  Please contact an agent.';
  }) 
 }

The main code here is a REST API call to Dialogflow with the user input and current contexts.  On a successful (200) response, the contexts are saved off and the bot's next prompt (the fulfillment speech) is returned via the Promise.  Otherwise, a generic error message is returned.

Client-side.  User interface.

    function Chat(mode) {
        var _mode = mode;
     var _self = this;
        var _firstName;
        var _lastName;
        var _dflow; 
   
        this.start = function(firstName, lastName) {
            _firstName = firstName;
            _lastName = lastName;
            if (!_firstName || !_lastName) {
                alert('Please enter a first and last name');
                return;
            }
            
            _dflow = new DFlow("yourid");
            hide(getId('start'));
            show(getId('started'));
            getId('sendButton').disabled = false;
            getId('phrase').focus();
        };

        this.leave = function() {
         switch (_mode) {
          case 'dflow':       
           break;
         }
         getId('chat').innerHTML = '';
         show(getId('start'));
            hide(getId('started'));
            getId('firstName').focus();
        };
                       
        this.send = function() {
            var phrase = getId('phrase');
            var text = phrase.value.trim();
            phrase.value = '';

            if (text && text.length > 0) {
             var fromUser = _firstName + _lastName + ':'; 
             displayText(fromUser, text);
            
             switch (_mode) {
              case 'dflow':
               _dflow.send(text).then(resp => displayText('Bot:', resp));
               break;
             }
            }
        };         
In the start function, the Dialogflow wrapper object (DFlow) is instantiated with your API token.
In the send function, the wrapper's 'send' method is called and the text returned via the Promise is displayed as the bot's response.

Source Code


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Tuesday, March 13, 2018

InContact Chat

Summary

I'll be discussing the basics of getting a chat implementation built on InContact.  InContact is a cloud contact center provider.  All contact center functionality is provisioned and operates from their cloud platform.

Chat Model

Below is a diagram of how the various InContact chat configuration objects relate to each other.  The primary object is the Point of Contact (POC).  That object builds a relation between the chat routing scripts and a GUID that is used in the URL to initiate a chat session with an InContact Agent.

Chat Configuration

Below are screen shots of some very basic configuration of the objects mentioned above.



Below is a screen shot of the basic InContact chat routing script I used for this example.  The functionality is fairly straightforward with the annotations I included.


Chat Flow

Below is a diagram of the interaction flow using the out-of-box chat web client that InContact provides.  The option also exists to write your own web client with InContact's REST APIs.

Code

Below is a very crude web client implementation.  It simply provides a button that will instantiate the InContact client in a separate window.  As mentioned previously, the GUID assigned to your POC relates the web endpoint to your script.
<!DOCTYPE html>
<html>
 <head>
     <title>Chat</title>
  <script type = "text/javascript" >
   function popupChat() {
    url = "https://home-c7.incontact.com/inContact/ChatClient/ChatClient.aspx?poc=yourGUID&bu=yourBUID
&P1=FirstName&P2=LastName&P3=first.last@company.com&P4=555-555-5555";
    window.open(url,"ChatWin","location=no,height=630,menubar=no,status=no,width=410", true);
   }
  </script> 
 </head>
 <body>
  <h1>InContact Chat Demo</h1>
   <input id="StartChat" type="button" value="Start Chat" onclick="popupChat()">
 </body>
</html>

Execution

Screen-shots of the resulting web client and Agent Desktop in a live chat session.




Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Monday, March 5, 2018

Dual ISP - Bandwidth Reporting

Summary

This post is a continuation of the last one on router configuration with two ISPs.  In this one, I'll show how to configure bandwidth reporting with a 3rd-party package - MRTG.  MRTG is a really nice open-source, graphical reporting package that can interrogate router statistics via SNMP.

Configuration

MRTG setup is fairly simple.  It runs under an HTTP server (Apache) and has a single config file - mrtg.cfg.  Configuration consists of setting a few options for each of the interfaces that you want monitored over SNMP.

HtmlDir: /var/www/mrtg
ImageDir: /var/www/mrtg
LogDir: /var/www/mrtg
ThreshDir: /var/lib/mrtg
Target[wisp]: \GigabitEthernet0/1:public@<yourRouterIp>
MaxBytes[wisp]: 12500000
Title[wisp]: Traffic Analysis
PageTop[wisp]: <H1>WISP Bandwidth Usage</H1>
Options[wisp]: bits

Target[dsl]: \Dialer1:public@<yourRouterIp>
MaxBytes[dsl]: 12500000
Title[dsl]: Traffic Analysis
PageTop[dsl]: <H1>DSL Bandwidth Usage</H1>
Options[dsl]: bits

Results

MRTG will provide daily, weekly, monthly, and yearly statistics in a nice graphical format.  Below are screenshots of the graphs for the two interfaces configured above.



Another open-source reporting package called MRTG Traffic Utilization provides an easy-to-read aggregation of the bandwidth stats from the MRTG logs.  Below is a screenshot of mrtgtu.


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, March 4, 2018

Cisco Performance Routing - Dual ISP's, Single Router


Summary

In this post I'll discuss how to set up dual ISP links in a sample scenario using a single Cisco router with Performance Routing (PfR).  Traditionally, dual links could be set up with Policy-Based Routing (PBR) and IP-SLA.  An example of that is here.  The combination of those two would yield fail-over functionality upon loss of one of the two links.  It would not provide load-balancing of those links, though.  PfR provides both.

Scenario

Below is a diagram of the example dual ISP scenario.  One connection is to a DSL provider; the other to a wireless ISP (WISP).  Available bandwidth on the links is grossly imbalanced, by a factor of 8.  A dialer interface (PPPoE) bound to an ATM interface connects the DSL ISP.  There is a Gig Ethernet connection to the WISP.  Behind the router/firewall are clients on private-range IP addresses.  Two internet-facing web servers are segregated into a DMZ.



Interface Configurations

ISP Link 1 - DSL ISP - Dialer

The PfR-important items are highlighted below.  You need to set an accurate figure for the expected bandwidth on the link and set the load statistics interval to the lowest setting (30 sec).  Also note that both interfaces are designated as 'outside' for NAT.
 interface Dialer1
 bandwidth 8000
 ip address negotiated
 ip access-group fwacl in
 ip mtu 1492
 ip nat outside
 ip inspect outside_outCBAC out
 ip virtual-reassembly in
 encapsulation ppp
 ip tcp adjust-mss 1452
 load-interval 30
 dialer pool 1
 dialer-group 1
 ppp authentication pap callin
 ppp chap refuse
 ppp pap sent-username yourUsername password yourPwd
 no cdp enable

ISP Link 2 - Wireless ISP - GigE

interface GigabitEthernet0/1
 bandwidth 64000
 ip address 2.2.2.2 255.255.255.0
 ip access-group fwacl in
 ip nat outside
 ip inspect outside_outCBAC out
 ip virtual-reassembly in
 load-interval 30
 duplex auto
 speed auto
 no cdp enable

Internal Link - GigE

The link to the LAN is configured as NAT inside.
interface GigabitEthernet1/0
 ip address 10.10.10.10 255.255.255.0
 ip nat inside
 ip virtual-reassembly in
 load-interval 30

Routing Configuration


Routing for this scenario is very simple.  Just two static default routes to the next hops on the respective ISPs.
ip route 0.0.0.0 0.0.0.0 1.1.1.1
ip route 0.0.0.0 0.0.0.0 2.2.2.1

NAT Configuration

The item of interest is the 'oer' keyword on the 'ip nat inside' statements.  This alleviates a potential issue with unicast reverse-path forwarding.  It's discussed in detail here.

route-map wispnat_routemap permit 1
 match ip address nat_acl
 match interface GigabitEthernet0/1

route-map dslnat_routemap permit 2
 match ip address nat_acl
 match interface Dialer1

ip nat inside source route-map dslnat_routemap interface Dialer1 overload oer
ip nat inside source route-map wispnat_routemap interface GigabitEthernet0/1 overload oer

ip nat inside source static tcp 192.168.40.60 80 interface Dialer1 80
ip nat inside source static tcp 192.168.40.60 443 interface Dialer1 443
ip nat inside source static tcp 192.168.40.61 80 interface GigabitEthernet0/1 80
ip nat inside source static tcp 192.168.40.61 443 interface GigabitEthernet0/1 443


PfR Configuration

Loopback Interface + Key Chain

These are used for communication between the Master and Border elements of PfR.
interface Loopback0
 ip address 192.168.200.1 255.255.255.0

key chain pfr
 key 0
  key-string 7 071F275E450C00

PfR Border Router Config

Really simple config for the border component.
pfr border
 logging
 local Loopback0
 master 192.168.200.1 key-chain pfr

PfR Master Router Config

Configuring the Master is easy as well.  In fact, just defining the key-chain and the internal+external interfaces is enough to enable basic load-balancing + fail-over.  PfR will aggregate routes on IP address prefixes and balance those routes across the two ISP links.  The extra commands below specify keeping the utilization of the two links within 10 percent of each other, using delay as the route learning parameter, evaluating policies every 3 minutes, and making delay the top priority for policy.
pfr master
 max-range-utilization percent 10
 logging
 !
 border 192.168.200.1 key-chain pfr
  interface GigabitEthernet1/0 internal
  interface Dialer1 external
  interface GigabitEthernet0/1 external
 !
 learn
  delay
 periodic 180
 resolve delay priority 1 variance 10

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Wednesday, February 28, 2018

AWS Lex Bot & Genesys Chat Integration


Summary

This post is the culmination of the posts below on Lex + Genesys chat builds.  In this one, I'll discuss how to build a web client interface that allows integration of the two chat implementations.  The client will start out in a bot session with Lex and then allow for an escalation to a Genesys Agent when the end-user makes the request for an agent.

Genesys Chat build:  http://joeywhelan.blogspot.com/2018/01/genesys-chat-85-installation-notes.html
AWS Lex Chat build: http://joeywhelan.blogspot.com/2018/02/aws-lex-chatbot-programmatic.html
Reverse Proxy to support GMS: http://joeywhelan.blogspot.com/2018/02/nodejs-reverse-proxy.html

Architecture Layer

Below is a diagram depicting the overall architecture.  AWS Lex + Lambda is utilized for chat bot functionality; Genesys for human chat interactions.  A reverse proxy is used to provide access to Genesys Mobility Services (GMS).  GMS is a web app server allowing programmatic access to the Genesys agent routing framework.

Transport Layer

Secure transport is used throughout the architecture.  HTTPS is used for SDK calls to Lex.  The web client code itself is served up via an HTTPS server.  Communications between the web client and GMS are proxied and then tunneled through WSS via Cometd to support asynchronous communications between the web client and the Genesys agent.

Application Layer

I used the 'vanilla demo' included with the Cometd distro to build the web interface.  All the functionality of interest is contained in the chat.js file.  Integration with Lex is via the AWS Lex SDK.  Integration with Genesys is via publish/subscribe across Cometd to the GMS server.  GMS supports Cometd natively for asynchronous communications.


Application Flow

Below are the steps for an example scenario:  the user starts out a chat session with a Lex bot, attempts to complete an interaction with Lex, encounters difficulties and asks for a human agent, and finally the chat session is transferred to an agent along with the chat transcript.


Step 1 Code Snippets

        // Initialize the Amazon Cognito credentials provider
        AWS.config.region = 'us-east-1'; // Region
        AWS.config.credentials = new AWS.CognitoIdentityCredentials({
            IdentityPoolId: 'us-east-1:yourId',
        });
        var _lexruntime = new AWS.LexRuntime();

        function _lexSend(text) {
         console.log('sending text to lex');
            var fromUser = _firstName + _lastName + ':'; 
            _displayText(fromUser, text);
        
            var params = {
              botAlias: '$LATEST',
        botName: 'OrderFirewoodBot',
        inputText: text,
        userId: _firstName + _lastName,
       };
            _lexruntime.postText(params, _lexReceive);
        }
The first block above is the AWS SDK setup (Javascript).  An AWS Cognito identity pool must be created with an identity that has permission for the Lex postText call.

Step 2 Code Snippets

RequestAgent Intent

An intent to capture the request for an agent needs to be added to the Lex Bot.  Below is a JSON-formatted intent object that can be programmatically built in Lex.
{
    "name": "RequestAgent",
    "description": "Intent for transfer to agent",
    "slots": [],
    "sampleUtterances": [
       "Agent",
       "Please transfer me to an agent",
       "Transfer me to an agent",
       "Transfer to agent"
    ],
    "confirmationPrompt": {
        "maxAttempts": 2,
        "messages": [
            {
                "content": "Would you like to be transferred to an agent?",
                "contentType": "PlainText"
            }
        ]
    },
    "rejectionStatement": {
        "messages": [
            {
                "content": "OK, no transfer.",
                "contentType": "PlainText"
            }
        ]
    },
    "fulfillmentActivity": {
        "type": "CodeHook",
        "codeHook": {
         "uri" : "arn:aws:lambda:us-east-1:yourId:function:firewoodLambda",
      "messageVersion" : "1.0"
        }
    }
}

Lambda Codehook

Python code below was added to the codehook described in my previous Lex post.  It adds a session attribute/flag ('Agent') that can be interrogated on the client side to determine if a transfer to an agent has been requested.
    def __agentTransfer(self):
        if self.source == 'FulfillmentCodeHook':
            if self.sessionAttributes:
                self.sessionAttributes['Agent'] = 'True';
            else:
                self.sessionAttributes = {'Agent' : 'True'}
            msg = 'Transferring you to an agent now.'
            resp = {
                    'sessionAttributes': self.sessionAttributes,
                    'dialogAction': {
                                        'type': 'Close',
                                        'fulfillmentState': 'Fulfilled',
                                        'message': {
                                            'contentType': 'PlainText',
                                            'content': msg
                                        }
                                    }
                    }
            return resp

Step 3 Code Snippets

Receiving the Lex response with the agent request

The handler below interrogates the session attributes returned by Lex and then sets up the agent transfer, if necessary.
        function _lexReceive(err, data) {
         console.log('receiving lex message')
         if (err) {
    console.log(err, err.stack);
   }
         
   if (data) {
    console.log('message: ' + data.message);
    var sessionAttributes = data.sessionAttributes;
    _displayText('Bot:', data.message);
    if (data.sessionAttributes && 'Agent' in data.sessionAttributes){
     _mode = 'genesys';
     _genesysConnect(_getTranscript());
    }
   } 
        }

Genesys connection.

Genesys side configuration is necessary to set up the hook between the GMS API calls and the Genesys routing framework. 'Enable-notification-mode' must be set to True to allow Cometd connections to GMS.  A service/endpoint must be created that corresponds to an endpoint definition in the Genesys Chat Server configuration.  That chat end point is a pointer to a Genesys routing strategy.


If a Cometd connection doesn't already exist, create one and perform the handshake to determine connection type.  Websocket is the preferred method, but if that fails - Cometd will fall back to a polling-type async connection.  The request to connect to Genesys is then sent across that Cometd(websocket) connection via the publish command.


        
        var _genesysChannel = '/service/chatV2/v2Test';

        function _metaHandshake(message) {
         console.log('cometd handshake msg: ' + JSON.stringify(message, null, 4));         
         if (message.successful === true) {
          _genesysReqChat();
         }
        }

        function _genesysReqChat() {
         var reqChat = {
           'operation' : 'requestChat',
        'nickname' : _firstName + _lastName
      };
         _cometd.batch(function() { 
       _genesysSubscription = _cometd.subscribe(_genesysChannel, _genesysReceive); 
       _cometd.publish(_genesysChannel, reqChat);
      });
        }
        
        function _genesysConnect() {
         console.log('connecting to genesys');
         if (!_connected) { 
          _cometd.configure({
           url: 'https://' + location.host + '/genesys/cometd',
           logLevel: 'debug'
          });
          _cometd.addListener('/meta/handshake', _metaHandshake);
          _cometd.addListener('/meta/connect', _metaConnect);
          _cometd.addListener('/meta/disconnect', _metaDisconnect);
          _cometd.handshake();
         }
         else {
          _genesysReqChat();
         }
        }

Step 4 Code Snippets

In the previous step, the web client subscribed to a Cometd channel corresponding to a Genesys chat end point.  When the message arrives that this client is 'joined', publish the existing chat transcript (between the user and Lex) to that Cometd channel.
    function _getTranscript(){
     var chat = _id('chat');
     var text;
     if (chat.hasChildNodes()) {
      text = '***Transcript Start***' + '\n';
      var nodes = chat.childNodes;
      for (var i=0; i < nodes.length; i++){
       text += nodes[i].textContent + '\n';
      }
      text += '***Transcript End***';
     }
     return text;
    }
        function _genesysReceive(res) {
         console.log('receiving genesys message: ' + JSON.stringify(res, null, 4));
      if (res && res.data && res.data.messages) {
       res.data.messages.forEach(function(message) {
        if (message.index > _genesysIndex) {
         _genesysIndex = message.index;
         switch (message.type) {
          case 'ParticipantJoined':
           var nickname = _firstName + _lastName;
           if (!_genesysSecureKey && message.from.nickname === nickname){
            _genesysSecureKey = res.data.secureKey;
            console.log('genesys secure key reset to: ' + _genesysSecureKey);
            var transcript = _getTranscript();
            if (transcript){
             _genesysSend(transcript, true);
            }
           }
           break;

Step 5 Screen Shots


Agent Desktop (Genesys Workspace)



Source: https://github.com/joeywhelan/lexgenesys

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Tuesday, February 20, 2018

Node.js Reverse Proxy


Summary

In this post I'll show how to create a simple reverse proxy server in Node.js.

Environmentals

The scenario here is front-ending an app server (in this case Genesys Mobility Services (GMS)) with a proxy that only forwards application-specific REST API requests to GMS over HTTPS.  The proxy acts as a general web server as well - also over HTTPS.

Code

var path = require('path');
var fs = require('fs'); 
var gms = 'https://svr2:3443';

var express = require('express');
var app = express();
var privateKey = fs.readFileSync('./key.pem'); 
var certificate = fs.readFileSync('./cert.pem'); 
var credentials = {key: privateKey, cert: certificate};
var https = require('https');
var httpsServer = https.createServer(credentials, app);

var httpProxy = require('http-proxy');
var proxy = httpProxy.createProxyServer({
 secure : false,
 target : gms
});

httpsServer.on('upgrade', function (req, socket, head) {
   proxy.ws(req, socket, head);
});

proxy.on('error', function (err, req, res) {
 console.log(err);
 try {
  res.writeHead(500, {
   'Content-Type': 'text/plain'
  });
  res.end('Error: ' + err.message);
 } catch(err) {
  console.log(err);
 }
});

app.use(express.static(path.join(__dirname, 'public')));

app.all("/genesys/*", function(req, res) {
 proxy.web(req, res);
});

httpsServer.listen(8443);
The first block sets up an HTTPS server with Express; the proxy target (the GMS URL) is specified in the 'gms' variable.
The http-proxy instance is created with 'secure' set to false because I'm using a self-signed certificate on Svr 2.
The 'upgrade' listener configures the HTTPS server to use the proxy for websocket connections.
The express.static middleware serves up static content (HTML, CSS, Javascript) from the 'public' directory for general requests to this server.
Finally, any request to the '/genesys/' path is proxied to the GMS REST API - both HTTPS and WSS traffic.

Source:  https://github.com/joeywhelan/Revproxy/

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 18, 2018

AWS Lex Chatbot - Programmatic Provisioning


Summary

AWS Lex has a full SDK for model building and runtime execution.  In this post, I'll demonstrate use of that SDK in Python.  I'll demonstrate how to create a simple/toy chatbot, integrate it with a Lambda validation function, do a real-time test, and then finally delete the bot.  Use of the AWS console will not be necessary at all; all provisioning will be done in code.

AWS Lex Architecture

Below is a diagram of the overall architecture.  The AWS Python SDK is used here for provisioning.  Lambda is used for real-time validation of data.  In this particular bot, I'm using a 3rd party (SmartyStreets) for validating street addresses.  That consists of a web service call from Lambda itself.



Bot-specific Architecture

Below is a diagram of how Lex bots are constructed.  A Lex bot is a goal-oriented sort of chatbot.  Goals are called 'Intents'.  Items necessary to fulfill an Intent are called 'Slots'.  A bot consists of a bot definition that references Intent definitions.  Intents can include references to custom Slot Type definitions.  Intents are also where Lambda function calls are configured for validation of slot input and overall fulfillment.

Bot Provisioning Architecture

Below is an architectural diagram of my particular provisioning application.  The Python application itself is composed of generic AWS SDK function calls.  All of the bot-specific provisioning configuration exists in JSON files.


Lex Provisioning Code

My Lex bot provisioning code consists of a single Python class.  That class gets its configuration info from an external config file.
if __name__ == '__main__':
    bot = AWSBot('awsbot.cfg')
    bot.build()
    bot.test('I want to order 2 cords of split firewood to be delivered at 1 pm on tomorrow to 900 Tamarac Pkwy 80863')
    bot.destroy()
The AWSBot class exposes a simple interface to build, test, and destroy a bot on Lex.

As mentioned, all configuration is driven by a single config file and multiple JSON files.
class AWSBot(object):  
    def __init__(self, config):
        logger.debug('Entering')
        self.bot, self.slots, self.intents, self._lambda, self.permission = self.__loadResources(config)
        self.buildClient = boto3.client('lex-models')
        self.testClient = boto3.client('lex-runtime')
        self.lambdaClient = boto3.client('lambda')
        logger.debug('Exiting')  

    def __loadResources(self, config):
        logger.debug('Entering')
        cfgParser = configparser.ConfigParser()
        cfgParser.optionxform = str
        cfgParser.read(config)
        
        filename = cfgParser.get('AWSBot', 'botJsonFile')
        with open(filename, 'r') as file:
            bot = json.load(file)
        
        slotsDir = cfgParser.get('AWSBot', 'slotsDir')
        slots = []
        for root,_,filenames in os.walk(slotsDir):
            for filename in filenames:
                with open(os.path.join(root,filename), 'r') as file:
                    jobj = json.load(file)
                    slots.append(jobj)
                    logger.debug(json.dumps(jobj, indent=4, sort_keys=True))
                     
        intentsDir = cfgParser.get('AWSBot', 'intentsDir')
        intents = []
        for root,_,filenames in os.walk(intentsDir):
            for filename in filenames:
                with open(os.path.join(root,filename), 'r') as file:
                    jobj = json.load(file)
                    intents.append(jobj)
                    logger.debug(json.dumps(jobj, indent=4, sort_keys=True))
        
        filename = cfgParser.get('AWSBot', 'lambdaJsonFile')
        dirname = os.path.dirname(filename)
        with open(filename, 'r') as file:
            _lambda = json.load(file)
        with open(os.path.join(dirname,_lambda['Code']['ZipFile']), 'rb') as zipFile:
            zipBytes = zipFile.read()
        _lambda['Code']['ZipFile'] = zipBytes    
        
        filename = cfgParser.get('AWSBot', 'permissionJsonFile')
        with open(filename, 'r') as file:
            permission = json.load(file)
               
        return bot, slots, intents, _lambda, permission
The constructor loads dict objects with the Lex configuration and instantiates the AWS SDK clients.
The __loadResources method reads a config file that holds directory paths to the JSON files used to provision Lex.
The bot JSON definition file is loaded into a dict.
Each custom slot type JSON definition in the slots directory is loaded into a dict.
The same is done for the Intent JSON definitions.
The Lambda code hook definition is loaded next, along with the bytes of a zip file containing the Python code hook and all non-AWS-standard libraries it references.
Finally, the attributes necessary to add permission for the Lambda code hook to be called from Lex are loaded.

The public build interface consists of calls to private methods to build the various Lex-related objects:  Lambda code hook, slot types, intents, and finally the bot itself.
    def build(self):
        logger.debug('Entering')  
        self.__buildLambda()
        self.__buildSlotTypes()
        self.__buildIntents()
        self.__buildBot()
        logger.debug('Exiting')
Below is the code for the private build methods:
    def __buildLambda(self):
        logger.debug('Entering')
        resp = self.lambdaClient.create_function(**self._lambda)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        resp = self.lambdaClient.add_permission(**self.permission)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        logger.debug('Exiting')
    
    def __buildSlotTypes(self):
        logger.debug('Entering')
        for slot in self.slots:
            resp = self.buildClient.put_slot_type(**slot)
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        logger.debug('Exiting')
        
    def __buildIntents(self):
        logger.debug('Entering')
        for intent in self.intents:
            resp = self.buildClient.put_intent(**intent)
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        logger.debug('Exiting')
        
    def __buildBot(self):
        logger.debug('Entering')
        self.buildClient.put_bot(**self.bot)
        complete = False
        for _ in range(20):
            time.sleep(20)
            resp = self.buildClient.get_bot(name=self.bot['name'], versionOrAlias='$LATEST')
            logger.debug(resp['status'])
            if resp['status'] == 'FAILED':
                logger.debug('***Bot Build Failed***')
                logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
                complete = True
                break
            elif resp['status']  == 'READY':
                logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
                complete = True
                break
                   
        if not complete:
            logger.debug('***Bot Build Timed Out***')
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer)) 
        logger.debug('Exiting')
__buildLambda calls the AWS Lambda SDK client to create the function from the previously-loaded JSON definition, then adds the permission allowing the Intent to call that Lambda function.
__buildSlotTypes loops through the slot JSON definitions and builds each via an AWS SDK call.
__buildIntents does the same for the Intents.
__buildBot builds the bot with its JSON definition.  Although this SDK call is synchronous and returns almost immediately, the bot will not be complete upon return from the call.  It takes around 1-2 minutes.  The for loop here checks AWS's progress on the bot build every 20 seconds.

After the Bot is complete, the Lex runtime SDK can be used to test it with a sample utterance.
def test(self, msg):
        logger.debug('Entering')
        params = {
                    'botAlias': '$LATEST',
                    'botName': self.bot['name'],
                    'inputText': msg,
                    'userId': 'fred',
                }
        resp = self.testClient.post_text(**params)
        logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer)) 
        logger.debug('Exiting')
Deleting/cleaning up the Lex objects on AWS is a simple matter of making the corresponding delete function calls from the SDK.  Similar to the build process, deleting an object takes time on AWS, even though the function may return immediately.  You can mitigate the delays associated with deletion and the corresponding dependency issues by putting artificial delays in the code, such as below.
def __destroyBot(self):
        logger.debug('Entering')
        try:
            resp = self.buildClient.delete_bot(name=self.bot['name'])
            logger.debug(json.dumps(resp, indent=4, sort_keys=True, default=self.__dateSerializer))
        except Exception as err:
            logger.debug(err)
        time.sleep(5) #artificial delay to allow the operation to be completed on AWS
        logger.debug('Exiting')

Lambda Code Hook

If you require any logic for data validation or fulfillment (and you will for any real bot implementation), there is no choice but to use AWS Lambda for that function.  That Lambda function needs a single entry point where the Lex event (Dialog validation and/or Fulfillment) is passed.  Below is the entry point for the function I developed.  All the validation logic is contained in a single Python class - LexHandler.
def lambda_handler(event, context):
    handler = LexHandler(event)
    os.environ['TZ'] = 'America/Denver'
    time.tzset()
    return handler.respond()
I'm not going to post all the code for the handler class as it's pretty straightforward (full source will be on github as well), but here's one snippet of the address validation.  It actually makes a call to an external web service (SmartyStreets) to perform the validation function.
    def __isValidDeliveryStreet(self, deliveryStreet, deliveryZip):  
        if deliveryStreet and deliveryZip:
            credentials = StaticCredentials(AUTH_ID, AUTH_TOKEN)
            client = ClientBuilder(credentials).build_us_street_api_client()
            lookup = Lookup()
            lookup.street = deliveryStreet
            lookup.zipcode = deliveryZip
            try:
                client.send_lookup(lookup)
            except exceptions.SmartyException:
                return False
            
            if lookup.result:
                return True
            else:
                return False
        else:
            return False

Full source here:  https://github.com/joeywhelan/AWSBot

Copyright ©1993-2024 Joey E Whelan, All rights reserved.