Wednesday, November 25, 2015

LendingClub REST API access with Python


Summary

LendingClub is one of the peer-to-peer lenders out there.  They provide a REST API for simple account transactions such as querying account data, listing available loans, and submitting loan orders.  In this article, I'll be discussing the development of a simple auto-investment tool I wrote in Python with the LendingClub API.  The application reads a user-configurable file for options and then, if funds are available and loans exist that meet the user's criteria, places orders with LendingClub for those loans.  The application was designed to be run from a cron job to periodically check funds and loans and place orders accordingly.

I have an additional set of articles discussing integration of machine learning techniques with this API here.

Preparation

Obviously, Step 1 is to establish an account at LendingClub.  After that, you can make a request for access to the REST API.  There are two critical pieces of info you'll need to execute any API calls:  the Account ID and an Authorization Key.  LendingClub uses the auth key method for securing access to their API.  As will be discussed later, the auth key is passed as an HTTP header item on every API call.
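Just to make that concrete, a call with the Python requests module looks roughly like the snippet below.  The account-summary URL matches the one used later in this article; the investor ID and key values are placeholders.

import requests

INVESTOR_ID = '12345678'    # placeholder - your LendingClub account ID
AUTH_KEY = 'yourAuthKey'    # placeholder - the key issued by LendingClub

url = 'https://api.lendingclub.com/api/investor/v1/accounts/' + INVESTOR_ID + '/summary'
header = {'Authorization': AUTH_KEY, 'Content-Type': 'application/json'}

resp = requests.get(url, headers=header)
resp.raise_for_status()
print(resp.json()['availableCash'])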

Application Organization

Figure 1 below depicts the overall organization of this application.  All user-configurable options are held in a configuration file.  The ConfigParser module is utilized for reading that file.  Configuration state is managed in a class I developed called ConfigData.  All the REST calls to the LendingClub API are bundled into a class I developed called LendingClub.  The requests module is leveraged for the HTTP operations.

Figure 1

Code Snippets

Configuration File

[AccountData]
investorId = yourId
authKey = yourAuthKey
reserveCash = 0.0
investAmount = 25.00
portfolioName = A Loans
[LoanCriteria]
grade = A
term = 36
delinq2Yrs = 0

This represents a simplistic user-configuration file.
Line 1:  AccountData section of the configuration file.
Line 2:  Your LendingClub account ID.  You can find this on the main account summary page on LendingClub's site.
Line 3:  The authorization key issued by LendingClub when you request access to their API.
Line 4:  The amount of cash you want to remain in 'reserve'.  That means it will not be invested.
Line 5:  The amount you want invested in each loan.
Line 6:  The textual name of the portfolio where you want any loan purchases to be placed.
Line 7:  LoanCriteria section of the configuration file.
Lines 8-10:  Any criteria you wish to employ to filter loans for investment.  The filtering logic in the main app (discussed later) is very simple - it looks at equality only, e.g.  does Grade = 'A'.  You can find a full listing of the various loan data points in the Lending Club API documentation for the LoanList resource.

Application Body


class ConfigData(object):
    def __init__(self, filename):
        cfgParser = ConfigParser.ConfigParser()
        cfgParser.optionxform = str
        cfgParser.read(filename)
        self.investorId = self.castNum(cfgParser.get('AccountData', 'investorId'))
        self.authKey = cfgParser.get('AccountData', 'authKey')
        self.reserveCash = self.castNum(cfgParser.get('AccountData', 'reserveCash'))
        self.investAmount = self.castNum(cfgParser.get('AccountData', 'investAmount'))
        if self.investAmount < 25 or self.investAmount % 25 != 0:  
            raise RuntimeError('Invalid investment amount specified in configuration file')
        self.portfolioName = cfgParser.get('AccountData', 'portfolioName')
        criteriaOpts = cfgParser.options('LoanCriteria')  #Loan filtering criteria
        self.criteria = {}
        for opt in criteriaOpts:
            self.criteria[opt] = self.castNum(cfgParser.get('LoanCriteria', opt));

    def castNum(self, val):
        try:
            i = int(val)
            return i
        except ValueError:
            try:
                d = decimal.Decimal(val)
                return d
            except decimal.InvalidOperation:
                return val
Line 2:  Constructor for this class.
Lines 3-5:  Instantiate a ConfigParser object.  Read the config file and make option names case-sensitive.
Lines 6-12:  Set instance variables to the various account-data options in the config file.
Lines 14-16: Create an instance dictionary variable to store the user-specified loan criteria.
Lines 18-27: Helper function for casting options to the correct numeric type.
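A quick, hypothetical usage example (assuming the ConfigData class above is in scope and the configuration file shown earlier is saved as autoinvestor.cfg - the file name is my own placeholder):

cfg = ConfigData('autoinvestor.cfg')
print(cfg.investAmount)    # Decimal('25.00') - castNum fell through int() to Decimal
print(cfg.criteria)        # e.g. {'grade': 'A', 'term': 36, 'delinq2Yrs': 0}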

class LendingClub(object):
    apiVersion = 'v1'
    
    def __init__(self, config):
        self.config = config
        self.header = {'Authorization' : self.config.authKey, 'Content-Type': 'application/json'}
        self.loans = None
        self.cash = None
        self.portfolioId = None
        
        self.acctSummaryURL = 'https://api.lendingclub.com/api/investor/' + LendingClub.apiVersion + \
        '/accounts/' + str(self.config.investorId) + '/summary'
        self.loanListURL = 'https://api.lendingclub.com/api/investor/' + LendingClub.apiVersion + \
        '/loans/listing'
        self.portfoliosURL = 'https://api.lendingclub.com/api/investor/' + LendingClub.apiVersion + \
        '/accounts/' + str(self.config.investorId) + '/portfolios'
        self.ordersURL = 'https://api.lendingclub.com/api/investor/' + LendingClub.apiVersion + \
        '/accounts/' + str(self.config.investorId) + '/orders'
        
    def __getCash(self):
        resp = requests.get(self.acctSummaryURL, headers=self.header)
        resp.raise_for_status()
        return decimal.Decimal(str(resp.json()['availableCash']))
        
    
    def __getLoans(self):
        payload = {'showAll' : 'true'}
        resp = requests.get(self.loanListURL, headers=self.header, params=payload)
        resp.raise_for_status()
     
        loanDict = {}
        for loan in resp.json()['loans']:
            numChecked = 0
            for criterion in self.config.criteria:
                if loan[criterion] == self.config.criteria[criterion]:
                    numChecked += 1              
                else:
                    break
            if numChecked == len(self.config.criteria):
                loanDict[loan['id']] = loan['fundedAmount'] / loan['loanAmount']
                logger.info('Loan id:' + str(loan['id']) + \
                             ' was a match, funded percentage = ' + str(loanDict[loan['id']]))
        return sorted(loanDict.items(), key=operator.itemgetter(1), reverse=True)            

    def __postOrder(self, aid, loanId, requestedAmount, portfolioId):
        payload = json.dumps({'aid': aid, \
                   'orders':[{'loanId' : loanId, \
                                'requestedAmount' : float(requestedAmount), \
                                'portfolioId' : portfolioId}]})
        resp = requests.post(self.ordersURL, headers=self.header, data=payload)
        retVal = resp.json();
        
        if 'errors' in retVal:
            for error in retVal['errors']:
                logger.error('Order error: ' + error['message'])
        resp.raise_for_status()
        
        confirmation = retVal['orderConfirmations'][0]
        logger.info('OrderId:' + str(retVal['orderInstructId']) + ', $' + \
                    str(confirmation['investedAmount']) + ' was invested in loanId:' + str(confirmation['loanId']))
        return decimal.Decimal(str(confirmation['investedAmount']))

Line 4:  Constructor for this class.  Accepts a ConfigData object as input.
Lines 5-18:  Set the state for this object based on the configuration data passed as input.
Lines 20-23:  Private method for obtaining the cash available in the LendingClub account.  Utilizes the 'requests' module for the HTTP operation.
Lines 26-43:  Private method for fetching available loans from LendingClub.  After the loans are fetched, they are checked against the user's criteria and then sorted by their current funding percentage.
Lines 45-61:  Private method for submitting a loan order to LendingClub.  The LendingClub API actually allows you to bundle multiple orders into one REST call; however, I'm only doing one order at a time in this app.  That makes the post-order error-checking logic simpler.
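One note on the listing above: the main code block below calls three public methods - hasCash(), hasLoans(), and buy() - that aren't shown in this snippet.  The sketch below is my shorthand for what they would look like inside the LendingClub class; it's an assumption based on how they're used, not the actual implementation.

class LendingClub(object):
    # ... constructor and private methods from the listing above ...

    def hasCash(self):
        # Investable cash is whatever is available above the configured reserve
        self.cash = self.__getCash() - self.config.reserveCash
        return self.cash >= self.config.investAmount

    def hasLoans(self):
        # Refresh the filtered/sorted loan list and report whether any matches remain
        self.loans = self.__getLoans()
        return len(self.loans) > 0

    def buy(self):
        # Order the most-funded matching loan and deduct the invested amount from cash
        loanId, fundedPct = self.loans.pop(0)
        invested = self.__postOrder(self.config.investorId, loanId,
                                    self.config.investAmount, self.portfolioId)
        self.cash -= invested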

Main Code Block

try:
    lc = LendingClub(ConfigData(CONFIG_FILENAME))
    while lc.hasCash() and lc.hasLoans():
        lc.buy()
except:
    logger.exception('')

Line 2:  Instantiate a LendingClub object with the configuration data object as the input parameter.
Lines 3-4:  Loop based on availability of cash and matching loans.  If both exist, place an order.

Full source code here. Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Monday, August 17, 2015

Dual-booting (Win/Linux) with a USB Drive

Summary

This article covers dual-booting a primarily Windows box into Linux.  Nothing particularly cosmic about this, but there are some gaps in the Google knowledge base out there on this.  I boil it down to the essentials with a working model.

Implementation

So, the scenario is you have a box with a Windows hard drive but you'd like to be able to boot to a Linux distro on occasion without disrupting the universe (and that Win hard drive).

For the second hard drive - I went with this jewel: SanDisk USB 3.0 128GB.  It's quite healthy on both speed and capacity.  

For the Linux distro - I like Ubuntu for desktop use.  Doing an install of it to that USB drive is trivial; just build yourself another drive (USB, whatever) with the install image and set your USB drive as the target for the install.

Now to the tricky part - time.  The Ubuntu install will reset your BIOS clock to UTC time.  That's normally considered a good thing.  You typically want your hardware clock on UTC, not local time.  Unfortunately, Microsoft operates in a different universe when it comes to this topic of time.  Windows expects the BIOS/hardware clock to be in local time.  So, if you do nothing - next time you boot to Windows, you'll see UTC time reflected in Windows - not your local time.  That's an annoyance and a real issue if you have apps that are dependent on an accurate time setting.

The options to correct this are either set the hardware clock to local time and configure Linux to deal with it or leave the clock on UTC and configure Windows to handle that.

The second option (configure Windows for UTC) is a beating that involves registry manipulations.  Configuring Linux to handle local time on the hardware clock is way easier.

Step 1:  Reset the hardware/BIOS clock back to local time.  If you have access to the BIOS, that's simple.  If your BIOS is locked down, it's still simple.  The Linux shell command below will do it:
$ sudo hwclock -w --localtime

The hwclock command provides direct access to the BIOS clock.  The command above sets the BIOS clock to the current Linux local system time.  Assuming you're using NTP, your BIOS clock will be set to a very accurate time.

Step 2:  Configure the Linux O/S to expect localtime from the BIOS clock (instead of UTC).  To do this, simply make the edit below to the /etc/default/rcS file.
# assume that the BIOS clock is set to UTC time (recommended)
UTC=no

That variable is set to 'yes' by default.  Just change it to 'no'.




Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Tuesday, June 16, 2015

Call Recordings to Email


Summary

In this article I'll be describing how to create a simple call recording service that will record a message from an IVR application and then attach the resulting audio content to an email.  That email could then be routed to an agent in a contact center scenario, for instance.

Environment

Figure 1 depicts the overall architecture.  I created a simple VXML application that provides the voice interface.  That same VXML app sends the collected audio content to a Node-based web service.  The web service repackages the audio content into an email attachment.

Figure 1

Implementation

Figure 2 below depicts the voice and data flow for this architecture.

Figure 2

The web service is built as a simple Node application.  Below is the high-level organization of that app.

Figure 3

Finally, Figure 4 depicts the input/output behavior of this web service.

Figure 4

Code Snippets

Voice Application



<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1">

  <catch event="error.badfetch.http.400">
    <log label="Error" expr="'HTTP 400 - Bad Request'"/>
    <prompt>
      This request was rejected by the server due to being malformed.  Good bye.
      <break time="1000"/>
    </prompt>
  </catch>
  
  <catch event="error.badfetch.http.413">
    <log label="Error" expr="'HTTP 413 - Request Entity Too Large'"/>
    <prompt>
        This request had an upload that is larger than the server will accept.  Good bye.
        <break time="1000"/>
    </prompt>
  </catch>
  
  <catch event="error.badfetch.http.500">
    <log label="Error" expr="'HTTP 500 - Internal Server Error'"/>
    <prompt>
      The server has experienced an internal error.  Good bye.
      <break time="1000"/>
    </prompt>
  </catch>
  
  <form>
    <var name="ANI" expr="session.callerid" />
    <var name="DNIS" expr="session.calledid" />
    <block>
      <prompt>
        This is a message recording demo.
        <break time="200"/>
      </prompt>
    </block>
    
    <record  name="MSG" beep="true" maxtime="20s" dtmfterm="true" type="audio/mp3">
        <prompt timeout="5s">
          Record a message after the beep.
        </prompt>
        <noinput>
          I didn't hear anything, please try again.
         </noinput>
        <filled>
          <submit next="http://yourwebserver/upload" enctype="multipart/form-data"
            method="post" namelist="ANI DNIS MSG"/>
        </filled>
    </record>
  </form>
</vxml>
Lines 4-26:  Catch logic for HTTP errors.
Lines 29-30: Saving ANI and DNIS in variables for use later in the HTTP form post.
Line 38:  VXML tag used to record a caller.  Output will be in MP3 format.
Line 46:  Finally, when the recording has ended - send it and the ANI + DNIS via an HTTP multipart form.
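If you want to exercise the upload service without a VXML browser in the loop, a throwaway Python client can post the same multipart form.  This is purely for testing/illustration - the URL is the placeholder from the VXML above, and sample.mp3 is just any MP3 file standing in for the recording.

import requests

url = 'http://yourwebserver/upload'    # placeholder endpoint from the VXML submit above
fields = {'ANI': '1234567890', 'DNIS': '9876543210'}    # sample caller/called numbers

with open('sample.mp3', 'rb') as audio:
    files = {'MSG': ('sample.mp3', audio, 'audio/mpeg')}
    resp = requests.post(url, data=fields, files=files)

print(resp.status_code)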

Web Service (main body of code)

appHttp.post('/upload', function(req, res) {
          try {
            logger.debug('Entering - File: main.js, Method: appHttp.post()');
                        
            var form = new multiparty.Form();
            var ani = null;
            var dnis = null;
            var fname = null;
            var msg = null;
            var size = 0;
                        
            form.on('error', function(err, statCode) {
              logger.error('File: main.js, Method: appHttp.post(), form(), Error: ' + err.message);
              res.status(statCode || 400).end();
            });
                        
            form.on('part', function(part) {
              var data=[];
                          
              part.on('error', function(err, statCode) {
                form.emit('error', err, statCode);
              });
                          
              part.on('data', function(chunk) {
                size += chunk.length;
                if (size > properties.maxUploadSize) {
                  //covers a degenerate case of too large of an upload.  Possible DOS attempt
                  part.emit('error', new Error('Upload exceeds maximum allowed size'), 413);
                }
                else {  
                  data.push(chunk);
                }
              });
                         
              part.on('end', function() {
                switch (part.name) {
                  case 'ANI':
                    ani = data.toString();
                    break;
                  case 'DNIS':
                    dnis = data.toString();
                    break;
                  case 'MSG':
                    if (part.filename) {
                      fname = part.filename;
                      msg = Buffer.concat(data);
                    }
                    else {
                      part.emit('error', new Error('Malformed file part in form'), 400);
                    }
                    break;
                  default:
                    part.emit('error', new Error('Unrecognized part in form'), 400);
                    break;
                }      
              });
            });
                        
            form.on('close', function() {
              if (ani && dnis && fname && msg) {
                res.status(200).sendFile(__dirname + '/vxml/response.vxml');
                var mailOptions = {
                  from : properties.emailFromUser,
                  to : properties.emailToUser,
                  subject : 'Recorded Message - ANI:' + ani + ', DNIS:' + dnis,
                  text : 'The attached recorded audio message was received.',
                  attachments : [{filename : fname, content : msg}]
                };
                
                transporter.sendMail(mailOptions, function(err, info) {
                  if (err) {
                    appHttp.emit('error', err);
                  }
                  logger.debug('Exiting - File: main.js, Method: appHttp.post()');
                });    
              }
              else {
                form.emit('error', new Error('Form missing required fields'), 400);
              }             
            });
                        
            form.parse(req); 
          }
Line 1:  This is the Express route for an 'upload' POST.
Line 5:  The multiparty node module is used for processing the POST'ed form data.
Lines 24-33:  Compile the form 'chunks' that are uploaded.  If an upload is being attempted that is larger than a user-configured maximum limit, terminate the upload.  Based on my testing, simply emitting an error is enough to cause Node/Express to terminate an upload.  I saw no need for something like 'req.connection.destroy()'.
Lines 35-57:  When a form 'part' has been completely uploaded, determine which 'part' it was and save it into local variables.
Lines 59-80:  When the entire form has been completely uploaded, determine if all the expected 'parts' were included.  If so, send back a 200 OK with a simple VXML response.  Then, send the 'parts' out as an email.  The ANI and DNIS are put in the subject line of the email.  The audio content is sent as an attached file.

Output

Snippet of the resulting email output below:
From: yourFromAddress@gmail.com
To: yourToAddress@gmail.com
Subject: Recorded Message - ANI:1234567890, DNIS:9876543210
X-Mailer: nodemailer (1.3.4; +http://www.nodemailer.com;
 SMTP/1.0.3[client:1.2.0])
Date: Tue, 16 Jun 2015 01:02:06 +0000
Message-Id: <1434416526890-858ca434-7a77d97c-8d53507c data-blogger-escaped-gmail.com="">
MIME-Version: 1.0

------sinikael-?=_1-14344165263510.8602459693793207
Content-Type: text/plain
Content-Transfer-Encoding: 7bit

The attached recorded audio message was received.
------sinikael-?=_1-14344165263510.8602459693793207
Content-Type: audio/mpeg
Content-Disposition: attachment; filename=MSG-1434416526064.mp3
Content-Transfer-Encoding: base64


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Monday, June 1, 2015

Broadband Connectivity Monitor


Summary

In this article I'll be demonstrating a way for keeping tabs on your Internet connectivity.  Anyone that  has had challenges with their ISP and uptime knows what I'm talking about.
I looked at several external services (Pingdom, UptimeRobot, etc) and other folks' code, but didn't see anything that I particularly liked.  They wanted money, monitoring intervals were too long, etc.  Instead, I just wrote a fairly simple Linux shell script myself to do the job.

Design goals:
  • Simple to deploy/use
  • One-minute monitoring granularity
  • Logging with sufficient detail that I can go back to my ISP and get refunds for service disruptions
  • Email alerts

Environmentals

The two main requirements for this script are the Bash shell and a Mail Transfer Agent (MTA).

The MTA requirement is to support transmission of email alerts.  I used the Heirloom Mailx agent in testing on both Debian (Ubuntu) and Red Hat (CentOS) environments.  My ISP blocks direct SMTP traffic (spam prevention no doubt), so I needed an MTA that would support use of external SMTP services (i.e., relay) for outbound emails.  Mailx provides that.

I decided to use Google's email service (GMail) for the relay.  Below is the configuration I have working for Ubuntu (nail.rc file):
set smtp-use-starttls
set smtp=smtp://smtp.gmail.com:587
set ssl-verify=ignore
set smtp-auth=login
set smtp-auth-user="yourEmailAddress@gmail.com"
set smtp-auth-password="yourPassword"
set from="yourEmailAddress@gmail.com"

For CentOS, I had to add the following line in addition to the ones above (mail.rc file in this case):
 set nss-config-dir="/etc/pki/nssdb"

Implementation

As mentioned previously, I wrote this monitor completely in Linux shell script.  The overall program logic is as follows (loops forever at a user-configurable time interval):
  • Send an ICMP echo request (Ping) to a user-configurable target.
  • If the target replies, do nothing.  
  • If the target does not reply, I've experienced a broadband/Internet connectivity outage.  Log the details locally.
  • If we've recorded an outage and now have a successful ping, calculate the service disruption time, log it, and send an email alert that the outage occurred.  Since I'm doing the monitoring locally, there's no need to attempt an email alert till connectivity is restored, for obvious reasons.

Main body of the shell script below:
while :
do
 results=`ping -qc $COUNT $TARGET`
 case "$?" in
  0) if [ "$failedTime" -ne 0 ]
   then
    restoredTime=`date +%s`
    duration=$(( $restoredTime - $failedTime ))
    s=$(( duration%60 ))
    h=$(( duration/3600 ))
    (( duration/=60 ))
    m=$(( duration%60 ))
    
    logRec="Service Restored, Approx Outage Duration:"
    logRec+=`printf "%02d %s %02d %s %02d %s" "$h" "hrs" "$m" "min" "$s" "sec"`
    logger -t $(basename $0) "$logRec"
    t1=`date -d @$failedTime -I'seconds'`
    t2=`date -d @$restoredTime -I'seconds'`
    printf "%s %s\n%s %s" "$t1" "$msg" "$t2" "$logRec" | mail -s "Service Outage Occurred" $EMAIL
    failedTime=0
    internalError=0
   fi
   ;;
  1) if [ "$failedTime" -eq 0 ]
   then
    failedTime=`date +%s`
    logRec=`echo "Service Outage:" "$results" | tr '\n' ' '`
    msg=$logRec
    logger -t $(basename $0) "$logRec"
    internalError=0
   fi
   ;;
  *)
   if [ "$internalError" -eq 0 ]
   then
    logger -t $(basename $0) "Internal Error"
    (( internalError+=1 ))
   fi
   ;;
 esac
 
 sleep $INTERVAL
done

Line 1:  Loop, like forever.
Line 3:  Executes the ping command with a user-configurable ping count and target.  Those settings are stored in an external config file.
Line 4:  Set up a switch on the return value of the ping command.  Per the man page, ping will return 0 if it gets a reply, 1 if it gets no reply at all, and 2 on any other sort of error.
Line 5:  This would be the case that ping received a reply.  I only need to take action if there has been an outage recorded earlier.  That outage flag is the time of occurrence, stored in the failedTime variable.
Line 7: An outage and resulting service restoration is in progress.  Store the time of the restoration (in seconds since 1970).
Lines 8-12:  Calculate the total duration of the outage using the difference between the start and stop times.  Take that duration, which is in seconds, and do some arithmetic to convert it to hours, minutes, and seconds (a Python rendering of this conversion follows these notes).
Lines 14-15:  Do some prettifying of a log message of the service restoration notice.
Line 16:  Send the notice to the syslog process on the local server.
Lines 17-18:  Put some timestamps on the message that will be sent as an email alert (syslog does this automatically, so I didn't need to timestamp the log messages).
Line 19:  Send alert message out via email.
Lines 20-21:  Reset some variable flags.
Line 24:  The case here is ping has returned a "1", meaning it did not receive a reply.  If the failedTime flag is not set, this indicates a fabulous new outage event.
Line 26:  Save the current time (in seconds since 1970) in the failedTime variable.
Line 27:  Build a string that contains the output of the original ping command.  Remove all newlines in that string (syslog logging is one line at a time).
Line 28:  Save that outage string in a variable for use later for the email alert when connectivity has been restored.
Line 29:  Send the log message to syslog.
Line 34:  This covers the degenerate case (ping return code of "2").  A scenario where this may happen would be the local server interface went down.
Line 36:  Simply log a message of the issue, but only do it one time.
Line 42:  Pause till the next ping for a user-configurable amount of time.
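For clarity, here's the same hours/minutes/seconds conversion expressed in Python (illustrative only - the script itself does this with shell arithmetic):

def hms(duration):
    # duration is in seconds; mirrors the shell arithmetic in lines 8-12
    s = duration % 60
    h = duration // 3600
    m = (duration // 60) % 60
    return h, m, s

print(hms(62))    # (0, 1, 2) - i.e. the '00 hrs 01 min 02 sec' in the sample output below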

This script can be fired off and run forever simply like this:
nohup ./ispmon.sh > /dev/null 2>&1 &
For those more motivated, you can set this up as a regular Linux daemon in init.d.

Output


Sample syslog output below:
Jun  2 04:58:31 intel3770k ispmon.sh: Service Outage: PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.  --- 192.168.100.1 ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2014ms 
Jun  2 04:59:33 intel3770k ispmon.sh: Service Restored, Approx Outage Duration:00 hrs 01 min 02 sec

Email alert text from the sample above:
2015-06-02T04:58:31-0600 Service Outage: PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.  --- 192.168.100.1 ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2014ms 
2015-06-02T04:59:33-0600 Service Restored, Approx Outage Duration:00 hrs 01 min 02 sec
Full source here. Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Friday, May 22, 2015

Digit Transforms with Genesys SIP Server

Summary

In this article I'll demonstrate a simple digit translation using the dial plan feature in Genesys SIP Server (SIPS).

Implementation

For this scenario, I have a Sonus Session Border Controller (SBC) in front of Genesys SIPS.  The Sonus SBC by default prepends a '+1' (E.164 format) on all calls going in/out of its trunk groups.  In this example, I'm going to use the SIPS dial plan feature to remove those two characters from the dialed number.

In Figure 1 below I've created a DN of type Voice over IP Service under the SIPS switch object.

Figure 1
In Figure 2, I created a TServer Section for this object and then added two options to that section:  dial-plan-rule-1 and service-type.

Figure 2
The dial-plan-rule-<n> option establishes a digit manipulation rule.  The service-type option signifies this is a dial-plan configuration (vs a Class of Service configuration).

SIPS translation rules roughly follow the Asterisk standard:

+1.=>${DIGITS:2}
  • On the left-hand side of the =>, I state our matching criteria.  In this case, a '+1' and then one or more following characters will yield a match.
  • On the right side, I state our translation.  In this case, retain all the digits from position 2 in the string - dropping the first two characters.  Similar to C arrays, position numbering starts from 0.  (A small Python illustration of this transform follows.)
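Just to make the effect of that rule concrete, here's the same transform expressed in Python (illustration only - this is not how SIPS implements it):

import re

def transform(digits):
    # Emulates the dial-plan rule  +1.=>${DIGITS:2}
    if re.match(r'\+1.', digits):    # '+1' followed by at least one more character
        return digits[2:]            # keep everything from position 2 on
    return digits

print(transform('+19991234567'))     # 9991234567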
After creating the dial-plan object, it needs to be assigned at the Agent Login, DN, or Application level.  Figure 3 depicts the trunk object for the Sonus SBC.

Figure 3
Figure 4 shows the TServer section I created in the Annex for this trunk object.  I have the IP address of the SBC's media interface and an option specifying the dial-plan (sonusDialPlan) created back in Figure 2.

Figure 4

Below are some snippets of the Genesys SIPS log to see this dial-plan feature in action.  The SIPS extension/called party is 999-123-4567.  The caller is 5555.

Incoming Invite from the Sonus SBC:

11:02:35.281: SIPTR: Received [0,UDP] 1003 bytes from 192.168.1.21:5060
INVITE sip:+19991234567@192.168.1.69:5060;user=phone SIP/2.0
Via: SIP/2.0/UDP 192.168.1.21:5060;branch=z9hG4bK00B00006d2998922276
From: <sip:+15555@192.168.1.21;user=phone>;tag=gK00000184
To: <sip:+19991234567@192.168.1.69;user=phone>
Call-ID: 1_41283677@192.168.1.21
CSeq: 55481280 INVITE
Max-Forwards: 70
Allow: INVITE,ACK,CANCEL,BYE,REGISTER,REFER,INFO,SUBSCRIBE,NOTIFY,PRACK,UPDATE,OPTIONS,MESSAGE,PUBLISH
Accept: application/sdp, application/isup, application/dtmf, application/dtmf-relay,  multipart/mixed
Contact: <sip:+15555@192.168.1.21:5060>
P-Preferred-Identity: <sip:+15555@192.168.1.21:5060;user=phone>
Supported: timer,100rel,precondition,replaces
Session-Expires: 1800
Min-SE: 90
Content-Length:   189
Content-Disposition: session; handling=required
Content-Type: application/sdp

v=0
o=Sonus_UAC 1794294632 2031408208 IN IP4 192.168.1.21
s=SIP Media Capabilities
c=IN IP4 192.168.1.21
t=0 0
m=audio 1026 RTP/AVP 0
a=rtpmap:0 PCMU/8000
a=sendrecv
a=maxptime:10



Dial-plan executing:

11:02:35.284: SIPTR(432): Step 0 - SipTransactionCreateCall(433) complete
11:02:35.284: SIPTR(432): Begin step 1 - SipTransactionResolveCallInfoByDialPlan(434)
11:02:35.284: DialPlan:executing for dest +19991234567 - dial-plan-rule-1: +1.=>${DIGITS:2};calltype=inbound
11:02:35.285: DialPlan:Sending to target '9991234567' - type=1
11:02:35.285: DialPlan: clear flag DIAL_PLAN_PROCESSING
11:02:35.286: ProcessDialPlanResult: Connecting to device 9991234567.
11:02:35.286: SIPTS: New call: CallType overridden with 2 by context



Resulting SIP Invite that is ultimately sent to the SIPS registered end-point:

11:02:35.289: Sending  [0,UDP] 1094 bytes to 192.168.1.71:26144 >>>>>
INVITE sip:9991234567@192.168.1.71:26144;rinstance=0aa264f9d95b3d6a SIP/2.0
From: sip:+15555@192.168.1.21;user=phone;tag=008A0624-66D4-155B-B7A7-0100007FAA77-144
To: sip:9991234567@9991234567
Call-ID: 008A05C0-66D4-155B-B7A7-0100007FAA77-91@192.168.1.69
CSeq: 1 INVITE
Content-Length: 180
Content-Type: application/sdp
Via: SIP/2.0/UDP 192.168.1.69:5060;branch=z9hG4bK008A0660-66D4-155B-B7A7-0100007FAA77-136
Contact: <sip:+15555@192.168.1.69:5060>
Allow: ACK, BYE, CANCEL, INFO, INVITE, MESSAGE, NOTIFY, OPTIONS, PRACK, REFER, UPDATE
Accept: application/sdp, application/isup, application/dtmf, application/dtmf-relay,  multipart/mixed
P-Preferred-Identity: <sip:+15555@192.168.1.21:5060;user=phone>
Content-Disposition: session; handling=required
Max-Forwards: 69
X-Genesys-CallUUID: 028BE9J6QGALNDT704000VTAES00001A
Session-Expires: 1800;refresher=uac
Min-SE: 90
Supported: uui,100rel,timer

v=0
o=Sonus_UAC 1432141492 1 IN IP4 192.168.1.21
s=SIP Media Capabilities
c=IN IP4 192.168.1.21
t=0 0
m=audio 1026 RTP/AVP 0
a=sendrecv
a=maxptime:10
a=rtpmap:0 PCMU/8000

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, April 18, 2015

Taming YouTube


Summary

It's pretty easy to completely blow out a monthly bandwidth cap with streaming video traffic.  This article is going to discuss the options for throttling this traffic before you get in trouble with your ISP (or wallet) due to data overages.

Identifying the Problem

A nifty 3rd Party tool for monitoring router traffic is MRTG.  It's a Perl-based web app that provides nice graphical displays of router statistics.  MRTG pulls its stats off the router via SNMP, so you'll need to configure that on your router.  You'll also need to set up a web server for it (Apache works just fine).  Figure 1 below depicts an MRTG-generated table of a hypothetical "problem" (note the Ingress number of 1.41 TB).

Figure 1
Now, if you want to do some real-time analysis, there's a tool right there on the router: Cisco IOS Netflow.  There are a lot of capabilities included in Netflow, but I'm only going to touch on one - Top Talkers.

Top Talkers is just what it sounds like - a real-time depiction of the flows that are generating the most traffic.  Setting it up is easy.

The first step is to activate Netflow on the interface you want to monitor.  That can be in the ingress, egress, or both directions.
router(config)#int gi1/0
router(config-if)#ip flow ingress
router(config-if)#ip flow egress
The next and final step is to activate and configure the Top Talkers feature.
router(config)#ip flow-top-talkers
router(config-flow-top-talkers)#sort-by bytes
router(config-flow-top-talkers)#top 3
In this case, the commands above configure a display of the top three talkers, sorted by byte count.

Now to see some real-time stats:
router#show ip flow top-talkers
SrcIf         SrcIPaddress    DstIf         DstIPaddress    Pr SrcP DstP Bytes
Di1           173.194.141.140 Gi1/0*        192.168.1.85    06 01BB B49D    35M
Gi1/0         192.168.1.85    Di1           173.194.141.140 06 B49D 01BB   774K
Di1           173.194.141.140 Gi1/0*        192.168.1.85    06 01BB B4A0   334K
Clearly, there's some fairly heavy ingress traffic (35M thus far) going on between a 173.194.141.140 address and a local/private 192 address.  A quick whois on that 173 address reveals it's in Google's IP range.  More specifically, this is YouTube video traffic.

As far as inspecting real-time traffic - the tool of choice is Wireshark.  There are some network monitoring capabilities built into Chrome and Firefox as well, but they're primarily focused on the HTTP layer.

Solving the Problem

There are various factors that influence streaming bit-rates, but Netflix gives some general guidance of 3 Mbps for SD video and 5 Mbps for HD.  Some more detailed info from Google regarding YouTube here.

A rough calculation (using Netflix's guidelines) of the amount of bandwidth burned streaming 1 hour of HD video is:

60 min/hr * 60 sec/min * 5 Mb/sec = 18,000 Mb/hr = 2,250 MB/hr = 2.25 GB/hr 

As expected, the meter runs pretty fast with streaming video traffic - particularly HD.
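Here's that same back-of-the-envelope math as a throwaway Python helper (the rates are just the Netflix guideline figures quoted above):

def gb_per_hour(mbps):
    # megabits/sec -> megabits/hr -> megabytes/hr -> gigabytes/hr
    return mbps * 3600 / 8.0 / 1000

print(gb_per_hour(5))    # HD: 2.25 GB/hr
print(gb_per_hour(3))    # SD: 1.35 GB/hr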

Site Settings

There are some easy measures you can take to limit bandwidth on the streaming sites themselves.  You can simply turn down the bandwidth usage for the particular site in the end-user's account settings.  Unfortunately, depending on users to voluntarily degrade their video quality probably isn't realistic.  The rest of this article will mostly focus on the options for imposing limits on the network itself.

Traffic Policing & Shaping

Cisco has a wealth of info published on the policing/shaping topic so I'm not going to spend too much time on the details.  Grossly simplifying the process: "policing" traffic results in dropping it when a defined bandwidth limit is reached.  "Shaping" traffic uses router resources (memory) to queue traffic to avoid dropping it.  However, shaping will degenerate into dropping packets if the traffic reaches a level beyond the limits of the queuing resources.

Figure 2 below is a simple diagram depicting where to implement policing and shaping.  The main concept of note: traffic policing should happen on the inbound interface; shaping has to happen on the outbound interface.

Figure 2

IOS Commands for Configuring Policing and Shaping

Implementation of shaping and policing follow the same basic steps:
  1. Define 'class'es of traffic that you want to manage.
  2. Create a policy that allocates bandwidth and/or makes modifications to the classes.
  3. Apply the policy to an interface.
Below is a first attempt at defining a traffic class for streaming video:
class-map match-any http-video-class
 match protocol video-over-http
Line 1 defines the class and matching criteria.
Line 2 invokes Cisco's NBAR feature to utilize a pre-built signature for matching HTTP-based video.

Below is a sample policing policy:
policy-map http-video-police
 class http-video-class
  police 1000000 187500 conform-action transmit exceed-action drop
Line 1 names the policy.
Line 2 invokes the video class we defined above.
Line 3 applies a police policy to that video class.  The class is given a bandwidth of 1 Mbps with a normal burst size of 187,500 bytes.  Traffic that conforms to that bandwidth limitation is transmitted; otherwise, the traffic is dropped.

Similarly, below is a sample shaping policy:
policy-map http-video-shape
 class http-video-class
  shape average 2000000
  queue-limit 128 packets
Line 3 applies a shaping policy to the video class. Max bandwidth is 2 Mbps.
Line 4 allocates a queue size of 128. The default is 64.

The last step is to apply the policy to an interface.
interface GigabitEthernet1/0
 ip address 10.20.30.1 255.255.255.0
 service-policy output http-video-shape
The service-policy command applies the shaping policy to this interface. As discussed previously, shaping/queuing happens in the outbound direction.

The command below will allow you to view real-time statistics on the policy in action:
router#show policy-map int "yourInt"

Video Classification Case Studies

In the example above, I gave the impression that the pre-defined 'video-over-http' NBAR signature was sufficient to classify all of the streaming video out there.  Unfortunately, that's not the case at all.  Different video providers implement streaming differently.  Part of that is due to the fact we're in a technology transition period - HTML5 is replacing Flash as the streaming video standard.  However, there are other factors at work - in particular with YouTube - that make this classification task (and thereby the whole concept of rate limiting video) non-trivial.  Below I analyze and document the streaming behavior of a few video providers:  Amazon Prime, Netflix, and YouTube.

Of note, I had to load the latest NBAR2 Protocol Pack (ver 13) to get the correct NBAR signatures for identifying Netflix and YouTube traffic.  That also required an IOS upgrade to the latest version which has a bug of some sorts in its PPP implementation.  Nothing can ever be easy.

Amazon Prime

Amazon currently uses Microsoft Silverlight by default but will fall back to Flash with the Hardware Abstraction Layer (HAL) module if Silverlight isn't available.  Both options were evidently motivated by the need to be in DRM compliance.

For folks that use Linux and want to watch Prime in their browser, you're kind of in a bind given how Amazon has implemented streaming.  Obviously, Silverlight won't work out of the box for you (but a substitute plugin has been written - Pipelight).  And, on the Flash front - Flash/HAL flat out won't work in Chrome and requires a hack to even work in Firefox.  Instructions here on that hack.

Assuming a Flash implementation against Amazon Prime - classifying Prime traffic is simple.  Flash is using RTMP as the underlying streaming protocol.  Prime is using RTMPE, the encrypted version of RTMP.  So, a NBAR rule for RTMPE will catch Prime streaming traffic.  Incidentally, the built-in NBAR signature for Prime (amazon-instant-video) doesn't identify this Flash traffic.

Our http-video-class looks like this for classifying Amazon Prime traffic (Flash-based).
class-map match-any http-video-class
 match protocol rtmpe

Below is a real-time snapshot of the shaping policy at work on Amazon traffic:
router#show policy-map int gi1/0
 GigabitEthernet1/0 

  Service-policy output: http-video-shape

    Class-map: http-video-class (match-any)  
      9626 packets, 14460061 bytes
      5 minute offered rate 323000 bps, drop rate 0000 bps
      Match: protocol rtmpe
        9626 packets, 14460061 bytes
        5 minute rate 323000 bps
      Queueing
      queue limit 128 packets
      (queue depth/total drops/no-buffer drops) 41/0/0
      (pkts output/bytes output) 9626/14460061
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000
      

    Class-map: class-default (match-any)  
      18205 packets, 24550571 bytes
      5 minute offered rate 474000 bps, drop rate 0000 bps
      Match: any 
      
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 18204/24550505

As an aside - the Amazon Flash traffic has a source port of 1935 (TCP).  So, if you don't have a router that supports signatures - it's still easy to classify Amazon's traffic with a simple ACL such as this:
access-list 111 permit tcp any eq 1935 any

Netflix

Netflix just recently started showing some love to those of us in the Linux community.  They were a 100% MS Silverlight shop last year.  Today, they've adopted HTML5 video as well.  HTML5 video tag snippet below from a Netflix page:

<video src="blob:http%3A//www.netflix.com/bc9c980c-5bf9-47f0-b1b4-423de9ee289a" style="position: absolute; width: 100%; height: 100%;"></video>

The current releases of Chrome on Linux will work with the Netflix player (Firefox is not supported, so the inverse of Amazon Prime).

The traffic profile for Netflix is fairly straight-forward.  Netflix streaming traffic is TCP packets sourced from port 80.  The current NBAR Protocol Pack (ver 13) correctly classifies Netflix traffic.  So, adding a match for Netflix to our existing class-map yields this:
class-map match-any http-video-class
 match protocol rtmpe
 match protocol netflix

Traffic statistics output below with the Netflix class match added:
router#show policy-map int gi1/0
 GigabitEthernet1/0 

  Service-policy output: http-video-shape

    Class-map: http-video-class (match-any)  
      19090 packets, 28450585 bytes
      5 minute offered rate 222000 bps, drop rate 0000 bps
      Match: protocol rtmpe
        12390 packets, 18618543 bytes
        5 minute rate 0 bps
      Match: protocol netflix
        6700 packets, 9832042 bytes
        5 minute rate 222000 bps
      Queueing
      queue limit 128 packets
      (queue depth/total drops/no-buffer drops) 56/25/0
      (pkts output/bytes output) 19057/28401286
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000
      

    Class-map: class-default (match-any)  
      99486 packets, 108676494 bytes
      5 minute offered rate 143000 bps, drop rate 0000 bps
      Match: any 
      
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 99307/108536983

YouTube

Similar to Amazon, YouTube also used Flash previously but they have since moved on to HTML5 video.  Code snippet below from a YouTube page:

<video class="video-stream html5-main-video" style="width: 640px; height: 360px; left: 0px; top: 0px; transform: none;" src="blob:https%3A//www.youtube.com/57c207a1-5fb8-4383-94ec-4fd298f0944e"></video>

Similar to Netflix, they're using a File API blob for the video source (in memory) with Media Source Extensions.  This allows them to do cool stuff like adaptive streaming, restrict downloads to relevant portions of the video, etc.

What makes YouTube more interesting though, from a traffic classification perspective, is they've moved to a 100% TLS access format.  That means YouTube traffic is encrypted and unreadable above the transport layer (Layer 4).  The following sorts of things just won't work for classifying YouTube traffic:

  • HTTP URL - You can't read any of the headers in the HTTP segment (Layer 7, application).  It's encrypted.  There's no such thing as 'deep packet inspection' of this traffic.
  • HTTP Host - Same thing.
  • TCP Port - They're using TCP port 443, just like all the rest of the TLS traffic on your network.  You put limits on TCP 443, you limit all TLS traffic.
  • IP Address - Like many of the streaming providers, Google is using a content delivery network (CDN) for their video traffic.  It's common to see multiple different IP addresses delivering video in a single session.  It's all about finding the most efficient route for the content.  To boot, that CDN lives in Google's IP address range.  The Google IP range is >200K wide these days and growing, no doubt.  Neat discussion on how to count up Google's IP addresses yourself here.  Net, trying to maintain an IP address-based ACL for Google seems like an uphill battle.
So, TLS makes YouTube traffic a bit more challenging to manage.  I would wager Google knew this all along given today's semi-hostile environment between the content providers and the ISPs.  To that point - earlier this year, a Google engineer uncovered that Gogo, an inflight broadband provider, was using a less than wholesome method to manage YouTube traffic.  In a nutshell, Gogo was issuing fake TLS certs for YouTube.  As a man-in-the-middle, that would (did) enable Gogo to decrypt YouTube traffic.  Gogo evidently ceased this seedy practice shortly after it was uncovered.

In the Cisco world, the current NBAR Protocol Pack (version 13) does in fact have a working signature for identifying YouTube traffic.  So, throttling YouTube is a simple matter of adding it to the class-map:
class-map match-any http-video-class
 match protocol rtmpe
 match protocol netflix
 match protocol youtube

Results of that addition below:
router#show policy-map int gi1/0
 GigabitEthernet1/0 

  Service-policy output: http-video-shape

    Class-map: http-video-class (match-any)  
      54104 packets, 80313633 bytes
      5 minute offered rate 619000 bps, drop rate 7000 bps
      Match: protocol rtmpe
        12390 packets, 18618543 bytes
        5 minute rate 0 bps
      Match: protocol netflix
        18059 packets, 26614941 bytes
        5 minute rate 0 bps
      Match: protocol youtube
        23655 packets, 35080149 bytes
        5 minute rate 619000 bps
      Queueing
      queue limit 128 packets
      (queue depth/total drops/no-buffer drops) 0/511/0
      (pkts output/bytes output) 53575/79541510
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000
      

    Class-map: class-default (match-any)  
      172255 packets, 159978075 bytes
      5 minute offered rate 7000 bps, drop rate 0000 bps
      Match: any 
      
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 172197/159807489

Now the question is - How are they classifying this traffic?  I don't have access to the Cisco source code of their Protocol Pack, so I can only guess.  If someone out there does have a solid explanation of how Cisco or others have implemented this, I would appreciate you describing it in the comments section of this blog.

The options I can think of:
  1. IP address filter on the Google IP address range.  I discussed earlier how that maintenance task is probably unmanageable.  But, given Cisco issues regular updates on these Protocol Packs - maybe it's a workable model for them.
  2. Tracking flows based on an extension of the TLS spec known as Server Name Indication (SNI).  If you watch YouTube traffic in Wireshark, you will in fact see youtube.com and googlevideo.com in the SNI header.  The SNI is sent in the clear during the TLS handshake, before encryption begins.  A similar method can be implemented using the Common Name field in the X.509 certificate that is presented during the TLS handshake.  For either of these to work, the traffic filter would have to mark all subsequent traffic originating from the target SNI/Common Name source and then manage the flow accordingly.  (A short Python sketch of pulling these fields follows this list.)
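As a purely illustrative example of option 2, the Python snippet below opens a TLS connection (sending the SNI via server_hostname) and dumps the Common Name and DNS subjectAltNames from the certificate the server presents.  A router-based classifier would of course do this passively on transit traffic rather than originating a connection; this just shows the fields are visible in the clear.

import socket
import ssl

def peer_cert_names(host, port=443):
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port), timeout=5)
    tls = ctx.wrap_socket(sock, server_hostname=host)   # server_hostname is sent as the SNI
    try:
        cert = tls.getpeercert()
    finally:
        tls.close()
    cn = [value for rdn in cert.get('subject', ()) for (field, value) in rdn if field == 'commonName']
    san = [value for (kind, value) in cert.get('subjectAltName', ()) if kind == 'DNS']
    return cn, san

print(peer_cert_names('www.youtube.com'))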
As an aside - Google is experimenting with a protocol they developed known as QUIC.  In short, its purpose is to speed up web connections.  You can turn this on full-time in Chrome easily here: chrome://flags/#enable-quic.

After QUIC is enabled, all YouTube content is delivered to UDP port 443.  That's an easy target for a class map.  Additionally, that traffic won't get mixed in with all of the rest of your TLS traffic.  The simple ACL below will classify it:
access-list 102 permit udp any eq 443 any

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Cisco IOS PPP Bug Workaround

Summary

There appears to be a bug in the current releases of both the 15M and 15T code trains.  I've tested with 15.4.3M2, 15.5.1T1, and 15.5.2T with the same results.  From what I can tell, the bug is specifically in the PAP implementation in these releases.

Diagnosis

I've had a PPPoE/PAP implementation up for years.  Upon installing any of the above IOS releases, that implementation stopped working.  The symptom is the connection flapping (up/down) continuously.  I got a hint this was an IOS bug by googling the symptom.  These PPP bugs have evidently manifested themselves in previous releases.

Turning up debug is really the only way to narrow down what is happening:
router#debug ppp authentication
router#debug ppp error
router#debug pppoe errors
Here's a sampling of the error messages you'll see:
PPPoE: Failed to add PPPoE switching subblock
PPPoE: Unexpected Event!. PPPoE switching Subblockdestroy called
Vi2 LCP: Sent too many CONFNAKs.  Switch to CONFREJ
I've had CHAP shut off on this implementation (again, for years) with this configured on the Dialer interface:
ppp chap refuse

Implementation

Turning on CHAP (and removing the 'refuse' command) seems to fix things for me.  That IOS CHAP code apparently is not bug-ridden, and my ISP evidently will allow CHAP authentication.  If yours doesn't, this won't help you.  Your only option is to drop back to a stable release and wait till Cisco corrects the PPP/PAP bug in a future release.
interface Dialer1
 ip address negotiated
 ip mtu 1492
 encapsulation ppp
 ip tcp adjust-mss 1452
 dialer pool 1
 dialer-group 1
 ppp authentication pap chap callin
 ppp chap hostname yourName
 ppp chap password yourPassword
 ppp pap sent-username yourName password yourPassword
 no cdp enable
Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, February 28, 2015

Building a Home/Lab VMWare ESXi System

Summary

In this post I'm going to discuss a recent lab hardware build.  I needed to upgrade my existing VMWare ESXi system.  The goals I had for this upgrade:
  • Relatively cheap hardware
  • Decent performance; it's just going to be a lab box
  • Smaller footprint than previous box
  • Quieter and more energy efficient than previous box
  • And more importantly - caveman-simple deployment.  I had zero interest in troubleshooting an ESXi install on non-compatible or barely compatible hardware.

Implementation

I did the typical Google research project to find hardware combos that others had got working with ESXi.  I focused on ESXi 5.5 Update 2 as that's the current release.  After the research, I decided to go with a SuperMicro bare-bones server.  

These types of systems are relatively complete.  The motherboard w/CPU and power supply are already installed in (and fit) the chassis.  You provide the RAM and hard drive.  That motherboard fit is a legitimate concern in these little 1U, half-depth rackmount systems.  I'm all about building boxes from scratch, but I didn't want to deal with figuring out what motherboard would fit what chassis.  In general, this server checked all my requirements boxes:
  • Roughly $500 on the street, before RAM and hard drive.
  • 8-core/2.4 GHz Intel Atom processor.  The motherboard will accept up to 64 GB of RAM.
  • Tiny little rackmount box.  1U and only 9.8" deep.  Fits nicely in a small rack.
  • Low noise.  I can't hear it above the router that's in the same rack.
  • Low power.  That Intel C2758 Atom processor is quite efficient, only 20 watts.
  • Caveman factor - others have had luck with getting ESXi up on this box.

Picture below of the actual server after I took it out of the packaging:



Below is what a "bare-bones" system looks like on the inside.  As mentioned, everything but the RAM and hard drive is supplied.



For RAM, I decided on some fairly cheap Kingston SODIMM's that I'd seen others say work in this SuperMicro box.  I went with 32 GB of RAM (4 x 8 GB).  SODIMM picture below:


For a hard drive, I went the SSD route for performance.  I bought a Samsung 120 GB model.  120 GB is way overkill for me as I keep all my VM images on a NAS, but that size still seems to be the sweet spot as far as price on these drives.  One annoying thing with the SuperMicro box - they don't include the mounting bracket for a drive.  You have to go buy that separately.  Only ~$7, but still an unnecessary pain in my book.  Below is a picture of that Samsung SSD mounted in the bracket.


Picture below of the bracket w/drive mounted in the SuperMicro chassis.


ESXi Installation

I don't have any stories of technical heroics employed to get ESXi installed on this box.  It just worked. 

I pulled down ESXi 5.5 Update 2 from VMWare's site and made a bootable thumb drive with UNetbootin.  There are multiple sites out there describing how to use that utility to make bootable ESXi images.  Simple.

Bill of Materials

Here's a consolidated list of the parts I used to build this box:


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 1, 2015

Combining MP4 files with avconv (updated)

Here's a shell script for combining .mp4 files into one (in chronological order).  Comes in handy for consolidating a bunch of short videos into a single file.


#!/bin/bash

# Combine all .mp4 files in the current directory into final.mp4, oldest first.

count=1
cmd="cat"

# For each input file, create a named pipe and transcode the file to MPEG-2
# into that pipe in the background; build up a 'cat' command of all the pipes.
for f in `ls -1rt *.mp4`
do
 PIPE="pipe"$count".mpg"
 mkfifo $PIPE
 avconv -i $f -c:v mpeg2video -q:v 5 -y $PIPE < /dev/null &
 cmd=$cmd" pipe"$count".mpg"
 count=`expr $count + 1`
done

# Concatenate the intermediate streams and re-encode the result to H.264/AAC.
$cmd | avconv -i pipe: -r 24 -vcodec libx264 -acodec libvo_aacenc -ab 61000 -ar 16000 -threads 0 -y final.mp4
rm pipe*mpg
Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, January 17, 2015

Backups with rsync

Summary

rsync is a nifty command for doing backups.  I'm not going to even attempt to go through all the options available with it.  I'm just going to show the one that works for me for doing backups of my NAS.

Implementation

$ rsync -ah --progress --delete --exclude ".Trash*" /source/ /target
  • -a : 'archive mode.' Sets a conglomeration of options that you can review on the man page if you're interested
  • -h : human-readable output
  • --progress : feedback on the progress of the backup
  • --delete : deletes any files on the target that don't exist on the source
  • --exclude : excludes any files from backup with the given pattern.  Ubuntu systems create a .Trash directory that doesn't need to be backed up
  • trailing slashes on source and target matter:  /source/ copies the contents of the source directory rather than the directory itself (a scripted version of this command follows below)
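If you'd rather drive this from a script (a cron job, for instance), a minimal Python wrapper around the same command might look like the sketch below - purely illustrative, with the same placeholder paths as above:

import subprocess

SOURCE = '/source/'    # trailing slash: copy the contents of /source
TARGET = '/target'

cmd = ['rsync', '-ah', '--progress', '--delete', '--exclude', '.Trash*', SOURCE, TARGET]
subprocess.call(cmd)   # returns rsync's exit code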



Copyright ©1993-2024 Joey E Whelan, All rights reserved.