Saturday, December 11, 2021

JavaScript Function Argument Options and Object Conditionals

Summary

This post is a set of code examples showing different ways to structure function arguments in JavaScript, along with a few examples of conditionally adding object properties.

Base

function example1(p1,p2,p3) {
    console.log('example1');

    const params = {
        "p1": p1,
        "p2": p2,
        "p3": p3
    };
    console.log(params);
};
example1('a', 'b', 'c');
example1
{ p1: 'a', p2: 'b', p3: 'c' }


Insufficient Number of Arguments

function example2(p1,p2,p3) {
    console.log('\nexample2')
    const params = {
        "p1": p1,
        "p2": p2,
        "p3": p3
    };
    console.log(params);   
};
example2('a', 'b');
example2
{ p1: 'a', p2: 'b', p3: undefined }


Argument with a default value

function example3(p1,p2,p3='c') {
    console.log('\nexample3')
    const params = {
        "p1": p1,
        "p2": p2,
        "p3": p3
    };
    console.log(params);
};
example3('a', 'b');
example3
{ p1: 'a', p2: 'b', p3: 'c' }


Arguments passed as an object

function example4(allparms) {
    console.log('\nexample4')
    const params = {
        "p1": allparms.p1,
        "p2": allparms.p2,
        "p3": allparms.p3
    };
    console.log(params); 
}
example4({"p1": 'a', "p2": 'b', "p3": 'c'});
example4
{ p1: 'a', p2: 'b', p3: 'c' }


Argument object destructured

function example5({ p1,p2,p3 }) {
    console.log('\nexample5');

    const params = {
        "p1": p1,
        "p2": p2,
        "p3": p3
    };
    console.log(params);
};
example5({"p1": "a", "p2": "b", "p3": "c"});
example5
{ p1: 'a', p2: 'b', p3: 'c' }


Destructured arguments, undefined argument

function example6({ p1,p2,p3 }) {
    console.log('\nexample6');

    const params = {
        "p1": p1,
        "p2": p2,
        "p3": p3
    };
    console.log(params);
};
example6({"p1": "a", "p2": "b"});
example6
{ p1: 'a', p2: 'b', p3: undefined }


Destructured arguments, undefined arg, conditional object property

function example7({ p1,p2,p3 }) {
    console.log('\nexample7');

    const params = {
        "p1": p1,
        "p2": p2,
        ...(p3 && {"p3": p3})
    };
    console.log(params);
};
example7({"p1": "a", "p2": "b"});
example7
{ p1: 'a', p2: 'b' }
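One caveat with the (p3 && {...}) pattern above: the && check drops any falsy value, not just undefined.  If a legitimate value like 0 or an empty string could be passed, an explicit undefined check is safer.  A variation on example7 illustrating that (not one of the original examples):

function example7b({ p1,p2,p3 }) {
    console.log('\nexample7b');

    const params = {
        "p1": p1,
        "p2": p2,
        ...(p3 !== undefined && {"p3": p3})
    };
    console.log(params);
};
example7b({"p1": "a", "p2": "b", "p3": 0});
example7b
{ p1: 'a', p2: 'b', p3: 0 }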


Destructured arguments, default value

function example8({ p1,p2,p3='c' }) {
    console.log('\nexample8');

    const params = {
        "p1": p1,
        "p2": p2,
        ...(p3 && {"p3": p3})
    };
    console.log(params)
};
example8({"p1": "a", "p2": "b"});
example8
{ p1: 'a', p2: 'b', p3: 'c' }


Variadic function, rest parameter

function example9(...allParms) {
    console.log('\nexample9');

    let params = {};
    let ind = 1;
    for (let parm of allParms) {
        params[`p${ind++}`] = parm;
    };

    console.log(params);
};
example9("a", "b", "c");
example9
{ p1: 'a', p2: 'b', p3: 'c' }
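The rest parameter above gathers the call-site arguments into an array.  Its counterpart, the spread operator, does the reverse and forwards an array into positional parameters - reusing example1 from the top of this post:

const args = ['a', 'b', 'c'];
example1(...args);
example1
{ p1: 'a', p2: 'b', p3: 'c' }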


Gist


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, December 4, 2021

Google Cloud Serverless VPC + Cloud Functions

Summary


I'll show an example configuration of GCP Serverless VPC Access in this post.  The scenario is a Cloud Function that needs to access Memorystore (GCP managed Redis).  Memorystore is isolated in a VPC with a private address range - good for security, but it means a Cloud Function can't reach it directly.  To access that VPC from Cloud Functions, a Serverless VPC Access connector needs to be created.
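For reference, the connector itself can be created from the command line; a sketch with placeholder names and IP range:

gcloud compute networks vpc-access connectors create redis-connector \
  --network=default \
  --region=us-central1 \
  --range=10.8.0.0/28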

Architecture




Memorystore Configuration




Serverless VPC Configuration



Cloud Function Configuration





Cloud Function Redis Client Connection Code


const {createClient} = require('redis');

function getClient() {
    const client = createClient({
        socket: {
            host: process.env.REDIS_HOST   // private IP of the Memorystore instance, reachable via the VPC connector
        },
        password: process.env.REDIS_PASS   // Memorystore AUTH string
    });
    client.on('error', (err) => { 
        throw Error(`redis client error: ${err}`);
    });
    return client;
}
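A minimal usage sketch (the exported function name and key are placeholders): with the VPC connector attached to the Cloud Function and REDIS_HOST/REDIS_PASS set as environment variables, node-redis v4 just needs an explicit connect:

exports.demo = async (req, res) => {
    const client = getClient();
    await client.connect();                      // node-redis v4 requires an explicit connect
    await client.set('hello', 'world');
    res.send(await client.get('hello'));
    await client.quit();
};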

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, November 28, 2021

Google Cloud API Gateway - M2M client, GCF server

Summary

I'll be showing some of the detailed configuration necessary to deploy API Gateway with a Cloud Functions back-end and authentication for a non-human (machine) client.  I'll focus on the front-end and back-end authentication configuration.  I'll also show the client side in Node.js, which is very thinly documented by Google.


Architecture




Authentication

Back-end:  Cloud Functions

The back-end GCF is deployed with authentication required.  The API Gateway is configured to operate under a Service Account that has the Cloud Functions Invoker role.

Front-end:  Machine Client

Configuration here is significantly more complicated than for the back-end.  Configuration areas:
  • A Service Account needs to be created and a SA key downloaded.  That key is then used to sign a JWT for authentication to the API Gateway.  (Example commands below.)
  • Security definitions must be added to the OpenAPI spec (Swagger 2.0) that specify that SA as an allowed token issuer.
  • The machine client itself must generate a JWT per the API Gateway's requirements and sign that JWT with the SA key.
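A sketch of the Service Account and key creation from the first bullet, using the machine-service account name that appears in the security definition below and the sakey.json file name used by the client code:

gcloud iam service-accounts create machine-service \
  --project=kvpstore

gcloud iam service-accounts keys create sakey.json \
  --iam-account=machine-service@kvpstore.iam.gserviceaccount.com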

Code

OpenAPI Security Definition

securityDefinitions:
  machine-service:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "machine-service@kvpstore.iam.gserviceaccount.com"
    x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/machine-service@kvpstore.iam.gserviceaccount.com"
security:
  - machine-service: []

Machine Client-side 


'use strict';
const fetch = require('node-fetch');
const jwt = require('jsonwebtoken');
 
const sakey = require('./sakey.json');  //json file downloaded from Google IAM
const EMAIL = sakey.client_email;
const AUDIENCE = 'your audience';// this value corresponds to the "Managed service" name of the API Gateway
const ALGORITHM = 'RS256';
const GWY_URL = 'your URL';
const KEY = sakey.private_key;

async function exampleAPICall(email, audience, key, algorithm) {
    const now = Math.floor(Date.now() / 1000);   // JWT iat/exp are in seconds, not milliseconds
    const payload = {
        iat: now,
        exp: now + 3600,
        iss: email,
        aud: audience,
        sub: email,
        email: email
    };

    const token = jwt.sign(payload, key, {algorithm: algorithm});

    const response = await fetch(`${GWY_URL}/guid`, {
        method: 'GET',
        headers: {
            'Authorization': `Bearer ${token}`
        }
    });
    return await response.json();
}

exampleAPICall(EMAIL, AUDIENCE, KEY, ALGORITHM)
    .then(json => console.log(json));

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Tuesday, October 12, 2021

Photo Album on Google Drive

 

Photo from my pro football officiating days.  I had to leave the game at age 5 due to penalty flag-induced carpal tunnel.

Summary

I'm going to cover a personal use case in this post: making a large number of family photos securely accessible to family members.  I've been maintaining a website for years for this purpose, but I decided recently that maintaining that site was more work than was really necessary.  Google recently terminated their Photo application, but Drive works just fine for sharing photos.  I had a large enough collection of photos to upload to Drive that it made sense to write code to do it.


Architecture

Drive has a documented Python API.  I set up a Google Cloud project with a Service Account that allows access to Drive.  I used that Service Account for all my API calls to Drive.
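A minimal sketch of how that Service Account gets wired into the Drive v3 Python client (the key file name and scope shown here are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']

creds = service_account.Credentials.from_service_account_file('sakey.json', scopes=SCOPES)
service = build('drive', 'v3', credentials=creds)   # equivalent to the self.service object used in the snippets below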




The diagram below depicts the transfer scenario.  The local images are stored in a year, year-month hierarchy.  That hierarchy is replicated on Drive.



Code

Main Loop

I have my photos stored locally in a folder hierarchy that follows this:  Year -> Year-Month.  The loop below iterates through all the local year and year-month folders to upload files to Drive.
    def upload_all_images(self):
        years = os.listdir(LOCAL_ROOT)

        for year in years:
            print('Uploading year: ' + year)
            year_path = os.path.join(LOCAL_ROOT, year)
            year_months = os.listdir(year_path)
            for year_month in year_months:
                print('Uploading year_month: ' + year_month)
                self.upload_folder_images(year, year_month)

Photo Folder Upload

The function below creates the necessary year and year-month folders on Drive if they don't already exist.  It then iterates through the local year-month folder to upload each image file to Drive.
 
    def upload_folder_images(self,
                    year,
                    year_month):
        year_month_path = os.path.join(os.path.join(LOCAL_ROOT, year), year_month)
        if (os.path.isdir(year_month_path)):
            year_folder_id = self.get_folder_id(year)
            if (not year_folder_id):
                year_folder_id = self.create_folder(year, self.root_folder_id)
       
            year_month_folder_id = self.get_folder_id(year_month)
            if (not year_month_folder_id):
                year_month_folder_id = self.create_folder(year_month, year_folder_id)

            for file in os.listdir(year_month_path):
                local_file_path = os.path.join(year_month_path,file)
                if (os.path.isfile(local_file_path)):
                    try:
                        self.upload_file(local_file_path, year_month_folder_id)      
                    except Exception as e:
                        print(e)
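The get_folder_id and create_folder helpers used above aren't shown in this post.  They boil down to a Drive files().list query on the folder MIME type and a files().create call; a simplified sketch:

    def get_folder_id(self, name):
        # return the id of the first Drive folder matching this name, or None
        results = self.service.files().list(
                        q="mimeType = 'application/vnd.google-apps.folder' and name = '" + name + "'",
                        spaces='drive',
                        fields='files(id)').execute(num_retries=NUM_TRIES)
        items = results.get('files', [])
        return items[0]['id'] if items else None

    def create_folder(self, name, parent_id):
        # create a Drive folder under the given parent and return its id
        file_metadata = {
            'name': name,
            'mimeType': 'application/vnd.google-apps.folder',
            'parents': [parent_id]
        }
        folder = self.service.files().create(body=file_metadata,
                        fields='id').execute(num_retries=NUM_TRIES)
        return folder.get('id')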

File Upload

The code below checks to see if the file already exists on Drive.  If not, then it calls necessary Drive API functions to upload the image.
    def upload_file(self,
                    local_file_path,
                    folder_id):

        file_name = os.path.basename(local_file_path)
        #check if file already exists on gdrive.  if not, create the file on google drive.
        results = self.service.files().list(q="'" + folder_id + "' in parents and name = '"  + file_name + "'", 
                                                spaces='drive',
                                                fields='files(id)').execute(num_retries=NUM_TRIES)
        items = results.get('files', [])
        if not items:
            print('Uploading: ' + file_name)
            try:
                outfile = self.resize(local_file_path)
                media = MediaFileUpload(outfile)
                file_metadata = {'name': file_name, 'parents': [folder_id]}
                self.service.files().create(body=file_metadata,
                                media_body=media,
                                fields='id').execute(num_retries=NUM_TRIES)
                os.remove(outfile)
            except Exception as e:
                print(e)
        else:
            print('File already exists on gdrive: ' + file_name)
        return 

Image Resizing

I use the PIL library to reduce the resolution (and thus size) of each image file to reduce my Drive space.
    def resize(self, 
            infile):
        outfile = os.path.join('./', os.path.basename(infile))
        im = Image.open(infile)
        if max(im.size) < 1000:
            size = im.size
        else:
            size = (1000,1000)

        im.thumbnail(size, Image.ANTIALIAS)  # ANTIALIAS is a deprecated alias for Image.LANCZOS in newer Pillow versions
        im.save(outfile, optimize=True, quality=85)
        return outfile

Source


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Thursday, August 19, 2021

PC Build

Summary

I'm going to present a series of pics depicting a new PC build I completed this week.  This replaces the PC I built ~8 years ago (which is still fully operational).  That PC had 4 CPU cores and 16 GB RAM.  This one has 16 cores and 64 GB, so a 4x upgrade.

Equipment List

Pic below with the boxed components used in this build.



  • Motherboard - Gigabyte Aorus Elite.  I've had good luck with Gigabyte MBs in the past, so I'm a repeat customer.
  • CPU - AMD Ryzen 5950X.  16 CPU cores, 32 logical cores with Simultaneous Multithreading (SMT).
  • Case - Corsair Obsidian 750D.  Again, another case of repeat success + customer.  This one has thorough ventilation + 3x140mm fans - 2 in front, 1 behind.
  • Drive - Samsung 980 SSD 1 TB.  I consider Samsung the top end on drives.  This one can be had at a good price.
  • Power Supply - Seasonic Focus PX-750.  More repeat success + customer.  The Seasonic PSU I bought 8 years ago is still running.
  • CPU Cooler - Noctua NH-D15 Chromax.Black.  This is an air cooler - and it's a beast.  I really like how this company makes fans.  Super quiet.
  • GPU - Biostar Radeon RX 550.  Contrary to the rest of this build - this component is on the low end.  It's old GPU tech.  Unfortunately, the hordes of Bitcoin miners out there have pushed the prices of current-generation GPUs beyond what I'm willing to pay.  I'm not a gamer, so I don't really need the high end anyway.
  • RAM - G.Skill RipJaws V Series 64GB.  More repeat success + customer.  The Aorus board will take 4 x 32GB (128 GB), so I'm filling it to half at this point.
Unboxed pic of the same components below.



BIOS Updating

The BIOS on these boards needs to be updated before you even install the CPU.  Gigabyte makes that easy with a feature called Q-Flash Plus.  Steps:
  • Download the current BIOS rev from Gigabyte's site
  • Rename it to 'GIGABYTE.bin' and copy it to a USB drive.
  • Put that drive in the MB USB slot that is tagged for this.
  • Power up the board.
  • Push the Q-flash button on the board.  The associated MB LED will light up until the update is complete.
Pic below of me flashing this board.


Seating the CPU

Pic below of the Ryzen chip seated on the AM4 socket of this board.


Installing the CPU Cooler

The stock cooler retainer brackets on this board need to be removed and replaced with the Noctua bracket.  The Noctua cooler includes thermal paste, so no need to buy any separately.  Pic below of the brackets and paste applied.



Pic below of the Noctua cooler now installed.


Note about Fan Headers

This board has a total of 4 fan headers.  For this build, there are 3 fans in the case and 2 on the CPU cooler.  That equals 5.  That would be a problem except for the fact that the Noctua kit includes a Y-cable for splicing two fans onto one header.  I'm showing a pic below of that cable on the 2 CPU fan cables.  I didn't actually install it that way due to physical distances between fans and headers.  Instead, I used that Y-cable to combine the case's front two fans into one and then connected them to one of the SYS FAN headers.  I connected each of the CPU cooler fans to the two CPU fan headers.  The rear case fan I connected to the rear SYS FAN header.

Note that I've also installed both CPU fans on the cooler in this pic.  This works for this build.


RAM Installation

Pic below of both RAM sticks in place.  They fit under the CPU cooler + fans.  For only 2 sticks, you use slots A2 and B2 on this board.


NVMe Drive Installation

Pic below of the NVMe drive in place on this board.  There's a piece of plastic wrap on the M.2 heatsink that you need to remove before final seating.



GPU Installation

My low-end card in the main PCIe slot of the MB below.



Motherboard Seating into Case

Pics below of the full package installed in the case.




Complete System

BIOS screenshot of the completed system below:

Ubuntu System Monitor screenshot below:



Completed build in its native habitat.


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Wednesday, June 30, 2021

Google Cloud DevOps

Summary

This post demonstrates Google Cloud's serverless deployment pipeline - Cloud Build.  The use case is a fairly simple Python app that exposes a REST interface via Flask and uses NLTK for text tokenization.
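For context, a minimal sketch of what such an app can look like (the endpoint name and payload shape are illustrative; word_tokenize assumes the NLTK 'punkt' data is available, which the 'popular' download in the build step below covers):

from flask import Flask, request, jsonify
import nltk

app = Flask(__name__)

@app.route('/tokenize', methods=['POST'])
def tokenize():
    data = request.get_json(silent=True) or {}
    return jsonify({'tokens': nltk.word_tokenize(data.get('text', ''))})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)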

Overall Architecture

The diagram below depicts the Cloud Build pipeline.

Python Application Organization



Cloud Build Steps

Cloud Build is orchestrated from a cloudbuild.yaml file.  Example code below with associated diagram.

steps:
  #Unit Test
  - name: python
    entrypoint: /bin/sh
    args: ["-c", 
     "pip install -r requirements.txt &&\ 
     python -c \"import nltk; nltk.download('popular', download_dir='/home/nltk_data')\" &&\
     export NLTK_DATA=/home/nltk_data &&\ 
     python -m unittest"] 
  
  #Docker Build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 
            'us-central1-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/cleaner', '.']
  
  #Docker push to Google Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push',  'us-central1-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/cleaner']
  
  #Deploy to Cloud Run
  - name: google/cloud-sdk
    args: ['gcloud', 'run', 'deploy', 'cleaner', 
           '--image=us-central1-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/cleaner', 
           '--region', 'us-central1', '--platform', 'managed', 
           '--allow-unauthenticated']
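The pipeline can also be kicked off manually while testing; a sketch of that command with a placeholder value for the repository substitution:

gcloud builds submit --config=cloudbuild.yaml --substitutions=_REPO_NAME=my-repo .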


Screenshots of Results

Cloud Build





Artifact Registry



Cloud Run




Source


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Saturday, April 17, 2021

Python Virtual Environments + VSCode

Summary

This is a brief post on setting up a Python 3 virtual environment.  A virtual environment enables you to maintain a clean, isolated set of Python modules for a specific project.

Step 1

Create a folder for your project.  For this demo, I'm calling it 'envdemo'.

Step 2

Open that folder in VSCode and start a terminal session.


Step 3

Execute the python3 command to create a virtual environment in that folder.
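A typical form of that command, using the standard library venv module (the .venv folder name is just a convention):

python3 -m venv .venv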


Step 4

Go to View > Command Palette, run 'Python: Select Interpreter', and then select the Python interpreter from your new virtual environment.



Step 5

Open a new terminal session.  A workspace settings folder (.vscode) has been created for this project, and the default interpreter is set to the one in your new virtual environment.  Also, note that no Python modules have been installed yet in this environment.



Step 6

Proceed with installing the Python modules necessary for this particular project and developing code.
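For example, with the virtual environment selected, installed packages land in the project's .venv rather than in the system Python (flask is just an arbitrary example here):

pip install flask
pip freeze > requirements.txt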



Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 28, 2021

RFC 7523 Demo

Summary

I'll be covering the usage of the RFC 7523 authorization grant type in this post.  I'll create a server-side implementation as a Google Cloud Function and a simple client in Node.js.

Architecture

The implementation consists of a Google Cloud Function that is triggered on an HTTP POST request.  The request has a form-urlencoded body consisting of the grant_type parameter and an assertion containing the JSON Web Token (JWT) - per the RFC.  The response is a JSON object consisting of the token type, bearer token, and the decoded version of the original JWT received.  That last item isn't required, but I included it to aid in troubleshooting.


Server-side Snippet

As discussed, the server is implemented as a GCF.  For a Node implementation, that consists of an Express server.  The express-jwt middleware is used to implement the JWT handling.

const express = require('express');
const expressJwt = require('express-jwt');   // v6-style default export
const jwt = require('jsonwebtoken');

// sharedKey, issuer, audience, and privateKey are configuration values not shown in this snippet
const app = express();
app.use(express.urlencoded({ extended: true }));   // parse the form-urlencoded grant body

app.use(expressJwt({
    secret: sharedKey,
    issuer: issuer,
    audience: audience,
    algorithms: ['HS256', 'HS384', 'HS512'],    
    requestProperty: 'token',
    getToken: (req) => {
        return req.body.assertion;
    }
}));

app.post('/rfc7523', (req, res) => {
    if (req.token) {
        console.log(`Received token: ${JSON.stringify(req.token)}`);
        const alg = 'HS512'
        const payload = { 
            "iss": 'oauth issuer',
            "sub": 'oauth authority',
            "aud": 'm2mclient',
            "exp": Math.round(Date.now()/1000 + 3) //3 second expiration
        };
        const accessToken = jwt.sign(payload, privateKey, {algorithm: alg});
            
        res.status(200)
        .json({
            token_type: 'bearer',
            rec_token: req.token,
            access_token: accessToken
        });
    }
    else {
        res.status(400).json({error: 'no token found'});
    }
});

Client-side Snippet


const fetch = require('node-fetch');
const jwt = require('jsonwebtoken');

// issuer, subject, audience, sharedKey, and url are configuration values not shown in this snippet
(async () => {
    const payload = { 
        "iss": issuer,
        "sub": subject,
        "aud": audience,
        "exp": Math.round(Date.now()/1000 + 3) //3 second expiration        
    };
    const alg = 'HS512'
    const token = jwt.sign(payload, sharedKey, {algorithm: alg});
    const authGrant = encodeURI('urn:ietf:params:oauth:grant-type:jwt-bearer');
    const response = await fetch(url, {
        method: 'POST',
        headers: {'Content-Type': 'application/x-www-form-urlencoded'},
        body: `grant_type=${authGrant}&assertion=${token}`
    });
    
    const json = await response.json();
    console.log(`Results: ${JSON.stringify(json, null, 4)}`);
})();

Results


Results: {
...
    "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJvYXV0aCBpc3 - abbreviated"
}

Source


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 21, 2021

Simulation with SimPy

Summary

In this post, I'll be using the SimPy Python framework to create a simulation model.  I'll cover the base case here of generating requests to a finite set of resources.  I'll compare the simulation results to the expected/theoretical results for an Erlang B model.

Architecture

The diagram below depicts the base simulation model.  A Request Generator sends a stream of requests at intervals corresponding to a Poisson process.  An intermediary process, the Regulator, makes a decision based on resource availability of where to route the request.  If all resources are busy, the request is blocked.  Otherwise, the request assumes a position in the queue and is subsequently serviced by a worker for a period of time drawn from a normal distribution around the average handle time.
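Before diving into the full model (which uses my Distributor, Regulator, and RequestGenerator classes - see the snippet below), here is a stripped-down sketch of the same idea in plain SimPy: Poisson arrivals competing for a finite set of positions, with blocking when none are free.  The constants and the normal handle-time spread are placeholders, not the values from the full model:

import random
import simpy

ARRIVAL_RATE = 20        # mean requests per second (Poisson process)
HANDLE_MEAN = 900        # mean handle time in seconds
HANDLE_SD = 60           # placeholder standard deviation
POSITIONS = 6000         # finite number of positions

def handle(env, positions):
    with positions.request() as req:
        yield req
        yield env.timeout(max(1.0, random.normalvariate(HANDLE_MEAN, HANDLE_SD)))

def generate(env, positions, stats):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))    # exponential inter-arrival times
        stats['offered'] += 1
        if positions.count < positions.capacity:
            env.process(handle(env, positions))                # a position is free: take it
        else:
            stats['blocked'] += 1                              # all positions busy: block

env = simpy.Environment()
positions = simpy.Resource(env, capacity=POSITIONS)
stats = {'offered': 0, 'blocked': 0}
env.process(generate(env, positions, stats))
env.run(until=4 * 3600)
print('blocking probability:', stats['blocked'] / stats['offered'])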



Code Snippet


from random import Random

from simpy import Environment

# Distributor, Regulator, and RequestGenerator are the model's own classes (defined in the full source),
# as are the RANDOM_SEED, POSITIONS, HANDLE_TIME_MEAN, QUEUE_COST, HANDLE_COST, STEADY_RATE,
# and TOTAL_DURATION constants.
def scenario_one():
    rand: Random = Random()
    rand.seed(RANDOM_SEED)
    env: Environment = Environment()
    distributor: Distributor = Distributor(rand, env, POSITIONS, 0, HANDLE_TIME_MEAN, 0, QUEUE_COST, HANDLE_COST)
    regulator: Regulator = Regulator(env, distributor)
    requestGenerator: RequestGenerator = RequestGenerator(rand, env, regulator, 0, 0, STEADY_RATE)
    env.run(until=TOTAL_DURATION)
    print(f'***  Scenario 1: ErlangB Sanity Check.  Partial simulation: no workers, no surge, no deflection ***')
    print(f'Total Requests: {requestGenerator.total_requests}')
    print(f'Total Queued Requests: {distributor.total_queued}')
    print(f'Total Serviced Requests: {distributor.total_serviced}')
    print(f'Total Deflected Requests: {regulator.total_deflected}')
    print(f'Total Blocked Requests: {regulator.total_blocked}')
    print(f'Max Queue Depth: {distributor.max_positions_depth}')
    print(f'Max Resources Consumed: {distributor.max_resources}')
    print(f'Probability of Blockage: {regulator.total_blocked/requestGenerator.total_requests}')
    print(f'')

Results


Below are the results with a 20 requests/sec arrival rate and a 15-minute average handle time.  The Erlang B blocking probability for those parameters is 0.666694.
***  Scenario 1: ErlangB Sanity Check.  Partial simulation: no workers, no surge, no deflection ***
Total Requests: 864377
Total Queued Requests: 288000
Total Serviced Requests: 0
Total Deflected Requests: 0
Total Blocked Requests: 576376
Max Queue Depth: 6000
Max Resources Consumed: 6000
Probability of Blockage: 0.666810893857657
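For reference, the theoretical figure above can be reproduced with the standard Erlang B recurrence B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1)), where A is the offered load of 20 requests/sec * 900 sec = 18,000 erlangs and n runs up to the 6,000 positions:

def erlang_b(servers, load):
    # iterative Erlang B recurrence
    b = 1.0
    for n in range(1, servers + 1):
        b = (load * b) / (n + load * b)
    return b

print(erlang_b(6000, 20 * 900))   # ~0.6667, in line with the 0.666694 figure above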


Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, January 10, 2021

Google Cloud Healthcare - Analytics

Summary

This post is a continuation of my previous one on the Google Healthcare API.  Here, I'll push the FHIR datastore into Google's data warehouse - BigQuery.  Once in BigQuery, the data can be subjected to traditional analytics tools (SQL queries) and visualized with Google's report/dashboard tool - Data Studio.  For the purposes of these demos, I extended the Synthea-generated recordsets to 50 patient bundles.

Architecture

Below is a diagram of the cloud architecture.  FHIR data (JSON-based) is transformed into relational database tables on BigQuery.  SQL queries can then be created to analyze the data.  Finally, the output of those queries can be saved as Views and then presented in charts in Data Studio.


BigQuery Execution

FHIR Export

Below is the gcloud command-line to export an FHIR datastore to BigQuery.  This is a one-time export; however, it is possible to configure a continuous stream of updates from the FHIR store to BigQuery as well.

gcloud healthcare fhir-stores export bq $FHIR_STORE_ID \
  --dataset=$DATASET_ID \
  --location=$LOCATION \
  --bq-dataset=bq://$PROJECT_ID.$BIGQUERY_DATASET_ID \
  --schema-type=analytics

Query 1 - Top Ten Medications

At this point, a relational database is created within BigQuery and ready for analytics.  Below are a query and its output to find the top 10 prescribed meds within the FHIR datastore.


Query 2 - Demographics

Below is a query that provides a bucketing of the patient age groups.


Query 3 - Top Ten Conditions

Below is a query to derive the top 10 conditions within the patient population.


Views

I then created views for each of these queries.  Those views will be used for the presentation layer of the output in Data Studio.  Below is the view of the demographics query.


Data Studio Configuration

Now that the views are set up in BigQuery, it's possible to create visualizations of them using Data Studio.  Below are the steps to do that.

Create a blank report


Select BigQuery as the data source



Select the BigQuery View


Configure the presentation


Choose the chart type


Output

Top Ten Medications


Demographics - Age Distribution


Top Ten Conditions


Source


Copyright ©1993-2024 Joey E Whelan, All rights reserved.