Sunday, February 28, 2021

RFC 7523 Demo

Summary

I'll be covering the usage of the RFC 7523 authorization grant type, the JWT bearer grant for OAuth 2.0, in this post.  I'll create a server-side implementation as a Google Cloud Function and a simple client in Node.js.

Architecture

The implementation consists of a Google Cloud Function that is triggered by an HTTP POST request.  The request has a form-urlencoded body consisting of the grant_type parameter and an assertion parameter carrying the JSON Web Token (JWT) - per the RFC.  The response is a JSON object consisting of the token type, the access token, and the decoded claims of the original JWT received.  That last item isn't required by the RFC, but I included it to aid in troubleshooting.
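Per the RFC, the grant_type value is the fixed URN urn:ietf:params:oauth:grant-type:jwt-bearer.  As a minimal sketch, Node's built-in URLSearchParams produces the correctly encoded body (the assertion value here is a placeholder, not a real signed JWT):

```javascript
// Build the form-urlencoded body defined by RFC 7523.
// 'header.payload.signature' is a stand-in for a real signed JWT.
const params = new URLSearchParams({
    grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    assertion: 'header.payload.signature'
});
const body = params.toString();
console.log(body);
```

URLSearchParams percent-encodes the reserved colons in the URN, so the body round-trips cleanly through any form parser.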


Server-side Snippet

As discussed, the server is implemented as a Google Cloud Function (GCF).  A Node.js GCF runs on an Express server under the hood, so standard Express middleware applies.  The express-jwt middleware is used to verify the incoming JWT assertion.
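The snippets below assume boilerplate along these lines.  The package names are the ones the code references; the key and claim values are illustrative placeholders.  Note that express.urlencoded() is needed so the form body is parsed before express-jwt's getToken callback reads req.body.assertion:

```javascript
const express = require('express');
const expressJwt = require('express-jwt');
const jwt = require('jsonwebtoken');

const app = express();
// Parse the form-urlencoded body so req.body.assertion exists when getToken() runs
app.use(express.urlencoded({ extended: true }));

// Placeholder values - in a real deployment these come from a secret store
const sharedKey = process.env.SHARED_KEY;   // HMAC secret shared with the client
const privateKey = process.env.PRIVATE_KEY; // key the server signs access tokens with
const issuer = 'client issuer';             // expected iss claim in the assertion
const audience = 'token endpoint';          // expected aud claim in the assertion
```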

app.use(expressJwt({
    secret: sharedKey,                          // HMAC key shared with the client
    issuer: issuer,                             // expected iss claim
    audience: audience,                         // expected aud claim
    algorithms: ['HS256', 'HS384', 'HS512'],    // accepted HMAC algorithms
    requestProperty: 'token',                   // decoded claims land on req.token
    getToken: (req) => {
        return req.body.assertion;              // pull the JWT from the form body, per RFC 7523
    }
}));

app.post('/rfc7523', (req, res) => {
    if (req.token) {
        console.log(`Received token: ${JSON.stringify(req.token)}`);
        const alg = 'HS512';
        const payload = { 
            "iss": 'oauth issuer',
            "sub": 'oauth authority',
            "aud": 'm2mclient',
            "exp": Math.round(Date.now()/1000 + 3) // 3-second expiration
        };
        const accessToken = jwt.sign(payload, privateKey, {algorithm: alg});
            
        res.status(200)
        .json({
            token_type: 'bearer',
            rec_token: req.token,
            access_token: accessToken
        });
    }
    else {
        res.status(400).json({error: 'no token found'});
    }
});

Client-side Snippet


(async () => {
    const payload = { 
        "iss": issuer,
        "sub": subject,
        "aud": audience,
        "exp": Math.round(Date.now()/1000 + 3) //3 second expiration        
    };
    const alg = 'HS512';
    const token = jwt.sign(payload, sharedKey, {algorithm: alg});
    // encodeURIComponent (not encodeURI) is needed to percent-encode the colons in the URN
    const authGrant = encodeURIComponent('urn:ietf:params:oauth:grant-type:jwt-bearer');
    const response = await fetch(url, {
        method: 'POST',
        headers: {'Content-Type': 'application/x-www-form-urlencoded'},
        body: `grant_type=${authGrant}&assertion=${token}`
    });
    
    const json = await response.json();
    console.log(`Results: ${JSON.stringify(json, null, 4)}`);
})();

Results


Results: {
...
    "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJvYXV0aCBpc3 - abbreviated"
}

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 21, 2021

Simulation with SimPy

Summary

In this post, I'll be using the SimPy Python framework to create a simulation model.  I'll cover the base case here of generating requests to a finite set of resources, then compare the simulation results to the expected/theoretical results for an Erlang B model. 

Architecture

The diagram below depicts the base simulation model.  A Request Generator sends a stream of requests with inter-arrival times corresponding to a Poisson process.  An intermediary process, the Regulator, decides where to route each request based on resource availability.  If all resources are busy, the request is blocked.  Otherwise, the request assumes a position in the queue and is subsequently serviced by a worker for a period of time drawn from a Normal distribution around the average handle time.
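Poisson arrivals boil down to exponentially distributed inter-arrival times with mean 1/rate, which inverse-transform sampling generates in one line.  A quick language-agnostic sketch of that math (shown in Node here; the rate value is illustrative):

```javascript
// Inverse-transform sampling: if U ~ Uniform(0,1), then
// -ln(1 - U) / rate is exponentially distributed with mean 1/rate,
// i.e. the gap between arrivals of a Poisson process.
function interArrival(rate) {
    return -Math.log(1 - Math.random()) / rate;
}

// Sanity check: at 20 requests/sec the mean gap should be ~0.05 sec
const rate = 20;
const n = 100000;
let sum = 0;
for (let i = 0; i < n; i++) {
    sum += interArrival(rate);
}
console.log((sum / n).toFixed(3)); // ~0.050
```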



Code Snippet


from random import Random
from simpy import Environment

# Distributor, Regulator, and RequestGenerator (plus the RANDOM_SEED, POSITIONS,
# HANDLE_TIME_MEAN, QUEUE_COST, HANDLE_COST, STEADY_RATE, and TOTAL_DURATION
# constants) are defined in the full source.
def scenario_one():
    rand: Random = Random()
    rand.seed(RANDOM_SEED)
    env: Environment = Environment()
    distributor: Distributor = Distributor(rand, env, POSITIONS, 0, HANDLE_TIME_MEAN, 0, QUEUE_COST, HANDLE_COST)
    regulator: Regulator = Regulator(env, distributor)
    requestGenerator: RequestGenerator = RequestGenerator(rand, env, regulator, 0, 0, STEADY_RATE)
    env.run(until=TOTAL_DURATION)
    print(f'***  Scenario 1: ErlangB Sanity Check.  Partial simulation: no workers, no surge, no deflection ***')
    print(f'Total Requests: {requestGenerator.total_requests}')
    print(f'Total Queued Requests: {distributor.total_queued}')
    print(f'Total Serviced Requests: {distributor.total_serviced}')
    print(f'Total Deflected Requests: {regulator.total_deflected}')
    print(f'Total Blocked Requests: {regulator.total_blocked}')
    print(f'Max Queue Depth: {distributor.max_positions_depth}')
    print(f'Max Resources Consumed: {distributor.max_resources}')
    print(f'Probability of Blockage: {regulator.total_blocked/requestGenerator.total_requests}')
    print()

Results


Below are the results with a 20 requests/sec arrival rate and a 15-minute average handle time.  The Erlang B blocking probability for those parameters is 0.666694, which the simulation matches closely.
***  Scenario 1: ErlangB Sanity Check.  Partial simulation: no workers, no surge, no deflection ***
Total Requests: 864377
Total Queued Requests: 288000
Total Serviced Requests: 0
Total Deflected Requests: 0
Total Blocked Requests: 576376
Max Queue Depth: 6000
Max Resources Consumed: 6000
Probability of Blockage: 0.666810893857657
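That theoretical figure can be reproduced with the iterative form of the Erlang B recursion.  With 20 requests/sec and a 900-second average handle time, the offered load is 20 × 900 = 18,000 erlangs, and the resource count of 6,000 is taken from the Max Resources Consumed line above.  A short sketch (shown in Node; the math is language-agnostic):

```javascript
// Erlang B recursion: B(0, A) = 1, B(n, A) = A*B(n-1, A) / (n + A*B(n-1, A)).
// The iterative form avoids the huge factorials in the closed-form expression.
function erlangB(servers, offeredLoad) {
    let b = 1.0;
    for (let n = 1; n <= servers; n++) {
        b = (offeredLoad * b) / (n + offeredLoad * b);
    }
    return b;
}

const A = 20 * 900; // 20 requests/sec * 900 sec average handle time = 18,000 erlangs
console.log(erlangB(6000, A)); // ~0.666694, in line with the simulated 0.66681
```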

