Sunday, February 28, 2021

RFC 7523 Demo

Summary

In this post, I'll cover the use of the RFC 7523 (JSON Web Token Profile for OAuth 2.0) authorization grant type.  I'll create a server-side implementation as a Google Cloud Function and a simple client in Node.js.

Architecture

The implementation consists of a Google Cloud Function that is triggered on an HTTP POST request.  Per the RFC, the request has a form-urlencoded body consisting of the grant_type parameter and an assertion containing the JSON Web Token (JWT).  The response is a JSON object consisting of the token type, the bearer access token, and the decoded claims of the original JWT received.  That last item isn't required by the RFC, but I included it to aid in troubleshooting.
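Concretely, the POST body the function expects looks like the sketch below (the assertion value here is a placeholder, not a real signed token; note that the grant_type URN must itself be urlencoded):

```python
from urllib.parse import urlencode

# Placeholder assertion; a real client supplies a signed JWT here
assertion = 'eyJhbGciOiJIUzUxMiJ9.claims.signature'

# RFC 7523 fixes the grant_type value to this URN
body = urlencode({
    'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    'assertion': assertion
})
print(body)
```

urlencode percent-encodes the colons in the URN (urn%3Aietf%3Aparams...), which is the form the server's urlencoded body parser expects.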


Server-side Snippet

As discussed, the server is implemented as a GCF.  For a Node implementation, that amounts to an Express app.  The express-jwt middleware handles validation of the inbound JWT.

  const express = require('express');
  const expressJwt = require('express-jwt');
  const jwt = require('jsonwebtoken');

  const app = express();
  app.use(express.urlencoded({ extended: true })); // getToken below reads the form-urlencoded body

  // Validate the inbound JWT assertion; the decoded claims land on req.token
  app.use(expressJwt({
      secret: sharedKey,
      issuer: issuer,
      audience: audience,
      algorithms: ['HS256', 'HS384', 'HS512'],
      requestProperty: 'token',
      getToken: (req) => {
          return req.body.assertion;
      }
  }));

  app.post('/rfc7523', (req, res) => {
      if (req.token) {
          console.log(`Received token: ${JSON.stringify(req.token)}`);
          const alg = 'HS512';
          const payload = {
              iss: 'oauth issuer',
              sub: 'oauth authority',
              aud: 'm2mclient',
              exp: Math.round(Date.now() / 1000 + 3) // 3-second expiration
          };
          const accessToken = jwt.sign(payload, privateKey, { algorithm: alg });
          res.status(200)
              .json({
                  token_type: 'bearer',
                  rec_token: req.token,
                  access_token: accessToken
              });
      }
      else {
          res.status(400).json({ error: 'no token found' });
      }
  });
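For reference, what jwt.sign does with HS512 is a plain HMAC-SHA512 over the base64url-encoded header and payload.  A stdlib-only Python sketch of that signing step (the key and claims below are made up for illustration, not the values used above):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, per RFC 7515
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def hs512_sign(claims: dict, key: bytes) -> str:
    header = {'alg': 'HS512', 'typ': 'JWT'}
    signing_input = (b64url(json.dumps(header, separators=(',', ':')).encode())
                     + '.'
                     + b64url(json.dumps(claims, separators=(',', ':')).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha512).digest()
    return signing_input + '.' + b64url(sig)

token = hs512_sign({'iss': 'oauth issuer', 'aud': 'm2mclient'}, b'example-shared-key')
print(token)
```

The resulting token starts with eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9, the same header segment visible in the access_token in the Results section below.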

Client-side Snippet


  const fetch = require('node-fetch');
  const jwt = require('jsonwebtoken');

  (async () => {
      const payload = {
          iss: issuer,
          sub: subject,
          aud: audience,
          exp: Math.round(Date.now() / 1000 + 3) // 3-second expiration
      };
      const alg = 'HS512';
      const token = jwt.sign(payload, sharedKey, { algorithm: alg });
      // encodeURIComponent (not encodeURI) percent-encodes the colons in the URN
      const authGrant = encodeURIComponent('urn:ietf:params:oauth:grant-type:jwt-bearer');
      const response = await fetch(url, {
          method: 'POST',
          headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
          body: `grant_type=${authGrant}&assertion=${token}`
      });
      const json = await response.json();
      console.log(`Results: ${JSON.stringify(json, null, 4)}`);
  })();
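Since the access_token that comes back is itself just a signed JWT, the client can inspect its claims (though not trust them without verifying the signature) by base64url-decoding the middle segment.  A small sketch, using a made-up token rather than a real response:

```python
import base64
import json

def decode_payload(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload_b64 = token.split('.')[1]
    padded = payload_b64 + '=' * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# A made-up two-claim token for illustration (signature segment is ignored)
sample = ('eyJhbGciOiJIUzUxMiJ9.'
          + base64.urlsafe_b64encode(
                json.dumps({'iss': 'oauth issuer', 'aud': 'm2mclient'}).encode()
            ).rstrip(b'=').decode()
          + '.sig')

claims = decode_payload(sample)
print(claims['aud'])
```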

Results


Results: {
...
    "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJvYXV0aCBpc3 - abbreviated"
}

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Sunday, February 21, 2021

Simulation with SimPy

Summary

In this post, I'll be using the SimPy Python framework to create a simulation model.  I'll cover the base case here of generating requests to a finite set of resources.  I'll then compare the simulation results to the expected/theoretical results for an Erlang B model.

Architecture

The diagram below depicts the base simulation model.  A Request Generator sends a stream of requests at intervals corresponding to a Poisson process.  An intermediary process, the Regulator, decides where to route each request based on resource availability.  If all resources are busy, the request is blocked.  Otherwise, the request takes a position in the queue and is subsequently serviced by a worker for a period of time drawn from a Normal distribution around the average handle time.
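For intuition: a Poisson arrival process means exponentially distributed inter-arrival times, and blocking occurs whenever every position is occupied.  Below is a scaled-down, stdlib-only toy version of that loop (the rate, handle time, and pool size are made up for illustration; this is not the SimPy model itself):

```python
import random

random.seed(42)

RATE = 5.0          # arrivals per second (made-up, scaled down)
HANDLE_TIME = 2.0   # mean handle time in seconds
POSITIONS = 8       # finite resource pool
DURATION = 10_000.0 # simulated seconds

clock, blocked, total = 0.0, 0, 0
busy_until = []  # completion times of in-service requests

while clock < DURATION:
    clock += random.expovariate(RATE)                  # Poisson process: exponential gaps
    busy_until = [t for t in busy_until if t > clock]  # release finished resources
    total += 1
    if len(busy_until) >= POSITIONS:
        blocked += 1                                   # all positions busy: block
    else:
        # Normal-distributed handle time around the mean, clamped at zero
        busy_until.append(clock + max(0.0, random.gauss(HANDLE_TIME, 0.5)))

print(f'Blocking probability: {blocked / total:.3f}')
```

With an offered load of 5 × 2 = 10 erlangs against 8 positions, the blocked fraction lands near the Erlang B prediction of roughly 0.34.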



Code Snippet


  from random import Random
  from simpy import Environment

  # Distributor, Regulator, and RequestGenerator are the model classes described above
  def scenario_one():
      rand: Random = Random()
      rand.seed(RANDOM_SEED)
      env: Environment = Environment()
      distributor: Distributor = Distributor(rand, env, POSITIONS, 0, HANDLE_TIME_MEAN, 0, QUEUE_COST, HANDLE_COST)
      regulator: Regulator = Regulator(env, distributor)
      requestGenerator: RequestGenerator = RequestGenerator(rand, env, regulator, 0, 0, STEADY_RATE)
      env.run(until=TOTAL_DURATION)
      print('*** Scenario 1: ErlangB Sanity Check. Partial simulation: no workers, no surge, no deflection ***')
      print(f'Total Requests: {requestGenerator.total_requests}')
      print(f'Total Queued Requests: {distributor.total_queued}')
      print(f'Total Serviced Requests: {distributor.total_serviced}')
      print(f'Total Deflected Requests: {regulator.total_deflected}')
      print(f'Total Blocked Requests: {regulator.total_blocked}')
      print(f'Max Queue Depth: {distributor.max_positions_depth}')
      print(f'Max Resources Consumed: {distributor.max_resources}')
      print(f'Probability of Blockage: {regulator.total_blocked / requestGenerator.total_requests}')
      print()

Results


Below are the results with a 20 requests/sec arrival rate and a 15-minute average handle time.  The Erlang B blocking calculation for those parameters is 0.666694.
***  Scenario 1: ErlangB Sanity Check.  Partial simulation: no workers, no surge, no deflection ***
Total Requests: 864377
Total Queued Requests: 288000
Total Serviced Requests: 0
Total Deflected Requests: 0
Total Blocked Requests: 576376
Max Queue Depth: 6000
Max Resources Consumed: 6000
Probability of Blockage: 0.666810893857657
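The theoretical figure can be reproduced with the standard Erlang B recursion: the offered load is 20 requests/sec × 900 sec average handle time = 18,000 erlangs, against the 6,000 positions.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability via the numerically stable Erlang B recursion."""
    b = 1.0  # B(A, 0) = 1
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

print(erlang_b(20 * 900, 6000))  # ~0.6667, in line with both figures above
```

The recursion avoids the overflow-prone factorial form of the Erlang B formula, which is unusable directly at 6,000 servers.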

