Summary
In this post, I'll demonstrate use of the Google Natural Language API's sentiment analysis. The scenario: a caller's interaction with a contact center agent is recorded, the recording is run through Google Speech-to-Text, and the resulting transcript is then analyzed for sentiment.
Implementation
The diagram below depicts the overall implementation. Calls enter an ACD/recording platform that has API access in and out. Recordings are then sent to the Google APIs for processing.
Below is a detailed flow of the interactions between the ACD platform and the Google APIs.
Code Snippet
App Server
Below is a simple Node.js Express server representing the App Server component. The properties, logger, admin, sentiment, and storage objects come from helper modules in the linked source.
app.post(properties.path, jsonParser, (request, response) => {
    //send a response back to ACD immediately to release script-side resources
    response.status(200).end();

    const contactId = request.body.contactId;
    const fileName = request.body.fileName;
    let audio;
    logger.info(`contactId: ${contactId} webserver - fileName:${fileName}`);

    admin.get(contactId, fileName)  //Fetch the audio file (Base64-encoded) from ACD
    .then((json) => {
        audio = json.file;
        return sentiment.process(contactId, audio);  //Get transcript and sentiment of audio bytes
    })
    .then((json) => {  //Upload the audio, transcript, and sentiment to Google Cloud Storage
        return storage.upload(contactId, Buffer.from(audio, 'base64'), json.transcript, JSON.stringify(json.sentiment));
    })
    .then(() => {
        admin.remove(contactId, fileName);  //Delete the audio file from ACD
    })
    .catch((err) => {
        logger.error(`contactId:${contactId} webserver - ${err}`);
    });
});

app.listen(properties.listenPort);
logger.info(`webserver - started on port ${properties.listenPort}`);
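The sentiment.process() call above is where the Google API work happens, and storage.upload() writes the results to Google Cloud Storage. The actual modules are in the linked source; below is a minimal sketch of what the sentiment step could look like, assuming the official @google-cloud/speech and @google-cloud/language Node.js client libraries, an 8 kHz LINEAR16 call recording, and a {transcript, sentiment} return shape to match the caller above. The encoding, sample rate, and language code are assumptions for illustration.

const speech = require('@google-cloud/speech');
const language = require('@google-cloud/language');

const speechClient = new speech.SpeechClient();
const languageClient = new language.LanguageServiceClient();

//contactId is accepted to match the call in the App Server snippet
async function process(contactId, audio) {
    //Transcribe the Base64-encoded recording with Speech-to-Text
    const [speechResponse] = await speechClient.recognize({
        config: {encoding: 'LINEAR16', sampleRateHertz: 8000, languageCode: 'en-US'},  //assumed audio format
        audio: {content: audio}
    });
    const transcript = speechResponse.results
        .map((result) => result.alternatives[0].transcript)
        .join('\n');

    //Score the transcript with the Natural Language sentiment API.
    //documentSentiment.score ranges from -1.0 (negative) to 1.0 (positive); magnitude reflects overall strength.
    const [nlResponse] = await languageClient.analyzeSentiment({
        document: {content: transcript, type: 'PLAIN_TEXT'}
    });

    return {transcript: transcript, sentiment: nlResponse.documentSentiment};
}

exports.process = process;

The storage.upload() step could follow the same pattern with the @google-cloud/storage client; the bucket and object names below are hypothetical.

const {Storage} = require('@google-cloud/storage');

const storageClient = new Storage();

async function upload(contactId, audioBuffer, transcript, sentimentJson) {
    const bucket = storageClient.bucket('sentiment-results');  //hypothetical bucket name
    await bucket.file(`${contactId}/audio.wav`).save(audioBuffer);
    await bucket.file(`${contactId}/transcript.txt`).save(transcript);
    await bucket.file(`${contactId}/sentiment.json`).save(sentimentJson);
}

exports.upload = upload;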
Source: https://github.com/joeywhelan/sentiment
Copyright ©1993-2024 Joey E Whelan, All rights reserved.