Friday, December 30, 2022

Redis Vector Similarity Search

Summary

I'll show some examples of how to utilize vector similarity search (VSS) in Redis.  I'll generate embeddings from pictures, store the resulting vectors in Redis, and then perform searches against those stored vectors.


Architecture




Data Set

I used the Fashion Dataset on Kaggle for the photos to vectorize and store in Redis.

Application

The sample app is written in Python and organized as a single class.  That class performs a one-time vectorization of the dataset photos and writes the resulting vectors to a file.  That file is subsequently used to load the vectors and their associated photo IDs into Redis.  Vector searches of other photos can then be performed against Redis.
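Before diving into the individual methods, here is a rough sketch of how such a class might be laid out.  The class name, constants, and enum values below are assumptions inferred from the snippets that follow, not the actual source (see the Source link at the end for that).

from enum import Enum

import redis

# Assumed constants, inferred from the snippets below (names are hypothetical)
IMAGE_DIR = 'images'          # directory of dataset photos
VECTOR_FILE = 'vectors.json'  # one JSON object (ID + vector) per line
NUM_IMAGES = 1000             # number of photos to vectorize
TOPK = 5                      # results to return per search

class OBJECT_TYPE(Enum):
    HASH = 'hash'
    JSON = 'json'

class INDEX_TYPE(Enum):
    FLAT = 'FLAT'
    HNSW = 'HNSW'

class METRIC_TYPE(Enum):
    L2 = 'L2'
    IP = 'IP'
    COSINE = 'COSINE'

class SEARCH_TYPE(Enum):
    VECTOR = 'vector'
    HYBRID = 'hybrid'

class ImageSearch:
    def __init__(self, object_type: OBJECT_TYPE, index_type: INDEX_TYPE, metric_type: METRIC_TYPE):
        self.connection = redis.Redis(host='localhost', port=6379)
        self.object_type = object_type
        self.index_type = index_type
        self.metric_type = metric_type
        self.image_dict: dict = {}

    # _get_images(), _load_db(), and search() follow in the sections below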

Embedding

The code below reads a directory containing the dataset photos, vectorizes each photo, and then writes those vectors to file.  As mentioned, this is a one-time operation.
        # One-time vectorization: skip if the vector file already exists
        if not os.path.exists(VECTOR_FILE) and len(os.listdir(IMAGE_DIR)) > 0:
            img2vec = Img2Vec(cuda=False)
            images: list = os.listdir(IMAGE_DIR)
            images = images[0:NUM_IMAGES]
            with open(VECTOR_FILE, 'w') as outfile:
                for image in images:
                    # Normalize each photo to 224x224 RGB before embedding
                    img: Image = Image.open(f'{IMAGE_DIR}/{image}').convert('RGB').resize((224, 224))
                    vector: list = img2vec.get_vec(img)
                    # The file name (minus extension) serves as the photo ID
                    id: str = os.path.splitext(image)[0]
                    # Write one JSON object per line: photo ID + its embedding
                    json.dump({'image_id': id, 'image_vector': vector.tolist()}, outfile)
                    outfile.write('\n')

Redis Data Loading

Redis supports VSS for both the JSON and Hash Set data types.  I parameterized a function to allow the creation of Redis VSS indices for either data type.  One important difference when working with JSON versus Hashes in Redis VSS:  vectors can be stored as-is (an array of floats) in JSON documents, but must be reduced to a BLOB for Hash Sets.  Note that the index dimension of 512 used below matches the size of the embeddings Img2Vec produces with its default ResNet-18 model.
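As a minimal illustration of the JSON-versus-Hash difference (numpy assumed), the same vector in its two storage forms:

import numpy as np

vec = [0.12, -0.56, 0.98]                            # JSON form: plain array of floats
blob = np.array(vec, dtype=np.float32).tobytes()     # Hash form: packed FLOAT32 BLOB
restored = np.frombuffer(blob, dtype=np.float32)     # back to floats, for comparison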


    def _get_images(self) -> None:
        """Read the vector file and build an in-memory dict of photo ID -> vector."""
        with open(VECTOR_FILE, 'r') as infile:
            for line in infile:
                obj: dict = json.loads(line)
                id: str = str(obj['image_id'])
                match self.object_type:
                    case OBJECT_TYPE.HASH:
                        # Hash Sets require the vector as a packed FLOAT32 BLOB
                        self.image_dict[id] = np.array(obj['image_vector'], dtype=np.float32).tobytes()
                    case OBJECT_TYPE.JSON:
                        # JSON documents can store the vector as a plain array of floats
                        self.image_dict[id] = obj['image_vector']
                        
    def _load_db(self) -> None:

        self.connection.flushdb()
        self._get_images()

        match self.object_type:
            case OBJECT_TYPE.HASH:
                schema = [ VectorField('image_vector', 
                                self.index_type.value, 
                                {   "TYPE": 'FLOAT32', 
                                    "DIM": 512, 
                                    "DISTANCE_METRIC": self.metric_type.value
                                }
                            ),
                            TagField('image_id')
                ]
                idx_def = IndexDefinition(index_type=IndexType.HASH, prefix=['key:'])
                self.connection.ft('idx').create_index(schema, definition=idx_def)

                pipe: Pipeline = self.connection.pipeline()
                for id, vec in self.image_dict.items():
                    pipe.hset(f'key:{id}', mapping={'image_id': id, 'image_vector': vec})
                pipe.execute()
            case OBJECT_TYPE.JSON:
                schema = [ VectorField('$.image_vector', 
                                self.index_type.value, 
                                {   "TYPE": 'FLOAT32', 
                                    "DIM": 512, 
                                    "DISTANCE_METRIC": self.metric_type.value
                                },  as_name='image_vector'
                            ),
                            TagField('$.image_id', as_name='image_id')
                ]
                idx_def: IndexDefinition = IndexDefinition(index_type=IndexType.JSON, prefix=['key:'])
                self.connection.ft('idx').create_index(schema, definition=idx_def)
                pipe: Pipeline = self.connection.pipeline()
                for id, vec in self.image_dict.items():
                    pipe.json().set(f'key:{id}', '$', {'image_id': id, 'image_vector': vec})
                pipe.execute()
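As a quick sanity check after _load_db() completes, FT.INFO can confirm that the documents were indexed.  A minimal sketch, run against the same connection and index name:

        # Sketch: verify the index picked up all loaded documents
        info = self.connection.ft('idx').info()
        print(f"indexed docs: {info['num_docs']}")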

Search

With vectors loaded and indices created in Redis, a vector search looks the same for either JSON or Hash Sets.  The search vector must be reduced to a BLOB.  Searches can be strictly for vector similarity (KNN search) or a combination of VSS and a traditional Redis Search query (hybrid search).  The function below is parameterized to support both.

 
    def search(self, query_vector: bytes, search_type: SEARCH_TYPE, hyb_str=None):
        match search_type:
            case SEARCH_TYPE.VECTOR:
                # Pure KNN: rank all documents by vector similarity
                q_str = f'*=>[KNN {TOPK} @image_vector $vec_param AS vector_score]'
            case SEARCH_TYPE.HYBRID:
                # Hybrid: pre-filter on the image_id tag, then rank by vector similarity
                q_str = f'(@image_id:{{{hyb_str}}})=>[KNN {TOPK} @image_vector $vec_param AS vector_score]'

        q = Query(q_str)\
            .sort_by('vector_score')\
            .paging(0, TOPK)\
            .return_fields('vector_score', 'image_id')\
            .dialect(2)    # vector queries require query dialect 2
        params_dict = {"vec_param": query_vector}

        results = self.connection.ft('idx').search(q, query_params=params_dict)
        return results
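To close the loop, here is a hypothetical usage sketch: the query photo is vectorized exactly like the dataset photos, reduced to a BLOB, and passed to the function above.  The 'app' instance, the photo path, and the tag value are assumptions.

import numpy as np
from img2vec_pytorch import Img2Vec
from PIL import Image

img2vec = Img2Vec(cuda=False)
img = Image.open('query.jpg').convert('RGB').resize((224, 224))
query_vec = np.array(img2vec.get_vec(img), dtype=np.float32).tobytes()

results = app.search(query_vec, SEARCH_TYPE.VECTOR)                        # pure KNN
# results = app.search(query_vec, SEARCH_TYPE.HYBRID, hyb_str='12345')     # KNN + tag filter
for doc in results.docs:
    print(doc.image_id, doc.vector_score)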

Source

https://github.com/Redislabs-Solution-Architects/vss-ops

Copyright ©1993-2024 Joey E Whelan, All rights reserved.

Monday, December 26, 2022

Redis Search with FHIR Data

Summary

This post will demonstrate search functionality in Redis Enterprise (RE) with FHIR data.  I'll generate FHIR patient bundles with the Synthea application.  Then I'll build a three-node RE cluster and a sharded Redis database via Docker scripting and the RE REST API.  Finally, I'll show multiple search scenarios on that healthcare data.

Overall Architecture







Data Generation

The shell script below pulls the Synthea jar file, if necessary, and then creates FHIR patient record bundles for every US state.  One to ten bundles are randomly created for each state.
if [ ! -f synthea-with-dependencies.jar ]
then
    wget -q https://github.com/synthetichealth/synthea/releases/download/master-branch-latest/synthea-with-dependencies.jar
fi

STATES=("Alabama" "Alaska" "Arizona" "Arkansas" "California" "Colorado" "Connecticut" 
"Delaware" "District of Columbia" "Florida" "Georgia" "Hawaii" "Idaho" "Illinois"
"Indiana" "Iowa" "Kansas" "Kentucky" "Louisiana"  "Maine" "Montana" "Nebraska" 
"Nevada" "New Hampshire" "New Jersey" "New Mexico" "New York" "North Carolina"
"North Dakota" "Ohio" "Oklahoma" "Oregon" "Maryland" "Massachusetts" "Michigan" 
"Minnesota" "Mississippi" "Missouri" "Pennsylvania" "Rhode Island" "South Carolina"
"South Dakota" "Tennessee" "Texas" "Utah" "Vermont" "Virginia" "Washington" 
"West Virginia" "Wisconsin" "Wyoming")

MAX_POP=10

for state in "${STATES[@]}"; do   
  pop=$(($RANDOM%$MAX_POP + 1))
  java -jar synthea-with-dependencies.jar -c ./syntheaconfig.txt -p $pop "$state"
done

RE Build

This shell script uses a docker-compose file to create a 3-node Redis Enterprise cluster.  It pulls down the latest GA copies of the Search and JSON modules, runs the compose file, assembles the cluster, loads the two modules via the REST API, and finally creates a 2-shard, replicated database on the cluster, also via the REST API.
SEARCH_LATEST=redisearch.Linux-ubuntu18.04-x86_64.2.6.3.zip
JSON_LATEST=rejson.Linux-ubuntu18.04-x86_64.2.4.2.zip

if [ ! -f $SEARCH_LATEST ]
then
    wget -q https://redismodules.s3.amazonaws.com/redisearch/$SEARCH_LATEST
fi 

if [ ! -f $JSON_LATEST ]
then
    wget https://redismodules.s3.amazonaws.com/rejson/$JSON_LATEST
fi 

echo "Launch Redis Enterprise docker containers"
docker compose up -d
echo "*** Wait for Redis Enterprise to come up ***"
curl -s -o /dev/null --retry 5 --retry-all-errors --retry-delay 3 -f -k -u "redis@redis.com:redis" https://localhost:19443/v1/bootstrap
echo "*** Build Cluster ***"
docker exec -it re1 /opt/redislabs/bin/rladmin cluster create name cluster.local username redis@redis.com password redis
docker exec -it re2 /opt/redislabs/bin/rladmin cluster join nodes 192.168.20.2 username redis@redis.com password redis
docker exec -it re3 /opt/redislabs/bin/rladmin cluster join nodes 192.168.20.2 username redis@redis.com password redis
echo "*** Load Modules ***"
curl -s -o /dev/null -k -u "redis@redis.com:redis" https://localhost:19443/v1/modules -F module=@$SEARCH_LATEST
curl -s -o /dev/null -k -u "redis@redis.com:redis" https://localhost:19443/v1/modules -F module=@$JSON_LATEST
echo "*** Build FHIR DB ***"
curl -s -o /dev/null -k -u "redis@redis.com:redis" https://localhost:19443/v1/bdbs -H "Content-Type:application/json" -d @fhirdb.json
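Once the script completes, a quick check from Python confirms the cluster sees the new database and that it is reachable.  This is only a sketch: the credentials come from the script above, and the database port (12000 here) is an assumption, so use whatever endpoint port fhirdb.json defines.

import redis
import requests

# List the databases known to the cluster via the RE REST API
resp = requests.get('https://localhost:19443/v1/bdbs',
                    auth=('redis@redis.com', 'redis'), verify=False)
print([bdb['name'] for bdb in resp.json()])

# Connect to the database endpoint (port assumed, see fhirdb.json)
connection = redis.Redis(host='localhost', port=12000)
print(connection.ping())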

RE Architecture

The diagram below depicts the resulting RE architecture.  Two shards (labeled M1 and M2) and their replicas (R1 and R2) are distributed across the cluster.




Below are screenshots of the admin interfaces for the RE cluster and database that were created.











Search Examples

Below are snippets of some of the search/aggregation examples, implemented in both JavaScript and Python.
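All of them assume the FHIR resources from the Synthea bundles have been loaded into Redis as JSON documents keyed by resource type (Location:, MedicationRequest:, Claim:, and so on).  A rough sketch of that load step, with the file path, port, and key naming as assumptions:

import glob
import json

import redis

connection = redis.Redis(host='localhost', port=12000)  # port assumed, see fhirdb.json

# Synthea writes FHIR bundles to ./output/fhir by default
for path in glob.glob('./output/fhir/*.json'):
    with open(path) as f:
        bundle = json.load(f)
    for entry in bundle.get('entry', []):
        resource = entry['resource']
        # Key each resource by type and ID, e.g. 'Location:<uuid>', to match the index prefixes
        connection.json().set(f"{resource['resourceType']}:{resource['id']}", '$', resource)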

Medical Facility Geographic Search


Below are the Redis index and search commands to find the closest medical facility (in the database) to a geographic coordinate.  In this case, the coordinates are for Woodland Park, CO.

Index - JavaScript

        await this.client.ft.create('location_idx', {
            '$.status': {
                type: SchemaFieldTypes.TAG,
                AS: 'status'  
            },
            '$.name': {
                type: SchemaFieldTypes.TEXT,
                AS: 'name'
            },
            '$.address.city': {
                type: SchemaFieldTypes.TAG,
                AS: 'city'
            },
            '$.address.state': {
                type: SchemaFieldTypes.TAG,
                AS: 'state'
            },
            '$.position.longitude': {
                type: SchemaFieldTypes.NUMERIC,
                AS: 'longitude'
            },
            '$.position.latitude': {
                type: SchemaFieldTypes.NUMERIC,
                AS: 'latitude'
            }
        }, { ON: 'JSON', PREFIX: 'Location:'});

Index - Python

        idx_def = IndexDefinition(index_type=IndexType.JSON, prefix=['Location:'])
        schema = [  TagField('$.status', as_name='status'),
            TextField('$.name', as_name='name'),
            TagField('$.address.city', as_name='city'),
            TagField('$.address.state', as_name='state'),
            NumericField('$.position.longitude', as_name='longitude'),
            NumericField('$.position.latitude', as_name='latitude')
        ]
        connection.ft('location_idx').create_index(schema, definition=idx_def)

Search - JavaScript

        result = await this.client.ft.aggregate('location_idx','@status:{active}', {
            LOAD: ['@name', '@city', '@state', '@longitude', '@latitude'],
            STEPS: [
                    {   type: AggregateSteps.APPLY,
                        expression: 'geodistance(@longitude, @latitude, -105.0569, 38.9939)', 
                        AS: 'meters' 
                    },
                    {   type: AggregateSteps.APPLY ,
                        expression: 'ceil(@meters*0.000621371)', 
                        AS: 'miles' 
                    },
                    {
                        type: AggregateSteps.SORTBY,
                        BY: {
                            BY: '@miles', 
                            DIRECTION: 'ASC' 
                        }
                    },
                    {
                        type: AggregateSteps.LIMIT,
                        from: 0, 
                        size: 1
                    }
            ]
        });

Search - Python

        request = AggregateRequest('@status:{active}')\
        .load('@name', '@city', '@state', '@longitude', '@latitude')\
        .apply(meters='geodistance(@longitude, @latitude, -105.0569, 38.9939)')\
        .apply(miles='ceil(@meters*0.000621371)')\
        .sort_by(Asc('@miles'))\
        .limit(0,1)
        result = connection.ft('location_idx').aggregate(request)

Results

[[b'name', b'ARETI COMPREHENSIVE PRIMARY CARE', b'city', b'COLORADO SPRINGS', b'state', b'CO', 
b'longitude', b'-104.768591624', b'latitude', b'38.9006726282', b'meters', b'27009.43', b'miles', b'17']]

Medication Prescriptions


Below are the index and search commands to compile a list of the Top 3 physicians prescribing opioids by script count.

Index - JavaScript

        await this.client.ft.create('medicationRequest_idx', {
            '$.status': {
                type: SchemaFieldTypes.TAG,
                AS: 'status'
            },
            '$.medicationCodeableConcept.text': {
                type: SchemaFieldTypes.TEXT,
                AS: 'drug'
            },
            '$.requester.display': {
                type: SchemaFieldTypes.TEXT,
                AS: 'prescriber',
                SORTABLE: true
            },
            '$.reasonReference[*].display': {
                type: SchemaFieldTypes.TEXT,
                AS: 'reason'
            }
        }, {ON: 'JSON', PREFIX: 'MedicationRequest:'});

Index - Python

        idx_def = IndexDefinition(index_type=IndexType.JSON, prefix=['MedicationRequest:'])
        schema = [  TagField('$.status', as_name='status'),
            TextField('$.medicationCodeableConcept.text', as_name='drug'),
            TextField('$.requester.display', as_name='prescriber', sortable=True),
            TextField('$.reasonReference[*].display', as_name='reason')
        ]
        connection.ft('medicationRequest_idx').create_index(schema, definition=idx_def)

Search - JavaScript

        const opioids = 'Hydrocodone|Oxycodone|Oxymorphone|Morphine|Codeine|Fentanyl|Hydromorphone|Tapentadol|Methadone';
        result = await this.client.ft.aggregate('medicationRequest_idx', `@drug:${opioids}`, {
            STEPS: [
                {   type: AggregateSteps.GROUPBY,
                    properties: ['@prescriber'],
                    REDUCE: [
                        {   type: AggregateGroupByReducers.COUNT,
                            property: '@prescriber',
                            AS: 'opioids_prescribed'
                        }
                    ]   
                },
                {
                    type: AggregateSteps.SORTBY,
                    BY: { 
                        BY: '@opioids_prescribed', 
                        DIRECTION: 'DESC' 
                    }
                },
                {
                    type: AggregateSteps.LIMIT,
                    from: 0, 
                    size: 3
                }
            ]
        });

Search - Python

        opioids = 'Hydrocodone|Oxycodone|Oxymorphone|Morphine|Codeine|Fentanyl|Hydromorphone|Tapentadol|Methadone'
        request = AggregateRequest(f'@drug:{opioids}')\
        .group_by('@prescriber', reducers.count().alias('opioids_prescribed'))\
        .sort_by(Desc('@opioids_prescribed'))\
        .limit(0,3)
        result = connection.ft('medicationRequest_idx').aggregate(request)

Results

[[b'prescriber', b'Dr. Aja848 McKenzie376', b'opioids_prescribed', b'53'], 
[b'prescriber', b'Dr. Jaquelyn689 Bernier607', b'opioids_prescribed', b'52'], 
[b'prescriber', b'Dr. Aurora248 Kessler503', b'opioids_prescribed', b'49']]

Insurer Claim Values


Below are the index and search commands to find the Top 3 insurers by total claim dollar value.

Index - JavaScript


        await this.client.ft.create('claims_idx', {
            '$.status': {
                type: SchemaFieldTypes.TAG,
                AS: 'status'
            },
            '$.insurance[*].coverage.display': {
                type: SchemaFieldTypes.TEXT,
                AS: 'insurer',
                SORTABLE: true    
            },
            '$.total.value': {
                type: SchemaFieldTypes.NUMERIC,
                AS: 'value'
            }
        }, {ON: 'JSON', PREFIX: 'Claim:'});

Index - Python


        idx_def = IndexDefinition(index_type=IndexType.JSON, prefix=['Claim:'])
        schema = [  TagField('$.status', as_name='status'),
            TextField('$.insurance[*].coverage.display', as_name='insurer', sortable=True),
            NumericField('$.total.value', as_name='value')
        ]
        connection.ft('claims_idx').create_index(schema, definition=idx_def)

Search - JavaScript

        result = await this.client.ft.aggregate('claims_idx', '@status:{active}', {
            STEPS: [
                {   type: AggregateSteps.GROUPBY,
                    properties: ['@insurer'],
                    REDUCE: [{   
                        type: AggregateGroupByReducers.SUM,
                        property: '@value',
                        AS: 'total_value'
                }]},
                {
                    type: AggregateSteps.FILTER,
                    expression: '@total_value > 0'
                },
                {   type: AggregateSteps.SORTBY,
                    BY: { 
                    BY: '@total_value', 
                    DIRECTION: 'DESC' 
                }},
                {   type: AggregateSteps.LIMIT,
                    from: 0, 
                    size: 3
                }
            ]
        });

Search - Python

        request = AggregateRequest('@status:{active}')\
        .group_by('@insurer', reducers.sum('@value').alias('total_value'))\
        .filter('@total_value > 0')\
        .sort_by(Desc('@total_value'))\
        .limit(0,3)
        result = connection.ft('claims_idx').aggregate(request)

Results

[[b'insurer', b'Medicare', b'total_value', b'29841923.54'], [b'insurer', b'NO_INSURANCE', b'total_value', b'9749265.48'], 
[b'insurer', b'UnitedHealthcare', b'total_value', b'8859141.59']]

Source


Copyright ©1993-2024 Joey E Whelan, All rights reserved.