Measuring Average Latency with FAISS on Apple M1


FAISS (Facebook AI Similarity Search) is a powerful library for efficient similarity search and clustering of dense vectors. It’s optimized for high performance and scalability, making it an excellent choice for handling large datasets and computing nearest neighbors.

In this short post, we'll walk through measuring the average latency of 1000 search operations against a 500,000-vector index using FAISS on a Mac with an Apple M1 processor.

Setting Up the Environment

To get started, install FAISS. The CPU-only build is the right choice here, since Apple Silicon has no CUDA support, and it installs directly with pip:

pip install faiss-cpu
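
On Apple Silicon, pip should pull a native arm64 wheel. As a quick sanity check, recent builds expose a version string (if yours doesn't, importing faiss without an error is confirmation enough):

python -c "import faiss; print(faiss.__version__)"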

Next, we set up our environment by importing necessary packages and defining constants.

import faiss
import numpy as np
import time

# Define constants
num_vectors = 500000
dim = 20
num_searches = 1000
k = 100 # Number of nearest neighbors to retrieve

Generating Random Vectors

FAISS works with dense vectors. Here, we generate random vectors that simulate our dataset:

# Create random vectors
np.random.seed(42) # For reproducibility
vectors = np.random.random((num_vectors, dim)).astype('float32')
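
At these settings the dataset is modest: 500,000 vectors × 20 dimensions × 4 bytes per float32 comes to about 40 MB, which fits comfortably in memory. You can confirm this from the array itself:

print(f'{vectors.nbytes / 1e6:.1f} MB')  # ~40.0 MB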

Initializing the FAISS Index

We initialize the FAISS index using IndexFlatL2, a flat (brute-force) index that computes the exact Euclidean (L2) distance between the query and every stored vector:

# Initialize FAISS index
index = faiss.IndexFlatL2(dim) # L2 distance (Euclidean)

# Add vectors to the index
index.add(vectors)
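
As a sanity check, the index exposes ntotal, the number of vectors it currently holds:

print(index.ntotal)  # 500000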

Measuring Latency

To measure the latency of search operations, we generate a query vector and perform multiple searches, recording the time taken for each one:

# Generate query vector
query_vector = np.random.random((1, dim)).astype('float32')

# Measure latency of searches
latencies = []
for _ in range(num_searches):
    start_time = time.time()
    D, I = index.search(query_vector, k)  # search for k nearest neighbors
    latencies.append(time.time() - start_time)
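
Note that index.search also accepts a whole batch of queries as an (n, dim) array, which amortizes per-call overhead across queries. Here is a minimal sketch of the batched variant (the 1000 random queries are an illustrative assumption, separate from the measurement above):

# Batched alternative: all queries in a single call
queries = np.random.random((num_searches, dim)).astype('float32')
start_time = time.time()
D, I = index.search(queries, k)  # D and I have shape (num_searches, k)
batch_ms = (time.time() - start_time) / num_searches * 1000
print(f'Amortized latency per query (batched): {batch_ms:.4f} ms')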

Calculating Average Latency

Finally, we calculate the average latency in milliseconds and print the result:

# Calculate average latency in milliseconds
average_latency = np.mean(latencies) * 1000 # Convert to milliseconds
print(f'Average latency for {num_searches} searches: {average_latency:.4f} ms')
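
Since averages can mask tail behavior, percentile latencies are often more informative in a benchmark like this. A short sketch using NumPy:

# Median and tail latency in milliseconds
p50, p99 = np.percentile(latencies, [50, 99]) * 1000
print(f'p50: {p50:.4f} ms, p99: {p99:.4f} ms')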

Results

Running the above code on a Mac with an M1 processor yields an average latency of around 7 milliseconds per search, averaged over the 1000 searches. For an exact brute-force scan over 500,000 vectors, this demonstrates both the efficiency of FAISS and the capability of the Apple M1 for this kind of workload.


Author: robot learner