
FIndex

Purpose: FIndex was built to learn about vector databases by creating a practical face recognition and similarity search system. The project demonstrates how to generate, store, and query facial embeddings through a vector database and RESTful APIs.

FIndex is a face recognition and search system powered by facial embeddings, the Qdrant vector database, and a lightweight REST API written in .NET 8. It lets you embed faces from images, store their vectors, and efficiently search for visually similar faces.

Components

Embedding Service (Python)

  • A FastAPI-based microservice that takes an image and returns a 512-dimensional facial embedding.
  • Uses InsightFace under the hood; a sample client request is sketched below.
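
For reference, calling the service from .NET might look like the following sketch. The /embed route, the "file" form field, and the JSON response shape are assumptions, not the service's documented contract; check the FastAPI code for the real endpoints.

```csharp
// Hypothetical client call to the Python embedding service.
// Assumptions: POST /embed accepts a multipart "file" field and returns
// JSON like { "embedding": [ ...512 floats... ] }.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:8001") };

using var form = new MultipartFormDataContent();
var image = new ByteArrayContent(await File.ReadAllBytesAsync("face.jpg"));
image.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
form.Add(image, "file", "face.jpg");

var response = await http.PostAsync("/embed", form);
response.EnsureSuccessStatusCode();

var payload = await response.Content.ReadFromJsonAsync<EmbeddingResponse>();
Console.WriteLine($"Received a {payload!.Embedding.Length}-dimensional embedding");

record EmbeddingResponse(float[] Embedding);
```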

Qdrant (Vector DB)

  • High-performance vector search engine for similarity search.
  • Stores all embedded facial vectors; a sample search request is sketched below.
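
As an illustration, a raw similarity search against Qdrant's REST API could look like the sketch below. The collection name "faces" is an assumption (the FIndex API owns the real collection), while the /collections/{name}/points/search endpoint and request shape are standard Qdrant.

```csharp
// Hypothetical similarity search against Qdrant's REST API.
// Assumption: vectors live in a collection named "faces".
using System;
using System.Net.Http;
using System.Net.Http.Json;

var qdrant = new HttpClient { BaseAddress = new Uri("http://localhost:6333") };

float[] queryVector = new float[512]; // in practice: an embedding from the embedding service

var search = new
{
    vector = queryVector,
    limit = 5,          // top-5 nearest neighbours
    with_payload = true // include stored metadata in the results
};

var response = await qdrant.PostAsJsonAsync("/collections/faces/points/search", search);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync()); // scored matches as JSON
```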

FIndex API (C# / ASP.NET Core)

  • Accepts image uploads via REST.
  • Sends them to the embedding service and receives the embedding vector back.
  • Searches Qdrant for similar vectors and returns matches with confidence scores; a sample client call is sketched below.
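
A client call might look like the following sketch; the /api/faces/search route and the response shape are assumptions, so check the API's controllers for the actual endpoints.

```csharp
// Hypothetical search request against the FIndex API.
// Assumption: POST /api/faces/search accepts a multipart "file" upload and
// returns matched faces with confidence scores as JSON.
using System;
using System.IO;
using System.Net.Http;

var api = new HttpClient { BaseAddress = new Uri("http://localhost:5181") };

using var form = new MultipartFormDataContent();
form.Add(new ByteArrayContent(await File.ReadAllBytesAsync("query.jpg")), "file", "query.jpg");

var response = await api.PostAsync("/api/faces/search", form);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync()); // matches + confidence scores
```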

FIndex Console (C#)

  • Console app for uploading faces and creating the database.

FIndex Embedder (C# / ASP.NET Core / ML.NET)

  • ASP.NET Core Web API service that takes an image and returns a 512-dimensional facial embedding.
  • Powered by ONNX Runtime and OpenCvSharp, using a pre-trained ResNet50 (WebFace600K) model. Download link: Google Drive. An inference sketch follows this list.
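
The snippet below is a rough sketch of running the model with ONNX Runtime. The 1x3x112x112 input shape and the normalization are typical for InsightFace ArcFace models but are assumptions here; the embedder's actual preprocessing pipeline (decoding, detection, and alignment via OpenCvSharp) should be treated as authoritative.

```csharp
// Rough sketch of inference against webface_r50.onnx with ONNX Runtime.
// Assumptions: the model expects a 1x3x112x112 float tensor (an aligned,
// normalized 112x112 face crop) and outputs a 512-dimensional embedding.
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

using var session = new InferenceSession("Data/webface_r50.onnx");
string inputName = session.InputMetadata.Keys.First(); // avoid hardcoding the input name

// Preprocessing (face detection, alignment, resize, normalization) is
// omitted; 'pixels' stands in for a CHW float buffer of the face crop.
float[] pixels = new float[3 * 112 * 112];
var tensor = new DenseTensor<float>(pixels, new[] { 1, 3, 112, 112 });

using var results = session.Run(new[] { NamedOnnxValue.CreateFromTensor(inputName, tensor) });
float[] embedding = results.First().AsEnumerable<float>().ToArray();
Console.WriteLine($"Embedding length: {embedding.Length}"); // expected: 512
```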

FIndex VectorDataSample (C#)

  • Standalone console sample that demonstrates embedding a small face dataset and performing searches backed by PostgreSQL with the pgvector extension.
  • Expects the same webface_r50.onnx model as the embedder; place it under FIndex.VectorDataSample/Data/ after downloading it from the link above. A sample pgvector query is sketched below.
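
As a rough illustration of the pgvector side, a nearest-neighbour query with Npgsql might look like this. The "faces" table, the "name" and "embedding" columns, and the connection string are assumptions; the <=> cosine-distance operator comes from pgvector itself.

```csharp
// Hypothetical pgvector nearest-neighbour query via Npgsql.
// Assumptions: a table "faces(name text, embedding vector(512))" and a
// local PostgreSQL instance with the pgvector extension installed.
using System;
using System.Globalization;
using System.Linq;
using Npgsql;

var connString = "Host=localhost;Username=postgres;Password=postgres;Database=findex";
await using var conn = new NpgsqlConnection(connString);
await conn.OpenAsync();

float[] query = new float[512]; // in practice: an embedding from the embedder
string vectorLiteral = "[" + string.Join(",",
    query.Select(v => v.ToString(CultureInfo.InvariantCulture))) + "]";

// <=> is pgvector's cosine-distance operator; smaller distance = more similar.
await using var cmd = new NpgsqlCommand(
    "SELECT name, embedding <=> @q::vector AS distance FROM faces ORDER BY distance LIMIT 5;",
    conn);
cmd.Parameters.AddWithValue("q", vectorLiteral);

await using var reader = await cmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
    Console.WriteLine($"{reader.GetString(0)}  distance = {reader.GetDouble(1)}");
```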

Running with Docker

  1. Ensure Docker Desktop (or another Docker host) is running locally.
  2. (Optional) Download the InsightFace models you want to serve and place them under ~/.insightface/models; they are mounted into the embedding container so they persist across rebuilds.
  3. Build and start the stack:
docker compose up --build

The compose file spins up the Python embedding service, the C# embedder, Qdrant, and the public API. All services are wired together on an internal Docker network, so no additional setup is required.

To stop the services, press Ctrl+C, or run docker compose down in another terminal to remove the containers.

Choosing an embedder implementation

  • The API is wired to the Python embedding service by default (embedding-service in docker-compose.yml, exposed on port 8001).
  • The C# embedder (embedder-service, exposed on port 5443) is available for experimentation. You can point your client applications to it directly, or adjust the API to call it instead of the Python endpoint if you prefer an all-.NET stack; a small configuration sketch follows this list.
  • If you only need one implementation, comment out or remove the other service from docker-compose.yml before running docker compose up.
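
For example, a client could choose the embedder at runtime through an environment variable. The EMBEDDER_URL name below is purely illustrative, not a setting the project defines.

```csharp
// Illustrative only: selecting an embedder implementation at runtime.
// EMBEDDER_URL is a made-up variable name, not part of the project's config.
using System;
using System.Net.Http;

var embedderUrl = Environment.GetEnvironmentVariable("EMBEDDER_URL")
                  ?? "http://localhost:8001"; // Python embedding service (default)
// Use "http://localhost:5443" to target the C# embedder instead.

var embedder = new HttpClient { BaseAddress = new Uri(embedderUrl) };
Console.WriteLine($"Embedding requests will go to {embedder.BaseAddress}");
```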

Ports and URLs

This project consists of four services: the Python-based embedding service, the C# embedder, the Qdrant vector database, and the .NET API. Below are the details for accessing each service:


| Service | Host URL |
| --- | --- |
| Embedding Service | http://localhost:8001 |
| Qdrant | http://localhost:6333 |
| FIndex API | http://localhost:5181 |
| FIndex Embedder (C#) | http://localhost:5443 |

License

This project's code is licensed under the MIT License. You are free to use, modify, and distribute the code with proper attribution.

Some images used in this project are for personal use and local experimentation only.

By using this repository, you agree not to use any of the stored or uploaded images (I'm not that pretty).
