Run TinyLiveness as a simple API.
The public website is API-only: the model runs in your Python backend, and the site sends a single cropped frame to that backend's prediction endpoint.
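As a sketch of what that backend endpoint can look like (FastAPI, Pillow, and numpy here are assumptions, not part of TinyLiveness; the actual demo route may be wired differently, and the path simply mirrors the curl example further down):

# Minimal backend endpoint sketch, assuming FastAPI, Pillow, and numpy.
# FastAPI needs python-multipart installed to parse multipart uploads.
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

from tinyliveness import create_default_onnx_detector

app = FastAPI()
detector = create_default_onnx_detector()  # bundled ONNX model + policy

@app.post("/api/liveness/predict/")
async def predict(image: UploadFile = File(...)):
    # Decode the uploaded frame; production code should align the face
    # before resizing to the model's 224x224 RGB input.
    frame = Image.open(io.BytesIO(await image.read())).convert("RGB")
    crop = np.asarray(frame.resize((224, 224)))
    result = detector.predict_image(crop)
    return {"live_probability": float(result.live_probability),
            "decision": str(result.decision)}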
Install with pip
Install directly from the TinyLiveness GitHub release tag. The package bundles the FP32 ONNX model (the 1% APCER operating point) and its JSON decision policy.
pip install "tinyliveness[onnx] @ git+https://github.com/yuvrajraina/TinyLiveness.git@v0.1.0"
Use the bundled model
The helper below loads the packaged ONNX model and policy, so applications do not need to hardcode checkpoint paths after a pip install.
from tinyliveness import create_default_onnx_detector

# Load the packaged ONNX model and JSON decision policy.
detector = create_default_onnx_detector()
# aligned_face_rgb_224 is a 224x224 aligned RGB face crop (see "Prepare the input").
result = detector.predict_image(aligned_face_rgb_224)
print(result.live_probability, result.decision)
Prepare the input
TinyLiveness expects an aligned RGB face crop. In production, your app should detect the face, align and crop it, resize it to 224x224, and then send it to the API. The demo route will center-crop and resize uploaded images, but a dedicated face detection and alignment step yields more reliable predictions.
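A minimal sketch of that preprocessing, using OpenCV's bundled Haar cascade as a stand-in face detector (the helper name and cascade choice are illustrative; a production pipeline would use a proper detector with landmark-based alignment):

# Preprocessing sketch, assuming OpenCV (opencv-python) is installed.
import cv2

def prepare_face_crop(image_path):
    bgr = cv2.imread(image_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Haar cascade shipped with opencv-python; stands in for a real detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    # Keep the largest detection, crop, resize, and convert BGR -> RGB.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    crop = cv2.resize(bgr[y:y + h, x:x + w], (224, 224))
    return cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)

aligned_face_rgb_224 = prepare_face_crop("face.jpg")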
Call from curl
Send one image as multipart field `image`.
curl -X POST http://127.0.0.1:8000/api/liveness/predict/ \
-F image=@face.jpg
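The same call from Python, assuming the requests library is installed:

import requests

# Post one image as the multipart field "image", matching the curl call above.
with open("face.jpg", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8000/api/liveness/predict/",
        files={"image": f},
    )
resp.raise_for_status()
print(resp.json())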
Use it in your own project
TinyLiveness is MIT licensed. You can use the code and the included release artifacts in personal, research, internal, or commercial projects, provided you retain the MIT license notice.
- Use in apps and APIs
- Modify for your camera pipeline
- Redistribute with the license notice
- Validate before making production security claims