Face Recognition and Liveness Detection System Based on iPhone LiDAR Depth Sensing
Graduate Course Project for "Object-Oriented C++ Programming" at XJTU (2025)
- Sun Chao (SwunChao)
- Li Zongzhi
- iPhone LiDAR Depth Capture: Utilizes the iPhone's front-facing TrueDepth camera for RGB-D data acquisition
- Dual-Mode Liveness Detection:
- RGBD Depth-based Detection: Analyzes facial depth features (nose protrusion, depth variance)
- Blink Detection: An approach based on the Eye Aspect Ratio (EAR) algorithm
- LBPH Face Recognition: OpenCV's Local Binary Pattern Histograms for face identification
- Complete Personnel Management: Add/delete persons, collect face samples, train models
┌─────────────────────────────────────────────────────────────┐
│ UnifiedLivenessSystem │
│ (Application Layer - Main Coordinator) │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────────────┐ ┌─────────────────────────┐ │
│ │ RGBDLivenessDetector│ │ BlinkLivenessDetector │ │
│ │ (RGBD Depth Mode) │ │ (Blink Mode) │ │
│ └──────────┬──────────┘ └───────────┬─────────────┘ │
│ │ │ │
│ └───────────┬───────────────┘ │
│ ▼ │
│ ┌─────────────────────┐ │
│ │ BaseDetector │ │
│ │ (Abstract Base) │ │
│ └──────────┬──────────┘ │
│ │ │
│ ┌──────────▼──────────┐ │
│ │ FaceDetector │ │
│ │ (YOLOv8 Face Det) │ │
│ └─────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Data Structure Layer │
│ ┌───────────┐ ┌───────────────┐ ┌─────────────────┐ │
│ │ Person │ │ DepthFeatures │ │ BlinkRecord │ │
│ └───────────┘ └───────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────────┘
- OpenCV 4.x: Image processing, face detection and recognition
- Record3D SDK: iPhone camera data acquisition
- CMake 3.15+: Build system
- C++17: Modern C++ features
# 1. Clone repository
git clone https://github.com/swunchao/FaceRecognitionSystem.git
cd FaceRecognitionSystem
# 2. Setup Record3D SDK (place in parent directory)
# ../record3d/
# 3. Generate VS solution
cmake -B build -G "Visual Studio 17 2022" -A x64 -DOpenCV_DIR="path/to/opencv/build"
# 4. Build
cmake --build build --config Release
# 5. Run
.\build\bin\Release\liveness_detection_unified.exe

==========================================================
Face Recognition Liveness Detection System (Unified)
==========================================================
[1] Person Management (Add/Delete/Collect)
[2] Model Training
[3] RGBD Depth Liveness Detection
[4] Blink Liveness Detection
[q] Exit
==========================================================
- Add Person: Add new members in person management
- Collect Samples: Capture face images using the iPhone
- Train Model: Train the LBPH model with the collected samples (see the sketch after this list)
- Run Detection: Choose RGBD or blink mode for liveness detection
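For illustration, the training step maps onto OpenCV's LBPH recognizer (from the opencv_contrib face module) roughly as in the minimal sketch below; the file paths and label values are placeholders, not the project's actual code.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/face.hpp>   // opencv_contrib "face" module
#include <vector>

int main() {
    // Grayscale face crops and their person IDs (paths/labels are illustrative).
    std::vector<cv::Mat> samples;
    std::vector<int> labels;
    samples.push_back(cv::imread("data/person_1/sample_0.png", cv::IMREAD_GRAYSCALE));
    labels.push_back(1);

    // Train an LBPH model and persist it for later detection runs.
    auto recognizer = cv::face::LBPHFaceRecognizer::create();
    recognizer->train(samples, labels);
    recognizer->write("models/lbph_model.yml");

    // Query the identity of a new face crop; lower confidence means a closer match.
    cv::Mat query = cv::imread("data/query_face.png", cv::IMREAD_GRAYSCALE);
    int predictedLabel = -1;
    double confidence = 0.0;
    recognizer->predict(query, predictedLabel, confidence);
    return 0;
}
```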
- Install and run Record3D app on iPhone
- Connect iPhone to computer via USB cable
- Launch the program; the system will auto-detect the connected iOS device (a sketch of the frame-callback capture pattern follows)
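Frame delivery and processing run on separate threads: the stream callback produces RGB-D frames, while the main loop consumes only the most recent one. The sketch below shows that producer/consumer pattern with a generic, hypothetical frame buffer; it does not reproduce the Record3D SDK's actual callback signature.

```cpp
#include <mutex>
#include <opencv2/core.hpp>

// Hypothetical holder for the latest RGB-D frame delivered by the stream callback.
struct FrameBuffer {
    std::mutex mtx;
    cv::Mat rgb;     // BGR color image
    cv::Mat depth;   // depth map (e.g. CV_32F)
    bool fresh = false;

    // Called from the capture thread (e.g. inside the SDK's new-frame callback).
    void push(const cv::Mat& newRgb, const cv::Mat& newDepth) {
        std::lock_guard<std::mutex> lock(mtx);
        newRgb.copyTo(rgb);
        newDepth.copyTo(depth);
        fresh = true;
    }

    // Called from the main loop; returns false if no new frame has arrived yet.
    bool pop(cv::Mat& outRgb, cv::Mat& outDepth) {
        std::lock_guard<std::mutex> lock(mtx);
        if (!fresh) return false;
        rgb.copyTo(outRgb);
        depth.copyTo(outDepth);
        fresh = false;
        return true;
    }
};
```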
Distinguishes real faces from photos by analyzing facial depth:
- Depth Difference Std Dev (Diff Std): A real face has 3D surface variation, while a photo is flat
- Nose Protrusion: Real noses have significant depth difference from face surface
- Thresholds: Diff Std > 2.0 mm && Nose Protrusion > 8.0 mm (see the sketch below)
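A minimal sketch of that decision rule, assuming the face ROI's depth values are given in millimeters; how the nose region is located here is illustrative and may differ from RGBDLivenessDetector's actual feature extraction.

```cpp
#include <opencv2/core.hpp>

// faceDepthMm: depth values (mm) inside the detected face ROI.
// noseRegion:  a small rectangle around the nose tip, relative to the ROI.
bool isLiveByDepth(const cv::Mat& faceDepthMm, const cv::Rect& noseRegion) {
    // Standard deviation of depth over the whole face: flat photos score near zero.
    cv::Scalar mean, stddev;
    cv::meanStdDev(faceDepthMm, mean, stddev);
    double diffStd = stddev[0];

    // Nose protrusion: how much closer the nose tip is than the average face surface.
    double noseDepth = cv::mean(faceDepthMm(noseRegion))[0];
    double noseProtrusion = mean[0] - noseDepth;

    // Thresholds from the section above: Diff Std > 2.0 mm and Nose Protrusion > 8.0 mm.
    return diffStd > 2.0 && noseProtrusion > 8.0;
}
```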
Based on the Eye Aspect Ratio (EAR) algorithm:
EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)
- EAR is higher when the eyes are open and drops when they close
- Detecting a blink action confirms a live person
- Supports personalized threshold calibration
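The formula translates directly into code, assuming six eye landmarks ordered p1..p6 (the two horizontal corners, then the upper and lower lid points); the landmark source and the blink threshold below are placeholders.

```cpp
#include <opencv2/core.hpp>
#include <array>
#include <cmath>

static double dist(const cv::Point2f& a, const cv::Point2f& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Eye landmarks p1..p6: p1/p4 are the corners, p2/p3 the upper lid, p6/p5 the lower lid.
double eyeAspectRatio(const std::array<cv::Point2f, 6>& p) {
    double vertical1  = dist(p[1], p[5]);   // ||p2 - p6||
    double vertical2  = dist(p[2], p[4]);   // ||p3 - p5||
    double horizontal = dist(p[0], p[3]);   // ||p1 - p4||
    return (vertical1 + vertical2) / (2.0 * horizontal);
}

// A blink is typically registered when EAR drops below a calibrated threshold for a few
// consecutive frames and then rises again (the default value here is illustrative).
bool isEyeClosed(double ear, double threshold = 0.2) {
    return ear < threshold;
}
```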
FaceRecognitionSystem/
├── src/
│ ├── liveness/
│ │ ├── BaseDetector.cpp # Base detector class
│ │ ├── FaceDetector.cpp # YOLOv8 face detection
│ │ ├── RGBDLivenessDetector.cpp # RGBD liveness detection
│ │ ├── BlinkLivenessDetector.cpp # Blink liveness detection
│ │ └── UnifiedLivenessSystem.cpp # Main system
│ └── main.cpp
├── include/
│ └── liveness/
│ ├── BaseDetector.h
│ ├── FaceDetector.h
│ ├── RGBDLivenessDetector.h
│ ├── BlinkLivenessDetector.h
│ ├── UnifiedLivenessSystem.h
│ ├── Person.h
│ ├── DepthFeatures.h
│ └── BlinkRecord.h
├── models/ # Pre-trained models
├── data/ # Face data
├── docs/ # Documentation
├── CMakeLists.txt
└── README.md
This project adopts a 3-layer class hierarchy:
Data Structure Layer:
- Person: Person info (ID, name, sample count)
- DepthFeatures: Depth features (nose protrusion, depth variance, etc.)
- BlinkRecord: Blink record (EAR value, blink state, etc.)

Detector Layer:
- BaseDetector: Abstract base class that defines the detector interface
- FaceDetector: Face detection using the YOLOv8 model
- RGBDLivenessDetector: RGBD depth-based liveness detection
- BlinkLivenessDetector: Blink-based liveness detection

Application Layer:
- UnifiedLivenessSystem: Main system class that coordinates all modules
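The data-structure layer can be pictured as plain value types along the following lines; the field names are inferred from the descriptions above and are not necessarily the project's exact members.

```cpp
#include <string>

// Person info kept by the personnel-management module (illustrative fields).
struct Person {
    int id = -1;
    std::string name;
    int sampleCount = 0;    // number of collected face samples
};

// Per-frame depth features used by the RGBD liveness detector.
struct DepthFeatures {
    double noseProtrusionMm = 0.0;   // nose tip vs. face surface, in mm
    double depthStdDevMm = 0.0;      // depth variation over the face ROI, in mm
};

// One entry in the blink detector's history.
struct BlinkRecord {
    double ear = 0.0;        // Eye Aspect Ratio for this frame
    bool eyeClosed = false;  // blink state derived from the EAR threshold
};
```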
- Polymorphic Design: Base and derived detector classes support runtime mode switching (see the sketch after this list)
- RAII Resource Management: Smart pointers and RAII ensure resource safety
- Multi-threading: The Record3D stream uses a callback mechanism, keeping the main loop separate from data capture
- Template Programming: Flexible use of the OpenCV Mat template class
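As a sketch of the polymorphic design and RAII ownership: the class names mirror the hierarchy above, but the interface shown is illustrative, not the project's exact API.

```cpp
#include <memory>
#include <opencv2/core.hpp>

// Abstract interface shared by all liveness detectors (illustrative signature).
class BaseDetector {
public:
    virtual ~BaseDetector() = default;
    virtual bool detect(const cv::Mat& rgb, const cv::Mat& depth) = 0;
};

class RGBDLivenessDetector : public BaseDetector {
public:
    bool detect(const cv::Mat& rgb, const cv::Mat& depth) override {
        // ... depth-based liveness decision ...
        return false;
    }
};

class BlinkLivenessDetector : public BaseDetector {
public:
    bool detect(const cv::Mat& rgb, const cv::Mat& /*depth*/) override {
        // ... EAR-based blink decision ...
        return false;
    }
};

// Runtime mode switching: the coordinator owns one detector through a smart pointer
// (RAII) and can swap implementations without changing the calling code.
std::unique_ptr<BaseDetector> makeDetector(bool useDepthMode) {
    if (useDepthMode) return std::make_unique<RGBDLivenessDetector>();
    return std::make_unique<BlinkLivenessDetector>();
}
```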
Educational Use License - Non-Commercial Only
This project is for educational and learning purposes only. Commercial use is prohibited. See LICENSE for details.
- Record3D - iPhone depth data capture
- OpenCV - Computer vision library
- YOLOv8-face - Face detection model