# Emotion Classifier Using Verilog

This project implements a Haar Cascade face detector in Verilog and an emotion classifier in Python. The face detector is designed to be synthesized for an FPGA, while the emotion classifier runs on a host machine and communicates with the Verilog simulation.

## Directory Structure
```
.
├── verilog_face_detector/
│   ├── create_project.tcl
│   ├── Makefile
│   ├── parse_cascade.py
│   ├── prepare_test_images.py
│   ├── README.md
│   ├── requirements.txt
│   ├── test_cosimulation.sh
│   ├── visualize_architecture.py
│   ├── data/
│   │   ├── cascade_data.coe
│   │   ├── cascade_data.mem
│   │   ├── feature_lut.coe
│   │   ├── feature_lut.mem
│   │   └── haarcascade_frontalface_default.xml
│   ├── models/
│   │   ├── README.md
│   │   └── download_model.py
│   ├── sim/
│   │   ├── image.txt
│   │   ├── run_sim
│   │   ├── tb_face_detector.v
│   │   ├── test_all_images.sh
│   │   ├── waveform.gtkw
│   │   └── prepared_images/
│   │       └── ...
│   ├── src/
│   │   ├── control_fsm.v
│   │   ├── face_detector.v
│   │   ├── face_detector.xdc
│   │   ├── feature_calculator.v
│   │   ├── feature_lut_rom.v
│   │   ├── haar_cascade_rom.v
│   │   ├── integral_image.v
│   │   ├── stage_evaluator.v
│   │   └── weak_classifier.v
│   └── test_images/
│       └── ...
└── venv/
```
## Overview

This project is a hardware/software co-design that combines a hardware-accelerated face detector with a software-based emotion classifier.
- Face Detector: A hardware implementation of the Viola-Jones Haar Cascade algorithm in Verilog. It's designed to be synthesized for a Xilinx FPGA and is capable of real-time face detection in 64x64 grayscale images.
- Emotion Classifier: A Python-based emotion classifier that uses a pre-trained Mini-Xception model. It runs on a host machine and communicates with the Verilog simulation (or the FPGA) to classify the emotions of the detected faces.
## Features

- Hardware-Accelerated Face Detection: The Viola-Jones algorithm is implemented in Verilog for high-performance, real-time face detection.
- Software-Based Emotion Classification: A flexible and powerful emotion classifier that can be easily updated or replaced.
- Offline Functionality: The project is designed to be completely self-contained and work offline. The pre-trained emotion classification model is stored locally.
- Vivado Integration: A Tcl script is provided to automate the creation of a Vivado project for synthesis and deployment on a Xilinx FPGA.
- Comprehensive Verification Environment: The project includes a testbench, simulation scripts, and a co-simulation framework for verifying the functionality of the design.
## Prerequisites

- Icarus Verilog: For compiling and running the Verilog simulation.
- GTKWave: For viewing the simulation waveforms.
- Python 3: With the packages listed in `requirements.txt`.
- Xilinx Vivado: For synthesizing and deploying the design on an FPGA.
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/Emotion-Classifier-Using-Verilog.git
  cd Emotion-Classifier-Using-Verilog/verilog_face_detector
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Download the emotion classification model: The emotion classification model is not included in the repository due to its size. You will need to download the pre-trained Mini-Xception model and place it in the `models/` directory.
  - Option 1 (Recommended): Implement `download_model.py`. The `models/download_model.py` script is a placeholder; you can implement it to download the model automatically from a trusted source (see the sketch below).
  - Option 2: Manually download the model. Download the `emotion_mini_xception.h5` model and place it in the `verilog_face_detector/models/` directory.
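A minimal implementation of the placeholder could look like the following sketch; `MODEL_URL` is a hypothetical stand-in that must be replaced with a URL you trust:

```python
# Hypothetical sketch of models/download_model.py. MODEL_URL is a
# placeholder, not a real source -- substitute a host you trust.
import urllib.request
from pathlib import Path

MODEL_URL = "https://example.com/emotion_mini_xception.h5"  # placeholder
MODEL_PATH = Path(__file__).resolve().parent / "emotion_mini_xception.h5"

def main() -> None:
    if MODEL_PATH.exists():
        print(f"Model already present: {MODEL_PATH}")
        return
    print(f"Downloading {MODEL_URL} ...")
    urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    print(f"Saved to {MODEL_PATH}")

if __name__ == "__main__":
    main()
```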
## Usage

- Prepare the Haar Cascade data: The `haarcascade_frontalface_default.xml` file is included in the `data/` directory. Run the following command to convert it to a format that can be used by the Verilog code (a conceptual sketch of this conversion appears after this list):

  ```bash
  make parse-cascade
  ```
- Prepare the test images: Place your test images (e.g., `.jpg`, `.png`) in the `test_images/` directory. Then, run the following command to convert them to a format that can be used by the Verilog simulation (see the image-conversion sketch after this list):

  ```bash
  make prepare-images
  ```
- Start the emotion classification server: In a separate terminal, run the following command to start the Python server that will be used for emotion classification:

  ```bash
  make start-server
  ```
- Run the co-simulation: In another terminal, run the following command to start the Verilog simulation and connect it to the emotion classification server (a sketch of the socket link also follows this list):

  ```bash
  make run-cosim
  ```

  You can also run the simulation with a specific image:

  ```bash
  make run-image IMAGE=prepared_images/face_01.txt
  ```
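Conceptually, `make parse-cascade` walks OpenCV's cascade XML and emits the stage and feature tables as plain-text memory files for the ROM modules. The sketch below only dumps stage thresholds and classifier counts, and the Q16 fixed-point encoding is an assumption for illustration; `parse_cascade.py` defines the real format:

```python
# Conceptual sketch of the cascade-parsing step. The Q16 fixed-point
# encoding and output layout are assumptions, not the format that
# parse_cascade.py actually emits for the haar_cascade_rom module.
import xml.etree.ElementTree as ET

tree = ET.parse("data/haarcascade_frontalface_default.xml")
cascade = tree.getroot().find("cascade")

with open("data/cascade_data_demo.mem", "w") as out:
    for stage in cascade.find("stages"):
        threshold = float(stage.findtext("stageThreshold"))
        n_weak = int(stage.findtext("maxWeakCount"))
        # Assumed record: signed Q16 stage threshold, then classifier count.
        out.write(f"{int(threshold * 65536) & 0xFFFFFFFF:08x}\n")
        out.write(f"{n_weak:08x}\n")
```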
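The image conversion behind `make prepare-images` amounts to resizing each image to the 64x64 grayscale frame the detector expects and writing one hexadecimal byte per line, the format Verilog's `$readmemh` can load. A minimal sketch using Pillow (the actual `prepare_test_images.py` may differ in options and naming):

```python
# Sketch of the image-preparation step (assumes Pillow is installed);
# the project's prepare_test_images.py may differ in details.
from pathlib import Path
from PIL import Image

def image_to_mem(src: Path, dst: Path, size: int = 64) -> None:
    img = Image.open(src).convert("L").resize((size, size))  # 8-bit grayscale
    pixels = img.getdata()                                   # row-major order
    dst.write_text("\n".join(f"{p:02x}" for p in pixels) + "\n")

if __name__ == "__main__":
    out_dir = Path("sim/prepared_images")
    out_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path("test_images").iterdir()):
        if src.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            image_to_mem(src, out_dir / f"{src.stem}.txt")
            print(f"Prepared {src.name}")
```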
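The co-simulation link itself is a TCP socket between the simulation and the Python server. The framing below (raw grayscale window in, newline-terminated text label out, port 5000) is an assumption for illustration only; the actual protocol is whatever `make start-server` and `make run-cosim` implement:

```python
# Hedged sketch of the emotion-classification server side of the socket
# link. The port and wire format are assumptions, and classify() is a
# stub standing in for the Mini-Xception model.
import socket

HOST, PORT = "127.0.0.1", 5000  # assumed endpoint

def classify(face: bytes) -> str:
    return "neutral"  # stub: a real server would run the Keras model here

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        print(f"Listening on {HOST}:{PORT}")
        while True:
            conn, _ = srv.accept()
            with conn:
                face = conn.recv(64 * 64)  # one 64x64 grayscale window
                if face:
                    conn.sendall(classify(face).encode() + b"\n")

if __name__ == "__main__":
    main()
```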
## Verification

The project includes a comprehensive verification environment that allows you to test the functionality of the design.
- Testbench: The `sim/tb_face_detector.v` file is the main testbench for the face detector. It reads a test image, sends it to the `face_detector` module, and checks the output.
- Co-simulation: The co-simulation framework allows you to test the interaction between the Verilog code and the Python-based emotion classifier.
- Waveform Debugging: You can use GTKWave to view the simulation waveforms and debug the design. To open the waveforms, run the simulation and then execute:

  ```bash
  make wave
  ```
  Key Signals to Monitor in the FSM (`control_fsm.v`):

  - `state`: The current state of the finite state machine.
  - `ii_done`: Indicates that the integral image has been computed.
  - `stage_start`: Indicates the start of a new stage in the Haar cascade.
  - `stage_done`: Indicates the end of a stage.
  - `stage_passed`: Indicates whether the current stage has passed.
  - `window_x`, `window_y`: The current position of the sliding window.
  - `done`: Indicates that the face detection process is complete.
## FPGA Deployment

- Generate the Vivado project: Open Vivado and run the following command in the Tcl console:

  ```tcl
  source ../create_project.tcl
  ```

  This will create a new Vivado project, add the Verilog source files, and configure the IP cores.
- Add the constraints file: The `src/face_detector.xdc` file is a template for the design constraints. You will need to modify this file to match the pinout of your target FPGA board.
- Generate the bitstream: In Vivado, click the "Generate Bitstream" button to synthesize the design, implement it, and generate a bitstream.
- Program the FPGA: Use the generated bitstream to program your FPGA.
## Production Readiness

To make this project production-ready, consider the following:
- AXI Interface: For a Zynq-based device, you would need to add an AXI interface to the `face_detector` module to allow the ARM processor to communicate with the FPGA.
- Emotion Classification Model: The Mini-Xception model is a good starting point, but you may want to train your own model for better performance or to recognize a different set of emotions.
- Error Handling: Add more robust error handling to the Python server and the Verilog code.
- CI/CD: Set up a continuous integration and continuous delivery (CI/CD) pipeline to automate the testing and deployment process.
## How It Works

- Haar Cascade: The face detector uses a Haar cascade to identify faces in an image. The cascade is a series of stages, where each stage is a collection of weak classifiers (see the software model after this list).
- Integral Image: An integral image, in which each entry holds the sum of all pixels above and to its left, is used to speed up the calculation of the Haar features: the sum over any rectangle reduces to four lookups.
- Sliding Window: A sliding window is used to scan the image for faces.
- Co-simulation: The Verilog simulation communicates with the Python-based emotion classifier over a socket connection.
- Vivado Synthesis: The Verilog code is synthesized for a Xilinx FPGA using Vivado.
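To make the dataflow concrete, here is a small NumPy model of the pipeline the Verilog modules implement. The cascade values are dummies for illustration, real Haar features combine two or three weighted rectangles, and the hardware uses fixed-point arithmetic rather than floats:

```python
# Software model of the hardware pipeline: integral image, sliding
# window, and cascade stage evaluation. The single-stage "cascade"
# at the bottom uses dummy values purely for illustration.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    # ii[y, x] = sum of all pixels above and to the left, inclusive
    # (what integral_image.v accumulates as pixels stream in).
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Any rectangle sum costs just four lookups in the integral image.
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    return ii[y + h - 1, x + w - 1] - b - c + a

def window_passes(ii, x, y, stages):
    # A window is accepted only if every stage passes; each stage sums
    # its weak classifiers' votes against a stage threshold, so most
    # windows are rejected cheaply by the first few stages.
    for stage_threshold, weak_classifiers in stages:
        votes = 0.0
        for fx, fy, fw, fh, weight, thr, pass_v, fail_v in weak_classifiers:
            # A real Haar feature combines two or three weighted
            # rectangles; a single rectangle is used here for brevity.
            value = weight * rect_sum(ii, x + fx, y + fy, fw, fh)
            votes += pass_v if value >= thr else fail_v
        if votes < stage_threshold:
            return False
    return True

img = np.random.randint(0, 256, (64, 64)).astype(np.int64)
ii = integral_image(img)
stages = [(0.5, [(0, 0, 8, 8, 1.0 / 64, 100.0, 1.0, 0.0)])]  # dummy stage
hits = [(x, y)
        for y in range(0, 64 - 24, 4)  # sliding 24x24 window, step 4
        for x in range(0, 64 - 24, 4)
        if window_passes(ii, x, y, stages)]
print(f"{len(hits)} windows passed the dummy cascade")
```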