This repository provides an Automatic Number Plate Recognition (ANPR) pipeline built with:
- YOLOv3 (OpenCV DNN module) for license plate detection.
- EasyOCR for character recognition.
The system detects license plates in images, crops them, applies preprocessing, and uses OCR to extract the alphanumeric text. It is a portfolio-ready implementation showcasing an end-to-end computer vision and text-recognition pipeline.
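At a high level, the detection stage works roughly like the sketch below. This is a minimal illustration, not the repository's `main.py`: the image name, the 416×416 input size, and the 0.5/0.4 thresholds are assumptions.

```python
import cv2
import numpy as np

CFG = "models/config/darknet-yolov3.cfg"
WEIGHTS = "models/weights/model.weights"

# Load the Darknet model through OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)

img = cv2.imread("data/car1.jpg")  # placeholder image name
h, w = img.shape[:2]

# YOLOv3 expects a square blob scaled to [0, 1]; 416x416 is a common choice.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, scores = [], []
for out in outputs:
    for det in out:
        score = float(det[5:].max())  # best class score (single "plate" class)
        if score > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)

# Non-Maximum Suppression drops overlapping duplicate detections.
keep = np.array(cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)).flatten()
plate_crops = [img[max(y, 0):y + bh, max(x, 0):x + bw]
               for x, y, bw, bh in (boxes[i] for i in keep)]
print(f"Found {len(plate_crops)} plate(s)")
```

The cropped plates are then preprocessed and passed to EasyOCR, as described in the sections below.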
Repository layout:

```
anpr-license-plate-recognition/
├─ src/                  # Source code (main pipeline, utils)
│  ├─ main.py            # Entrypoint with CLI
│  └─ util.py            # Helper functions (NMS, drawing, outputs)
├─ configs/
│  └─ default.yaml       # Default configuration (paths, thresholds)
├─ models/               # YOLOv3 model assets
│  ├─ config/            # darknet-yolov3.cfg
│  ├─ weights/           # model.weights
│  └─ classes.names      # classes file
├─ data/                 # Sample images (git-ignored)
│  └─ README.md          # Instructions for placing datasets
├─ notebooks/            # (Optional) experiments, EDA
├─ docs/                 # Documentation
│  └─ assets/            # Figures, diagrams
├─ build/                # Outputs (annotated images, logs)
└─ README.md             # Project documentation
```
- Python 3.10+
- EasyOCR (pulls in PyTorch as a dependency); a separate Tesseract installation is not required.
- GPU optional (OpenCV DNN runs on the CPU by default).
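If a CUDA-enabled build of OpenCV is available, the DNN module can be pointed at the GPU; note that the standard `opencv-python` wheels from pip are CPU-only, in which case OpenCV silently falls back to the default backend. A minimal sketch, assuming the model paths from the layout above:

```python
import cv2

net = cv2.dnn.readNetFromDarknet("models/config/darknet-yolov3.cfg",
                                 "models/weights/model.weights")

# Only effective if OpenCV was built with CUDA support; otherwise inference
# stays on the CPU backend.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
```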
```powershell
# Clone repository
git clone https://github.com/AlbertoMarquillas/anpr-license-plate-recognition.git
cd anpr-license-plate-recognition

# (Optional) create virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt

# If requirements.txt is not present, install manually
pip install opencv-python easyocr numpy pyyaml matplotlib
```
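To confirm the core packages installed correctly, a quick check like the following can be run (versions will vary):

```python
# Quick environment check: import the core dependencies and print their versions.
import cv2
import easyocr
import numpy as np

print("OpenCV :", cv2.__version__)
print("NumPy  :", np.__version__)
print("EasyOCR:", getattr(easyocr, "__version__", "installed"))
```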
- Place the YOLOv3 model files in `models/`:
  - `models/config/darknet-yolov3.cfg`
  - `models/weights/model.weights`
  - `models/classes.names`
- Place input images in `data/`.
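Before running the full pipeline, the model files can be sanity-checked. The snippet below assumes the paths from the layout above and that `classes.names` contains one class name per line:

```python
import cv2

cfg = "models/config/darknet-yolov3.cfg"
weights = "models/weights/model.weights"
names = "models/classes.names"

# Loading raises an OpenCV error if either file is missing or corrupt.
net = cv2.dnn.readNetFromDarknet(cfg, weights)
print("Output layers:", net.getUnconnectedOutLayersNames())

with open(names, encoding="utf-8") as f:
    classes = [line.strip() for line in f if line.strip()]
print("Classes:", classes)
```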
Run the pipeline with default settings (YAML config):

```powershell
python .\src\main.py --save --show
```

Or specify arguments explicitly:

```powershell
python .\src\main.py `
  --input-dir data `
  --model-dir models `
  --cfg config/darknet-yolov3.cfg `
  --weights weights/model.weights `
  --classes classes.names `
  --langs en `
  --save --show
```

Key options:

- `--save` → save annotated outputs in `build/outputs/`.
- `--show` → display annotated images interactively.
- `--langs` → specify EasyOCR languages, e.g. `--langs en es`.
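The `--langs` codes are presumably passed straight to EasyOCR's `Reader`; for example, `--langs en es` would correspond to something like:

```python
import easyocr

# gpu=False keeps recognition on the CPU; the language codes mirror --langs.
reader = easyocr.Reader(["en", "es"], gpu=False)
```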
Detected plate text and confidence scores are printed to the terminal, and annotated images are saved in `build/outputs/`. Example output:

```
[car1.jpg] 1234ABC (score=0.89)
[car2.png] 4567XYZ (score=0.83)
```
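Lines like these can be produced directly from EasyOCR's output, which is a list of (bounding box, text, confidence) tuples. A hedged sketch, where the crop path is hypothetical and the `allowlist` is an assumption rather than what `main.py` necessarily does:

```python
import cv2
import easyocr

reader = easyocr.Reader(["en"], gpu=False)

# "build/crops/car1_plate.jpg" is a hypothetical cropped plate, not a file in the repo.
plate_crop = cv2.imread("build/crops/car1_plate.jpg")

# readtext returns a list of (bounding box, text, confidence) tuples.
results = reader.readtext(plate_crop, allowlist="ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
for _bbox, text, score in results:
    print(f"[car1.jpg] {text} (score={score:.2f})")
```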
- Datasets: not included. Place your own test images in `data/`.
- Models: not included. Add the YOLOv3 config, weights, and classes files to `models/`.
- See `data/README.md` and `models/README.md` for detailed instructions.
- License plate detection using YOLOv3 + OpenCV DNN.
- Non-Maximum Suppression (NMS) for cleaner bounding boxes.
- Plate preprocessing (grayscale, thresholding); see the sketch after this list.
- Text recognition with EasyOCR.
- CLI with PowerShell examples.
- Configurable thresholds and model paths via YAML.
- Portfolio-ready repo structure and documentation.
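The preprocessing step mentioned above can be sketched as follows; the crop path is hypothetical, and Otsu binarization is an assumption about what `util.py` actually applies:

```python
import cv2

# "build/crops/car1_plate.jpg" is a hypothetical cropped plate from the detection stage.
plate_crop = cv2.imread("build/crops/car1_plate.jpg")

# Grayscale + Otsu binarization before handing the crop to EasyOCR.
gray = cv2.cvtColor(plate_crop, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```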
- Building an end-to-end ANPR system combining detection + OCR.
- Using OpenCV DNN to load and run YOLO models.
- Handling detection outputs, confidence thresholds, and NMS.
- Integrating EasyOCR for multilingual text recognition.
- Designing a portfolio-friendly project structure for recruiters.
- Add Dockerfile for reproducible environments.
- Expand support for YOLOv5/YOLOv8 detection.
- Integrate video stream input (real-time ANPR).
- Add unit tests in `test/`.
- Provide pretrained model links via GitHub Releases.
This project is licensed under the MIT License.