Robustness of ANPR to localized, image-space adversarial perturbations
Research notebook accompanying a broader effort on the robustness of modern ANPR systems to image-space adversarial perturbations. The pipeline combines YOLO-based license plate detection with PaddleOCR-based text recognition and evaluates three attack scenarios.
- Overview
- Environment & Setup
- Data
- Models
- Baseline Evaluation
- Attacks
- Detection DoS
- Targeted Region Transfer
- Imperceptible OCR Attack
- Metrics & Reporting
- Reproducibility Notes
- How to Run
- Ethical Use
- CPS Integration (MITM Threat Model)
- MQTT Transport Variant
- Technical Report
- Results Gallery (Figures)
- References
- Acknowledgments
- How to Cite
- Goal: Evaluate the robustness of the ANPR system to image-space adversarial perturbations that (a) break plate detection or (b) alter the OCR output, while keeping perturbations localized and visually subtle when possible.
- Pipeline:
- YOLO detector for plate localization (bounding boxes + confidence).
- PaddleOCR for text recognition within detected plate regions.
- Three attacks are evaluated: a detection DoS, a targeted region transfer, and an imperceptible OCR attack.
- Notebook: adversarial_ANPR.ipynb (run cells top-to-bottom).
- Python: 3.9–3.11 recommended (tested with common ML stacks).
- Install dependencies:
pip install -r requirements.txt
- GPU (optional but recommended): Install a CUDA-enabled PyTorch build per your CUDA version for faster YOLO inference.
- Kaggle access: The notebook uses `kagglehub` to download the dataset.
  - Ensure you have a Kaggle account and have accepted the dataset terms.
  - Configure Kaggle API credentials (KAGGLE_USERNAME / KAGGLE_KEY) or sign in as required by kagglehub (see the sketch below).
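A minimal credential-setup sketch, assuming you export the Kaggle API key as environment variables before the download cell; kagglehub also accepts a `~/.kaggle/kaggle.json` file or an interactive login, and the placeholder values below are obviously not real credentials:

```python
import os

# Hypothetical explicit setup; only needed if kaggle.json is not configured.
os.environ.setdefault("KAGGLE_USERNAME", "<your-kaggle-username>")
os.environ.setdefault("KAGGLE_KEY", "<your-kaggle-api-key>")
```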
- Root notebook: adversarial_ANPR.ipynb
- Project README: README.md
- Dependencies: requirements.txt
- Technical report: technicalReport/Adversarial_Machine_Learning_for_ANPR.pdf
- Figures: technicalReport/images
- Dataset: Spain License Plate Dataset from Kaggle
  - Source: `unidpro/spain-license-plate-dataset` (downloaded via `kagglehub.dataset_download(...)`).
  - The notebook automatically downloads and lists a subset of `.jpg`/`.png` files for experiments (see the sketch below).
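A minimal sketch of the download-and-list step, assuming kagglehub's default cache location; variable names such as `image_files` are illustrative:

```python
import glob
import os

import kagglehub

# Download (or reuse a cached copy of) the dataset and get its local path
dataset_path = kagglehub.dataset_download("unidpro/spain-license-plate-dataset")

# Collect a subset of .jpg/.png files for the experiments
image_files = sorted(
    glob.glob(os.path.join(dataset_path, "**", "*.jpg"), recursive=True)
    + glob.glob(os.path.join(dataset_path, "**", "*.png"), recursive=True)
)
print(f"{len(image_files)} images found under {dataset_path}")
```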
- Detector: YOLO (Ultralytics) loaded from a local checkpoint.
  - Code expects a weight file at `../models/best.pt` relative to the notebook. Adjust the path if needed.
- OCR: PaddleOCR initialized with `use_angle_cls=True, lang="en"` (initialization sketch below).
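A minimal initialization sketch matching the settings above; the checkpoint path is the one the notebook expects, so adjust it to your layout:

```python
from ultralytics import YOLO
from paddleocr import PaddleOCR

# Plate detector from the local checkpoint
detector = YOLO("../models/best.pt")

# OCR engine with angle classification and the English model
ocr = PaddleOCR(use_angle_cls=True, lang="en")
```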
- The notebook loads N sample images and computes:
  - Detection presence and confidence for the first predicted plate per image.
  - A quick OCR probe on a detected plate using the helper `extract_license_plate()`, which selects the highest-confidence text and strips non-alphanumeric characters (re-created in the sketch below).
- Baseline outputs printed in the notebook include a detection success rate and sample image shapes.
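A sketch of the baseline loop, reusing `detector`, `ocr`, and `image_files` from the snippets above. The `extract_license_plate()` helper is re-implemented here from its description, so the notebook's version may differ, and the OCR result indexing depends on the PaddleOCR version:

```python
import re
import cv2

def extract_license_plate(ocr_result):
    """Pick the highest-confidence OCR line and drop non-alphanumerics."""
    best_text, best_conf = "", 0.0
    for line in (ocr_result[0] or []):
        text, conf = line[1]
        if conf > best_conf:
            best_text, best_conf = text, conf
    return re.sub(r"[^A-Za-z0-9]", "", best_text), best_conf

n_samples, detected = 20, 0
for path in image_files[:n_samples]:
    img = cv2.imread(path)
    result = detector(img, verbose=False)[0]
    if len(result.boxes) == 0:
        continue
    detected += 1
    x1, y1, x2, y2 = map(int, result.boxes.xyxy[0].tolist())  # first predicted plate
    det_conf = float(result.boxes.conf[0])
    plate_roi = img[y1:y2, x1:x2]
    text, ocr_conf = extract_license_plate(ocr.ocr(plate_roi, cls=True))
    print(f"{path}: shape={img.shape}, det_conf={det_conf:.2f}, text={text} ({ocr_conf:.2f})")

print(f"Detection success rate: {100 * detected / n_samples:.1f}%")
```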
High-level diagrams of attacker position and capabilities in a networked ANPR setting.
- Function: `fgsm_attack_simple(model, image, epsilon, targeted_region=None)` (see the sketch below).
- Idea: Add bounded, pixel-space perturbations (uniform random in this implementation) with magnitude `epsilon` (0–255 scale). Optionally restrict the perturbation to the detected plate region (bbox) to localize changes.
- Procedure:
  - For each successfully detected image, sweep `epsilon ∈ {5, 10, 15, 20, 25, 30, 40, 50}`.
  - Re-run YOLO on the perturbed image and record the new confidence.
  - Success criterion: the detector misses the plate OR confidence drops below 0.5.
- Outputs:
  - Console summary (per-image results, confidence drops, success rate).
  - Visualizations saved: `original_image_{i}.png`, `adversarial_image_{i}.png`, `perturbation_{i}.png`.
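A minimal sketch of the bounded random perturbation and the ε sweep, reusing `detector`, `img`, and the baseline bbox from the snippets above; the notebook's `fgsm_attack_simple` also receives the model and may differ in details:

```python
import numpy as np

def random_bounded_perturbation(image, epsilon, targeted_region=None, rng=None):
    """Uniform-random pixel noise in [-epsilon, epsilon] (0-255 scale),
    optionally restricted to the detected plate bbox."""
    rng = rng or np.random.default_rng()
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape, dtype=np.int16)
    if targeted_region is not None:
        bx1, by1, bx2, by2 = targeted_region
        mask = np.zeros_like(noise)
        mask[by1:by2, bx1:bx2] = noise[by1:by2, bx1:bx2]
        noise = mask
    return np.clip(image.astype(np.int16) + noise, 0, 255).astype(np.uint8)

# Sweep epsilon and re-run the detector on each perturbed image
for eps in [5, 10, 15, 20, 25, 30, 40, 50]:
    adv_img = random_bounded_perturbation(img, eps, targeted_region=(x1, y1, x2, y2))
    res = detector(adv_img, verbose=False)[0]
    new_conf = float(res.boxes.conf[0]) if len(res.boxes) else 0.0
    print(f"eps={eps:2d}  new_conf={new_conf:.2f}  success={new_conf < 0.5}")
```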
Illustration of the DoS-style perturbation that reduces or removes YOLO detections.
- Idea: Transfer the visual appearance of a target plate’s ROI into the source image’s detected plate ROI, enforcing a bounded delta and smoothing edges to remain plausible (see the sketch below).
- Steps:
  - Detect the plate on the source image (bbox S) and on the target image (bbox T).
  - Resize the target ROI to the source ROI size; apply scale/offset; clamp the per-pixel delta by `epsilon=80`.
  - Apply mild Gaussian smoothing to mask seams.
  - Evaluate YOLO on the adversarial image and compute the IoU between the new detection and T to check whether the detector “migrates” toward the target-like region.
- Outputs:
  - Console logs with bbox, confidence, and IoU to the target.
  - Visualization saved: `adv_region_transfer_strong.png`, plus side-by-side plots in the notebook.
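A sketch of the transfer step under the bounds described above, reusing `detector` and `image_files` from the earlier snippets and assuming both images yield a detection; `iou()` and `region_transfer()` are illustrative helpers, not the notebook's exact code:

```python
import cv2
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: max(0, r[2] - r[0]) * max(0, r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def region_transfer(src_img, src_box, tgt_roi, epsilon=80, ksize=5):
    """Paste a resized target ROI into the source plate bbox, clamping the
    per-pixel change to +/- epsilon and blurring the pasted region to hide seams."""
    bx1, by1, bx2, by2 = src_box
    src_roi = src_img[by1:by2, bx1:bx2].astype(np.int16)
    tgt = cv2.resize(tgt_roi, (bx2 - bx1, by2 - by1)).astype(np.int16)
    delta = np.clip(tgt - src_roi, -epsilon, epsilon)           # bounded delta
    region = np.clip(src_roi + delta, 0, 255).astype(np.uint8)
    region = cv2.GaussianBlur(region, (ksize, ksize), 0)        # mild smoothing
    adv = src_img.copy()
    adv[by1:by2, bx1:bx2] = region
    return adv

# Example use: detect on a source and a target image, then transfer and re-check
src_img, tgt_img = cv2.imread(image_files[0]), cv2.imread(image_files[1])
sbox = tuple(map(int, detector(src_img, verbose=False)[0].boxes.xyxy[0].tolist()))
tbox = tuple(map(int, detector(tgt_img, verbose=False)[0].boxes.xyxy[0].tolist()))
adv = region_transfer(src_img, sbox, tgt_img[tbox[1]:tbox[3], tbox[0]:tbox[2]])
res = detector(adv, verbose=False)[0]
if len(res.boxes):
    new_box = tuple(map(int, res.boxes.xyxy[0].tolist()))
    print("conf:", float(res.boxes.conf[0]), "IoU to target bbox:", iou(new_box, tbox))
```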
Example of transferring a target plate’s ROI appearance into a source frame.
- Goal: Change the OCR text with visually subtle, localized perturbations restricted to high-frequency regions of the plate.
- Method (see the sketch below):
  - Build an edge-weighted perturbation mask via Canny + dilation + blur to focus on character strokes and boundaries.
  - Perform a finite-difference directional search in the masked ROI:
    - Sample a smoothed random direction `z`.
    - Evaluate OCR confidence for `ROI ± σ·z` and choose the direction that reduces the current OCR confidence/text stability.
    - Update with a sign step bounded by `ε`; apply bilateral filtering for visual smoothness.
  - Early stop when the OCR text changes from the baseline.
- Hyperparameters (example sweeps in the notebook): `epsilon_list = [4, 6, 8, 10, 12]`, `steps_list = [6, 8, 10, 12]`, `sigma = 1.5`.
- Outputs:
  - Console prints showing OCR text and confidence per step.
  - Saved images: `tgt_image.png`, `adv_fgsm_sweep.png` (with side-by-side views and amplified |Δ|).
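A condensed sketch of the masked finite-difference search, reusing `ocr` and `extract_license_plate()` from the earlier snippets; the helper names and exact filtering parameters are illustrative:

```python
import cv2
import numpy as np

def edge_mask(roi):
    """Edge-weighted mask (Canny + dilation + blur) concentrating changes on strokes."""
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 50, 150), np.ones((3, 3), np.uint8))
    return cv2.GaussianBlur(edges.astype(np.float32) / 255.0, (5, 5), 0)[..., None]

def ocr_read(roi_u8):
    return extract_license_plate(ocr.ocr(roi_u8, cls=True))   # (text, confidence)

def imperceptible_ocr_attack(roi, epsilon=8, steps=10, sigma=1.5):
    baseline_text, _ = ocr_read(roi)
    mask, roi_f = edge_mask(roi), roi.astype(np.float32)
    adv = roi_f.copy()
    for _ in range(steps):
        # Smoothed random direction, restricted to edge regions
        z = cv2.GaussianBlur(np.random.randn(*roi.shape).astype(np.float32), (5, 5), 0) * mask
        # Finite-difference probe: pick the sign that lowers OCR confidence more
        plus = np.clip(adv + sigma * z, 0, 255).astype(np.uint8)
        minus = np.clip(adv - sigma * z, 0, 255).astype(np.uint8)
        step = np.sign(z) if ocr_read(plus)[1] < ocr_read(minus)[1] else -np.sign(z)
        # Bounded sign update, then bilateral filtering for visual smoothness
        adv = np.clip(adv + step, roi_f - epsilon, roi_f + epsilon)
        adv = cv2.bilateralFilter(np.clip(adv, 0, 255).astype(np.uint8), 5, 25, 25).astype(np.float32)
        text, conf = ocr_read(adv.astype(np.uint8))
        print(text, conf)
        if text and text != baseline_text:   # early stop once the reading changes
            break
    return np.clip(adv, 0, 255).astype(np.uint8)
```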
Representative frames from the OCR-focused attack showing the iterative optimization loop.
- Detection DoS:
  - Per-ε success rate = (# successes / # attempts) × 100 (see the aggregation sketch below).
  - Confidence drop statistics (average per ε).
- Targeted Transfer:
  - Post-attack detection confidence and IoU between the final bbox and the target bbox.
- Imperceptible OCR:
  - Whether the text changed; final confidence of the new text; visual inspection of the perturbation magnitude.
- The notebook prints tabular summaries and produces figures for quick interpretation.
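For the per-ε success rate and confidence-drop averages, a minimal aggregation sketch; the container and field names are illustrative, with one entry appended per attacked image during the Attack 1 sweep:

```python
from collections import defaultdict

per_eps = defaultdict(list)   # eps -> list of (success: bool, conf_drop: float)
# e.g. per_eps[10].append((True, 0.42)) inside the Attack 1 sweep

for eps, rows in sorted(per_eps.items()):
    success_rate = 100 * sum(s for s, _ in rows) / len(rows)
    avg_drop = sum(d for _, d in rows) / len(rows)
    print(f"eps={eps:2d}  success={success_rate:5.1f}%  avg_conf_drop={avg_drop:.2f}")
```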
- Randomness: Attacks 1 and 3 depend on random noise directions; set Python/NumPy seeds early in the notebook for more determinism if desired (see the sketch after this list).
- Hardware: GPU inference can change timing; core results should remain qualitatively similar.
- Weights: Ensure `../models/best.pt` exists and corresponds to your intended detector; results depend on detector quality.
- Data: Make sure the Kaggle dataset download completes successfully and that sample images are available.
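A minimal seeding sketch, assuming Python's `random` module and NumPy are the only sources of randomness that matter for Attacks 1 and 3:

```python
import random
import numpy as np

SEED = 0          # any fixed value
random.seed(SEED)
np.random.seed(SEED)
```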
- Install requirements: `pip install -r requirements.txt`.
- Ensure the YOLO weights are accessible at the path used in the notebook (or edit the path).
- Open and run `adversarial_ANPR.ipynb` in order:
  - Import + dataset download
  - Model initialization
  - Baseline detection + OCR probe
  - Attack 1 (Detection DoS) + visualization + analysis
  - Attack 2 (Targeted Region Transfer) + OCR probe
  - Attack 3 (Imperceptible OCR Attack)
- Inspect generated images and console summaries for quantitative and qualitative results.
This work is for academic research on robustness, safety, and defenses of computer vision systems. Do not deploy or apply these techniques for unlawful or unethical purposes. Always follow local laws and institutional review policies when working with license plate imagery.
- Ultralytics YOLO: https://docs.ultralytics.com/
- PaddleOCR: https://github.com/PaddlePaddle/PaddleOCR
- Kaggle Dataset (Spain License Plate): https://www.kaggle.com/datasets/unidpro/spain-license-plate-dataset
If you use this work in your research, academic projects, or publications, please include the corresponding citation:
A. de Castro, "Adversarial Machine Learning Attacks on Automatic Number Plate Recognition Systems". Zenodo, Dec. 13, 2025. doi: 10.5281/zenodo.18302845.
The attack model and adversarial ML methodology implemented in this repository are described in detail in the following technical report:
A. de Castro. Adversarial Machine Learning Attacks on Automatic Number Plate Recognition Systems. Technical Report, Zenodo, 2025.