Ojas-Thombare/DD-Robocon-2024


DD Robocon 2024 - Ball & Silo Detection System

A computer vision system for the DD Robocon 2024 competition that uses YOLOv8 to detect and track colored balls (Blue, Purple, Red) and silos in real time. The system calculates distance measurements and positional offsets from the camera center, enabling autonomous robot navigation and object manipulation.

Competition Context

This system was developed for the DD Robocon 2024 competition, providing robots with vision capabilities to:

  • Identify and locate colored balls on the competition field
  • Detect silo positions for accurate ball placement
  • Calculate real-time distances to objects for navigation
  • Determine X-Y offsets from camera center for precise alignment

Features

  • Multi-Object Detection

    • Blue Ball detection
    • Purple Ball detection
    • Red Ball detection
    • Silo detection
    • YOLOv8-based real-time inference
  • Distance Measurement

    • Focal length-based distance calculation
    • Known object size reference (19.5 cm for balls, 42.5 cm for the silo)
    • Real-time distance display in centimeters
    • Camera calibration support
  • Position Tracking

    • X-Y offset calculation from camera center
    • Bounding box center point detection
    • Frame-relative positioning
    • Real-time coordinate display
  • Dual Mode Operation

    • Real-time mode: Live webcam/camera feed processing
    • Image mode: Static image analysis and testing
    • Configurable confidence thresholds
  • Visual Feedback

    • Color-coded bounding boxes
    • Confidence percentage display
    • Distance overlay
    • Position offset indicators

Repository Layout

DD-Robocon-2024/
├── code/
│   ├── yolov8_robocon24.py    # Real-time camera detection
│   └── for_img.py             # Static image detection
├── moble/
│   └── robocon_ball&silo.pt   # Custom trained YOLOv8 model
└── notebook/
    └── train-yolov8-*.ipynb   # Training notebook

Requirements

  • Python 3.9+
  • Webcam or USB camera
  • 4GB RAM minimum
  • GPU recommended for real-time performance

Installation

Create a virtual environment and install dependencies:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

Required Dependencies

pip install ultralytics
pip install opencv-python
pip install cvzone
pip install numpy

Configuration

Object Specifications

The system uses predefined object dimensions for distance calculation:

reference_object_widths_cm = {
    'BlueBall': 19.5,      # Blue ball diameter in cm
    'PurpleBall': 19.5,    # Purple ball diameter in cm
    'RedBall': 19.5,       # Red ball diameter in cm
    'Silo': 42.5           # Silo width in cm
}

Focal Length Calibration

The default focal length is 1200 (in pixel units). For accurate distance measurement, calibrate your camera:

  1. Place an object at a known distance (e.g., 100 cm)
  2. Measure the object width in pixels from the detection
  3. Calculate focal length:
    focal_length = (object_width_pixels × distance_cm) / real_width_cm
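The steps above can be sketched as a small helper (the pixel width and distance values are illustrative, not measured):

```python
def compute_focal_length(object_width_pixels: float,
                         distance_cm: float,
                         real_width_cm: float) -> float:
    """Pinhole-model focal length from one known-distance measurement (step 3)."""
    return (object_width_pixels * distance_cm) / real_width_cm

# Illustrative numbers: a 19.5 cm ball measured as 234 px wide at 100 cm
print(compute_focal_length(234, 100, 19.5))  # 1200.0
```

Repeating the measurement at two or three distances and averaging the results helps smooth out bounding-box jitter.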

Usage

Real-time Detection (Webcam)

Run the real-time detection system:

python code/yolov8_robocon24.py

Controls:

  • Press q to quit

Output:

  • Live video feed with detections
  • Distance to each detected object (cm)
  • X-Y offset from camera center
  • Confidence percentage
  • Console output with coordinates

Static Image Detection

Process a single image:

python code/for_img.py

Make sure to update the image path in the script:

image = cv2.imread('your_image_path.jpeg')

Detection Classes

The model is trained to detect 4 classes:

Class        Index   Color    Purpose
BlueBall     0       Blue     Competition ball
PurpleBall   1       Purple   Competition ball
RedBall      2       Red      Competition ball
Silo         3       —        Ball placement target
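In code, the class-index mapping above might look like the following sketch (the BGR color values are illustrative choices, not taken from the repository scripts):

```python
# Index-to-name mapping for the 4 trained classes
class_names = {0: 'BlueBall', 1: 'PurpleBall', 2: 'RedBall', 3: 'Silo'}

# Illustrative box colors; OpenCV uses BGR channel ordering
colors_bgr = {
    'BlueBall': (255, 0, 0),
    'PurpleBall': (255, 0, 255),
    'RedBall': (0, 0, 255),
    'Silo': (0, 255, 255),
}

print(class_names[0])  # BlueBall
```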

Output Information

For each detected object, the system provides:

  1. Bounding Box: Red rectangle around the object
  2. Class & Confidence: Object type with detection confidence
  3. Distance: Calculated distance from camera in centimeters
  4. Position Offset: X and Y coordinates relative to frame center
    • Positive X: Object is to the right
    • Negative X: Object is to the left
    • Positive Y: Object is below center
    • Negative Y: Object is above center
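The offset convention above can be derived from a bounding box as in this sketch, assuming (x1, y1, x2, y2) pixel corners and the frame dimensions:

```python
def center_offset(x1: int, y1: int, x2: int, y2: int,
                  frame_w: int, frame_h: int) -> tuple[int, int]:
    """X-Y offset of a bounding-box center relative to the frame center.
    Positive X = right of center; positive Y = below center (image coordinates)."""
    cx = (x1 + x2) // 2
    cy = (y1 + y2) // 2
    return cx - frame_w // 2, cy - frame_h // 2

# A box centered at (365, 217) in a 640x480 frame: 45 px right, 23 px above center
print(center_offset(340, 194, 390, 240, 640, 480))  # (45, -23)
```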

Console Output Example

Distance to BlueBall: 87.34 cm
BlueBall - X : 45, Y : -23
Distance to Silo: 134.21 cm
Silo - X : -12, Y : 56

Distance Calculation

The system uses the pinhole camera model for distance estimation:

Distance (cm) = (Real Object Width × Focal Length) / Object Width in Pixels

Formula Components:

  • Real Object Width: Known physical size of the object
  • Focal Length: Camera-specific constant (requires calibration)
  • Object Width in Pixels: Measured from bounding box
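The formula translates directly to a one-line function; the 268 px width below is an illustrative measurement:

```python
def estimate_distance_cm(real_width_cm: float,
                         focal_length: float,
                         width_pixels: float) -> float:
    """Pinhole model: distance = (real width x focal length) / pixel width."""
    return (real_width_cm * focal_length) / width_pixels

# With the default focal length of 1200, a 19.5 cm ball spanning 268 px
# sits roughly 87 cm from the camera
print(f"{estimate_distance_cm(19.5, 1200, 268):.2f} cm")
```

Note the model assumes the object faces the camera squarely; a partially occluded or edge-of-frame bounding box shrinks the pixel width and inflates the estimated distance.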

Performance

  • Detection Speed: 30+ FPS on GPU
  • Confidence Threshold: 50%
  • Detection Range: 50cm - 300cm (optimal)
  • Accuracy: ±5cm at 100cm distance (after calibration)

Model Training

The custom YOLOv8 model was trained on:

  • Competition-specific dataset with colored balls and silos
  • Various lighting conditions
  • Multiple angles and distances
  • Augmented data for robustness

Training notebook available in notebook/ directory.

Integration with Robot

This vision system can be integrated with robot control systems:

# Example navigation logic; the robot API (grab, turn, move_forward) is hypothetical
def navigate(robot, distance_cm, x_offset, correction_factor=0.5):
    if distance_cm < 30:            # object is close
        if abs(x_offset) < 10:      # object is centered
            robot.grab()            # trigger gripper/mechanism
        else:
            # align robot with object using the X offset from camera center
            robot.turn(angle=x_offset * correction_factor)
    else:
        robot.move_forward()        # approach the object

Troubleshooting

Issue: Distance measurements are inaccurate

  • Solution: Calibrate focal length for your specific camera

Issue: Low detection confidence

  • Solution: Improve lighting conditions
  • Solution: Ensure objects are within optimal detection range

Issue: Slow FPS

  • Solution: Use GPU acceleration
  • Solution: Reduce camera resolution
  • Solution: Use YOLOv8n (nano) model

Issue: Camera not detected

  • Solution: Change camera index in cv2.VideoCapture(0) to 1, 2, etc.

Competition Notes

  • Ensure proper lighting on competition field
  • Test with actual competition balls and silos
  • Calibrate distance measurements on competition day
  • Consider lens distortion for edge detections
  • Implement redundancy for critical decisions

Future Enhancements

  • Multiple camera support for stereo vision
  • Ball trajectory prediction
  • Automatic focal length calibration
  • Object tracking across frames
  • Real-time telemetry display
  • Integration with ROS (Robot Operating System)

License

MIT License - See LICENSE file for details

Acknowledgments

  • DD Robocon 2024 Competition
  • Ultralytics YOLOv8 team
  • OpenCV community

Contact

For questions or collaboration:


Note: This system is designed specifically for DD Robocon 2024 competition requirements. Modify object dimensions and detection classes as needed for your specific use case.
