A computer vision system designed for the DD Robocon 2024 competition that uses YOLOv8 to detect and track colored balls (Blue, Purple, Red) and silos in real time. The system calculates distance measurements and positional offsets from the camera center, enabling autonomous robot navigation and object manipulation.
This system was developed for the DD Robocon 2024 competition, providing robots with vision capabilities to:
- Identify and locate colored balls on the competition field
- Detect silo positions for accurate ball placement
- Calculate real-time distances to objects for navigation
- Determine X-Y offsets from camera center for precise alignment
**Multi-Object Detection**
- Blue Ball detection
- Purple Ball detection
- Red Ball detection
- Silo detection
- YOLOv8-based real-time inference
**Distance Measurement**
- Focal length-based distance calculation
- Known object size reference (19.5cm for balls, 42.5cm for silo)
- Real-time distance display in centimeters
- Camera calibration support
**Position Tracking**
- X-Y offset calculation from camera center
- Bounding box center point detection
- Frame-relative positioning
- Real-time coordinate display
**Dual Mode Operation**
- Real-time mode: Live webcam/camera feed processing
- Image mode: Static image analysis and testing
- Configurable confidence thresholds
**Visual Feedback**
- Color-coded bounding boxes
- Confidence percentage display
- Distance overlay
- Position offset indicators
```
DD-Robocon-2024/
├── code/
│   ├── yolov8_robocon24.py      # Real-time camera detection
│   └── for_img.py               # Static image detection
├── moble/
│   └── robocon_ball&silo.pt     # Custom trained YOLOv8 model
└── notebook/
    └── train-yolov8-*.ipynb     # Training notebook
```
- Python 3.9+
- Webcam or USB camera
- 4GB RAM minimum
- GPU recommended for real-time performance
Create a virtual environment and install dependencies:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```

Alternatively, install the dependencies individually:

```bash
pip install ultralytics
pip install opencv-python
pip install cvzone
pip install numpy
```

The system uses predefined object dimensions for distance calculation:
```python
reference_object_widths_cm = {
    'BlueBall': 19.5,    # Blue ball diameter in cm
    'PurpleBall': 19.5,  # Purple ball diameter in cm
    'RedBall': 19.5,     # Red ball diameter in cm
    'Silo': 42.5         # Silo width in cm
}
```

The default focal length is set to 1200. For accurate distance measurement, calibrate your camera:
1. Place an object at a known distance (e.g., 100 cm)
2. Measure the object's width in pixels from the detection
3. Calculate the focal length:

```
focal_length = (object_width_pixels × distance_cm) / real_width_cm
```
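The calibration steps above can be sketched as a small helper (the function name and sample numbers are illustrative, not part of the repository):

```python
def calibrate_focal_length(object_width_pixels, distance_cm, real_width_cm):
    """Pinhole-model focal length from one reference measurement."""
    return (object_width_pixels * distance_cm) / real_width_cm

# Example: a 19.5 cm ball measured 234 px wide at a 100 cm distance
focal = calibrate_focal_length(234, 100, 19.5)
print(focal)  # 1200.0
```

Repeat the measurement at a couple of distances and average the results for a more stable estimate.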
Run the real-time detection system:

```bash
python code/yolov8_robocon24.py
```

Controls:
- Press `q` to quit
Output:
- Live video feed with detections
- Distance to each detected object (cm)
- X-Y offset from camera center
- Confidence percentage
- Console output with coordinates
Process a single image:

```bash
python code/for_img.py
```

Make sure to update the image path in the script:

```python
image = cv2.imread('your_image_path.jpeg')
```

The model is trained to detect 4 classes:
| Class | Index | Color Code | Purpose |
|---|---|---|---|
| BlueBall | 0 | Blue | Competition ball |
| PurpleBall | 1 | Purple | Competition ball |
| RedBall | 2 | Red | Competition ball |
| Silo | 3 | Target | Ball placement target |
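For the color-coded overlays, the class indices can be mapped to OpenCV BGR colors. This mapping is a sketch, not the repository's actual palette; in particular, the color chosen for Silo is an assumption:

```python
# Hypothetical class-index -> (name, BGR color) map for cv2 drawing
CLASS_COLORS = {
    0: ('BlueBall',   (255, 0, 0)),    # BGR blue
    1: ('PurpleBall', (255, 0, 255)),  # BGR magenta, standing in for purple
    2: ('RedBall',    (0, 0, 255)),    # BGR red
    3: ('Silo',       (0, 255, 255)),  # BGR yellow (assumed)
}
```

Note that OpenCV orders channels as BGR, not RGB, so `(255, 0, 0)` is blue.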
For each detected object, the system provides:
- Bounding Box: Red rectangle around the object
- Class & Confidence: Object type with detection confidence
- Distance: Calculated distance from camera in centimeters
- Position Offset: X and Y coordinates relative to frame center
- Positive X: Object is to the right
- Negative X: Object is to the left
- Positive Y: Object is below center
- Negative Y: Object is above center
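The sign convention above can be sketched as follows (names are illustrative, not the repository's actual code):

```python
def center_offsets(bbox, frame_width, frame_height):
    """X-Y offset of a bounding-box center from the frame center.

    bbox is (x1, y1, x2, y2) in pixels. Positive X means right of
    center; positive Y means below center (image Y grows downward).
    """
    cx = (bbox[0] + bbox[2]) // 2
    cy = (bbox[1] + bbox[3]) // 2
    return cx - frame_width // 2, cy - frame_height // 2

# A box centered at (365, 217) in a 640x480 frame
print(center_offsets((340, 194, 390, 240), 640, 480))  # (45, -23)
```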
Example console output:

```
Distance to BlueBall: 87.34 cm
BlueBall - X : 45, Y : -23
Distance to Silo: 134.21 cm
Silo - X : -12, Y : 56
```
The system uses the pinhole camera model for distance estimation:
```
Distance (cm) = (Real Object Width × Focal Length) / Object Width in Pixels
```
Formula Components:
- Real Object Width: Known physical size of the object
- Focal Length: Camera-specific constant (requires calibration)
- Object Width in Pixels: Measured from bounding box
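The formula translates directly into code (helper name and sample numbers are illustrative):

```python
def estimate_distance_cm(real_width_cm, focal_length, width_pixels):
    """Pinhole-model distance: real width x focal length / pixel width."""
    return (real_width_cm * focal_length) / width_pixels

# A 19.5 cm ball spanning 268 px, with the default focal length of 1200
print(round(estimate_distance_cm(19.5, 1200, 268), 2))  # 87.31
```

The estimate degrades as the object's pixel width shrinks, which is why the optimal detection range below is bounded.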
- Detection Speed: 30+ FPS on GPU
- Confidence Threshold: 50%
- Detection Range: 50cm - 300cm (optimal)
- Accuracy: ±5cm at 100cm distance (after calibration)
The custom YOLOv8 model was trained on:
- Competition-specific dataset with colored balls and silos
- Various lighting conditions
- Multiple angles and distances
- Augmented data for robustness
The training notebook is available in the `notebook/` directory.
This vision system can be integrated with robot control systems:
```python
# Example pseudo-code for robot navigation
if distance_cm < 30:       # Object is close
    if abs(X) < 10:        # Object is centered horizontally
        # Trigger gripper/mechanism
        robot.grab()
    else:
        # Align robot with object
        robot.turn(angle=X * correction_factor)
else:
    # Move forward
    robot.move_forward()
```

Issue: Distance measurements are inaccurate
- Solution: Calibrate focal length for your specific camera
Issue: Low detection confidence
- Solution: Improve lighting conditions
- Solution: Ensure objects are within optimal detection range
Issue: Slow FPS
- Solution: Use GPU acceleration
- Solution: Reduce camera resolution
- Solution: Use YOLOv8n (nano) model
Issue: Camera not detected
- Solution: Change the camera index in `cv2.VideoCapture(0)` to 1, 2, etc.
- Ensure proper lighting on competition field
- Test with actual competition balls and silos
- Calibrate distance measurements on competition day
- Consider lens distortion for edge detections
- Implement redundancy for critical decisions
- Multiple camera support for stereo vision
- Ball trajectory prediction
- Automatic focal length calibration
- Object tracking across frames
- Real-time telemetry display
- Integration with ROS (Robot Operating System)
MIT License - See LICENSE file for details
- DD Robocon 2024 Competition
- Ultralytics YOLOv8 team
- OpenCV community
For questions or collaboration:
- GitHub: @Ojas-Thombare
Note: This system is designed specifically for DD Robocon 2024 competition requirements. Modify object dimensions and detection classes as needed for your specific use case.