A comprehensive medical imaging analysis platform that provides AI-powered medical image segmentation, anatomical structure analysis, and 3D visualization capabilities.
ORION is a modular medical AI system designed for medical research and clinical analysis workflows. It combines state-of-the-art AI models with robust medical image processing to provide:
- DICOM Processing: Complete DICOM file handling and series management
- AI Segmentation: MedSAM-powered medical image segmentation
- Anatomical Analysis: Real-time anatomical structure detection and ROI analysis
- 3D Visualization: Volume rendering and mesh generation from medical images
- Cache Management: Intelligent caching for improved performance
- RESTful API: Complete FastAPI-based backend for integration
```
┌─────────────────────────────────────────────────────────────────┐
│                     ORION Medical AI System                     │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │   Frontend UI   │  │   API Gateway   │  │   Admin Panel   │  │
│  │   (External)    │  │    (FastAPI)    │  │   (Optional)    │  │
│  └────────┬────────┘  └────────┬────────┘  └────────┬────────┘  │
│           └────────────────────┼────────────────────┘           │
│                                ▼                                │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │              Main Application (testing.py)                │  │
│  │  ┌───────────────┐  ┌───────────────┐  ┌───────────────┐  │  │
│  │  │    Models     │  │    AI Core    │  │  Cache/Store  │  │  │
│  │  │ • Data Models │  │ • MedSAM      │  │ • Disk Cache  │  │  │
│  │  │ • Pydantic    │  │ • ROI Analyzer│  │ • Vector DB   │  │  │
│  │  │   Schemas     │  │ • AI Models   │  │ • Memory Cache│  │  │
│  │  └───────────────┘  └───────────────┘  └───────────────┘  │  │
│  │  ┌─────────────────────────────────────────────────────┐  │  │
│  │  │                  Utilities Module                   │  │  │
│  │  │ • DICOM Processing    • 3D Mesh Generation          │  │  │
│  │  │ • Image Analysis      • Google Cloud Integration    │  │  │
│  │  │ • ROI Calculations    • File System Operations      │  │  │
│  │  └─────────────────────────────────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│                      External Dependencies                      │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  PyTorch/CUDA   │  │  Google Cloud   │  │   File System   │  │
│  │   (AI Models)   │  │     Storage     │  │     (DICOM)     │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
└─────────────────────────────────────────────────────────────────┘
```
Data flow through the system:

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    DICOM    │────▶│   Volume    │────▶│  AI Model   │
│    Input    │     │ Processing  │     │  Inference  │
└─────────────┘     └─────────────┘     └─────────────┘
                           │                   │
                           ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Results   │◀────│    Cache    │◀────│ Anatomical  │
│   Output    │     │ Management  │     │  Analysis   │
└─────────────┘     └─────────────┘     └─────────────┘
```
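To make the first two stages of this flow concrete, here is a minimal, generic sketch of reading a DICOM series and stacking it into a 3D volume with pydicom and NumPy. It is illustrative only (the directory path is a placeholder) and is not ORION's internal implementation:

```python
from pathlib import Path

import numpy as np
import pydicom

# DICOM Input: read every slice in a series directory (path is a placeholder)
series_dir = Path("/path/to/dicom/series")
slices = [pydicom.dcmread(p) for p in sorted(series_dir.glob("*.dcm"))]

# Volume Processing: order the slices and stack them into a 3D array,
# applying rescale slope/intercept to get modality-scaled values
slices.sort(key=lambda ds: int(ds.InstanceNumber))
volume = np.stack(
    [
        ds.pixel_array * float(ds.get("RescaleSlope", 1)) + float(ds.get("RescaleIntercept", 0))
        for ds in slices
    ]
).astype(np.float32)

print(f"Assembled volume with shape {volume.shape}")  # (slices, rows, cols)
```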
Prerequisites:

- Python 3.8+
- CUDA-compatible GPU (optional, for accelerated AI inference)
- At least 8GB RAM
- 10GB free disk space
Installation:

- Clone the repository

  ```bash
  cd /path/to/your/workspace
  git clone <repository-url>
  cd ORION
  ```

- Create a virtual environment

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```

- Run the application

  ```bash
  python testing.py
  ```

The server will start on http://localhost:6500.
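To verify the server came up, you can hit the health-check endpoint described later in this README (default port 6500 assumed):

```bash
# Should return a JSON health status
curl http://localhost:6500/api/health
```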
The repository is organized as follows:

```
ORION/
├── testing.py                   # Main application entry point
├── testing_original_backup.py   # Original monolithic version
├── requirements.txt             # Python dependencies
├── .env                         # Environment configuration
├── .gitignore                   # Git ignore rules
├── README.md                    # This file
│
├── modules/                     # Modularized components
│   ├── __init__.py
│   ├── models.py                # Data models and Pydantic schemas
│   ├── ai_core.py               # AI models (MedSAM, ROI Analyzer)
│   ├── cache_storage.py         # Cache and storage management
│   └── utils.py                 # Utility functions and helpers
│
├── ai_models/                   # AI model weights and configs
│   └── Swin_medsam/
│       └── model.pth
│
├── Swin_LiteMedSAM/             # MedSAM model architecture
│   ├── models/
│   │   ├── mask_decoder.py
│   │   ├── prompt_encoder.py
│   │   ├── swin.py
│   │   └── transformer.py
│   └── ...
│
├── cache/                       # Persistent cache storage
│   └── global_context/
│
├── vector_db/                   # Vector database for RAG
├── static/                      # Static files
├── uploads/                     # File uploads
└── frontend/                    # Frontend application (if applicable)
```
Create a .env file with the following variables:
```env
# DICOM Data Configuration
DICOM_DATA_ROOT=/path/to/dicom/data

# Google Cloud Storage (Optional)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GCS_BUCKET_NAME=your-bucket-name

# Cache Settings
CACHE_TTL_HOURS=168
MAX_CACHE_SIZE_MB=500

# API Configuration
API_HOST=0.0.0.0
API_PORT=6500
LOG_LEVEL=INFO
```

For development, run the server directly or through uvicorn with auto-reload:

```bash
# Run with auto-reload
python testing.py

# Or with uvicorn directly
uvicorn testing:app --host 0.0.0.0 --port 6500 --reload
```

For production, run uvicorn with multiple workers:

```bash
# Run with optimized settings
uvicorn testing:app --host 0.0.0.0 --port 6500 --workers 4
```

To run in Docker, use a Dockerfile like this:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 6500
CMD ["python", "testing.py"]
```

```bash
# Build and run
docker build -t orion-medical-ai .
docker run -p 6500:6500 orion-medical-ai
```

Once the server is running, access the interactive API documentation:
- Swagger UI: http://localhost:6500/docs
- ReDoc: http://localhost:6500/redoc
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/health` | GET | System health check |
| `/api/scan-directory` | GET | Scan for DICOM series |
| `/api/load-series` | POST | Load DICOM series |
| `/api/analyze-roi` | POST | Analyze region of interest |
| `/api/generate-global-context` | POST | Generate anatomical context |
| `/api/3d/generate-mesh/{series_uid}` | POST | Generate 3D mesh |
```python
import requests

# Health check
response = requests.get("http://localhost:6500/api/health")
print(response.json())

# Scan for DICOM series
response = requests.get("http://localhost:6500/api/scan-directory?dir_path=/path/to/dicom")
series = response.json()

# Load a series
payload = {"series_uid": "1.2.3.4.5", "files": ["file1.dcm", "file2.dcm"]}
response = requests.post("http://localhost:6500/api/load-series", json=payload)
```

To run the test suite:

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=modules tests/

# Run specific test file
pytest tests/test_models.py
```

```bash
# Test API endpoints
pytest tests/test_api.py

# Test AI models
pytest tests/test_ai_core.py
```

Monitoring:

- Health Check: `/api/health`
- Metrics: `/api/metrics`
- System Stats: Real-time memory, CPU, and GPU monitoring
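These endpoints can also be polled from the command line (default host and port assumed):

```bash
# Fetch current runtime metrics (memory, CPU, GPU)
curl http://localhost:6500/api/metrics

# Or refresh them every 5 seconds
watch -n 5 'curl -s http://localhost:6500/api/metrics'
```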
Logs are configured to output to stdout/stderr for Docker compatibility:
```env
# Log levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
LOG_LEVEL=INFO  # Set in .env
```

The system tracks:
- Request processing times
- Memory usage patterns
- Cache hit/miss ratios
- AI model inference times
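Relating to the logging setup above, here is a minimal sketch of stdout logging driven by the `LOG_LEVEL` variable, assuming the standard library logging module; ORION's actual wiring in testing.py may differ:

```python
import logging
import os
import sys

# Read the level from the LOG_LEVEL environment variable (default INFO)
level_name = os.getenv("LOG_LEVEL", "INFO").upper()

# Send all log records to stdout so Docker can capture them
logging.basicConfig(
    stream=sys.stdout,
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)

logging.getLogger("orion").info("Logging initialized at level %s", level_name)
```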
MedSAM segmentation model:

- Purpose: Medical image segmentation
- Input: Medical images + point/box prompts
- Output: Segmentation masks
- Location: `modules/ai_core.py:LocalMedSAM`
- Model Download: Download the Swin_LiteMedSAM model from https://github.com/RuochenGao/Swin_LiteMedSAM
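A quick way to confirm the downloaded weights are in place and loadable before starting the server; this is a verification sketch using the path from the project layout above, not ORION's own model-loading code:

```python
from pathlib import Path

import torch

# Expected location from the project structure above
ckpt_path = Path("ai_models/Swin_medsam/model.pth")
if not ckpt_path.exists():
    raise SystemExit(f"Model weights not found at {ckpt_path}")

# map_location lets this run on machines without a GPU
state = torch.load(ckpt_path, map_location="cpu")
print("Checkpoint loaded, object type:", type(state).__name__)

# MedSAM inference uses the GPU when available
print("CUDA available:", torch.cuda.is_available())
```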
ROI Analyzer:

- Purpose: Anatomical content analysis
- Input: Region of interest images
- Output: Anatomical classification
- Location: `modules/ai_core.py:LocalROIAnalyzer`
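The analyzer can be exercised through the `/api/analyze-roi` endpoint rather than imported directly. The payload below is a hypothetical illustration; check the interactive docs at `/docs` for the actual request schema:

```python
import requests

# Hypothetical ROI request body; field names are illustrative only
payload = {
    "series_uid": "1.2.3.4.5",
    "slice_index": 42,
    "roi": {"x": 100, "y": 120, "width": 64, "height": 64},
}

response = requests.post("http://localhost:6500/api/analyze-roi", json=payload)
print(response.json())  # Expected to include the anatomical classification
```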
The caching system has three layers:

- Memory Cache: Fast access for active sessions
- Disk Cache: Persistent storage for global context
- Vector Database: Semantic search for anatomical data
Storage locations:

- Local Cache: `cache/global_context/`
- Vector DB: `vector_db/`
- AI Models: `ai_models/`
- DICOM Data: Configurable via `DICOM_DATA_ROOT`
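Two housekeeping commands that follow from these paths and from the cache-clearing endpoint used in the troubleshooting commands below:

```bash
# See how much disk the persistent cache and vector store are using
du -sh cache/global_context/ vector_db/

# Clear the cached global context through the API
curl -X DELETE http://localhost:6500/api/cache/global-context/clear
```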
Security notes:

- DICOM data is processed locally by default
- No sensitive data in logs
- Service account keys should be properly secured
- API rate limiting recommended for production
Troubleshooting common issues:

- AI Models Not Loading

  ```bash
  # Check model files exist
  ls -la ai_models/Swin_medsam/

  # Check CUDA availability
  python -c "import torch; print(torch.cuda.is_available())"
  ```

- DICOM Files Not Found

  ```bash
  # Check DICOM_DATA_ROOT setting
  echo $DICOM_DATA_ROOT

  # Verify directory permissions
  ls -la /path/to/dicom/data
  ```

- Memory Issues

  ```bash
  # Monitor memory usage
  watch -n 1 'free -h'

  # Clear caches
  curl -X DELETE http://localhost:6500/api/cache/global-context/clear
  ```
Performance tuning tips:

- Enable GPU: Install CUDA-compatible PyTorch
- Increase Cache Size: Adjust `MAX_CACHE_SIZE_MB`
- Use SSD Storage: For better I/O performance
- Scale Horizontally: Deploy multiple instances
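For the GPU tip, a CUDA-enabled PyTorch install typically looks like the following; confirm the exact command for your CUDA version on pytorch.org:

```bash
# Example: CUDA 12.1 wheels (adjust the index URL to your CUDA version)
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Verify that PyTorch can see the GPU
python -c "import torch; print(torch.cuda.is_available())"
```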
To contribute:

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Create an issue in the repository
- Check the API documentation at `/docs`
- Review the logs for error details
ORION Medical AI System - Advancing medical imaging through AI innovation 🏥✨