Finding anomalies in a selected region of medical images, using Swin_MedSAM and Medgsiglib with Google MedGemma models.

ORION Medical AI System

A comprehensive medical imaging analysis platform that provides AI-powered medical image segmentation, anatomical structure analysis, and 3D visualization capabilities.

πŸ₯ Overview

ORION is a modular medical AI system designed for medical research and clinical analysis workflows. It combines state-of-the-art AI models with robust medical imaging processing capabilities to provide:

  • DICOM Processing: Complete DICOM file handling and series management
  • AI Segmentation: MedSAM-powered medical image segmentation
  • Anatomical Analysis: Real-time anatomical structure detection and ROI analysis
  • 3D Visualization: Volume rendering and mesh generation from medical images
  • Cache Management: Intelligent caching for improved performance
  • RESTful API: Complete FastAPI-based backend for integration

πŸ—οΈ Architecture

System Architecture Diagram

┌─────────────────────────────────────────────────────────────────┐
│                    ORION Medical AI System                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────┐    ┌─────────────────┐    ┌──────────────┐ │
│  │   Frontend UI   │    │   API Gateway   │    │  Admin Panel │ │
│  │   (External)    │    │   (FastAPI)     │    │  (Optional)  │ │
│  └─────────────────┘    └─────────────────┘    └──────────────┘ │
│           │                       │                      │      │
│           └───────────────────────┼──────────────────────┘      │
│                                   │                             │
├───────────────────────────────────┼─────────────────────────────┤
│                                   ▼                             │
│  ┌─────────────────────────────────────────────────────────────┐ │
│  │                 Main Application (testing.py)               │ │
│  │                                                             │ │
│  │  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │ │
│  │  │   Models    │  │  AI Core    │  │ Cache/Store │          │ │
│  │  │             │  │             │  │             │          │ │
│  │  │ • Data      │  │ • MedSAM    │  │ • Disk Cache│          │ │
│  │  │   Models    │  │ • ROI       │  │ • Vector DB │          │ │
│  │  │ • Pydantic  │  │   Analyzer  │  │ • Memory    │          │ │
│  │  │   Schemas   │  │ • AI Models │  │   Cache     │          │ │
│  │  └─────────────┘  └─────────────┘  └─────────────┘          │ │
│  │                                                             │ │
│  │  ┌─────────────────────────────────────────────────────────┐ │ │
│  │  │                 Utilities Module                        │ │ │
│  │  │                                                         │ │ │
│  │  │ • DICOM Processing    • 3D Mesh Generation              │ │ │
│  │  │ • Image Analysis      • Google Cloud Integration        │ │ │
│  │  │ • ROI Calculations    • File System Operations          │ │ │
│  │  └─────────────────────────────────────────────────────────┘ │ │
│  └─────────────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│                     External Dependencies                       │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  PyTorch/CUDA   │  │  Google Cloud   │  │  File System    │  │
│  │  (AI Models)    │  │   Storage       │  │   (DICOM)       │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
└─────────────────────────────────────────────────────────────────┘

Data Flow Diagram

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   DICOM     │────▶│   Volume    │────▶│  AI Model   │
│   Input     │     │ Processing  │     │ Inference   │
└─────────────┘     └─────────────┘     └─────────────┘
                           │                   │
                           ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Results    │◀────│    Cache    │◀────│ Anatomical  │
│   Output    │     │ Management  │     │  Analysis   │
└─────────────┘     └─────────────┘     └─────────────┘
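The flow above can be sketched end to end in a few lines. Every function here is a hypothetical stand-in for the real stages in testing.py and modules/, shown only to make the stage ordering concrete:

```python
# Illustrative end-to-end sketch of the data flow above. Every function is a
# hypothetical stand-in, not the real implementation in testing.py / modules/.

def load_dicom_series(paths):
    """DICOM input -> volume: one flat 4-pixel 'slice' per file (toy data)."""
    return [[0.0, 0.0, 0.0, 0.0] for _ in paths]

def run_inference(volume, box):
    """AI model inference: mark the columns covered by a 1-D box prompt."""
    x1, x2 = box
    return [[1 if x1 <= x <= x2 else 0 for x in range(len(row))] for row in volume]

def analyze_roi(mask):
    """Anatomical analysis: report the segmented area."""
    return {"area": sum(sum(row) for row in mask)}

cache = {}

def pipeline(paths, box):
    key = (tuple(paths), box)
    if key not in cache:                    # cache management
        volume = load_dicom_series(paths)   # DICOM input -> volume processing
        mask = run_inference(volume, box)   # AI model inference
        cache[key] = analyze_roi(mask)      # anatomical analysis
    return cache[key]                       # results output

result = pipeline(["a.dcm", "b.dcm"], (1, 2))
```

A repeated call with the same series and prompt is served from the cache stage without re-running inference, which is the point of the cache arrow in the diagram.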

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • CUDA-compatible GPU (optional, for accelerated AI inference)
  • At least 8GB RAM
  • 10GB free disk space

Installation

  1. Clone the repository

    cd /path/to/your/workspace
    git clone <repository-url>
    cd ORION
  2. Create virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Set up environment variables

    cp .env.example .env
    # Edit .env with your configuration
  5. Run the application

    python testing.py

The server will start on http://localhost:6500

📁 Project Structure

ORION/
├── testing.py                 # Main application entry point
├── testing_original_backup.py # Original monolithic version
├── requirements.txt           # Python dependencies
├── .env                       # Environment configuration
├── .gitignore                 # Git ignore rules
├── README.md                  # This file
│
├── modules/                   # Modularized components
│   ├── __init__.py
│   ├── models.py              # Data models and Pydantic schemas
│   ├── ai_core.py             # AI models (MedSAM, ROI Analyzer)
│   ├── cache_storage.py       # Cache and storage management
│   └── utils.py               # Utility functions and helpers
│
├── ai_models/                 # AI model weights and configs
│   └── Swin_medsam/
│       └── model.pth
│
├── Swin_LiteMedSAM/           # MedSAM model architecture
│   ├── models/
│   │   ├── mask_decoder.py
│   │   ├── prompt_encoder.py
│   │   ├── swin.py
│   │   └── transformer.py
│   └── ...
│
├── cache/                     # Persistent cache storage
│   └── global_context/
│
├── vector_db/                 # Vector database for RAG
├── static/                    # Static files
├── uploads/                   # File uploads
└── frontend/                  # Frontend application (if applicable)

🔧 Configuration

Environment Variables

Create a .env file with the following variables:

# DICOM Data Configuration
DICOM_DATA_ROOT=/path/to/dicom/data

# Google Cloud Storage (Optional)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GCS_BUCKET_NAME=your-bucket-name

# Cache Settings
CACHE_TTL_HOURS=168
MAX_CACHE_SIZE_MB=500

# API Configuration
API_HOST=0.0.0.0
API_PORT=6500
LOG_LEVEL=INFO
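A minimal sketch of how these variables might be read at startup. The fallbacks mirror the example values above, except DICOM_DATA_ROOT, whose default here is an illustrative assumption:

```python
import os

# Sketch: read ORION's settings from the environment with fallbacks matching
# the .env example above. The DICOM_DATA_ROOT fallback is an illustrative
# assumption, since the real default is not documented.
def load_config(env=None):
    env = os.environ if env is None else env
    return {
        "dicom_data_root": env.get("DICOM_DATA_ROOT", "./uploads"),
        "cache_ttl_hours": int(env.get("CACHE_TTL_HOURS", "168")),
        "max_cache_size_mb": int(env.get("MAX_CACHE_SIZE_MB", "500")),
        "api_host": env.get("API_HOST", "0.0.0.0"),
        "api_port": int(env.get("API_PORT", "6500")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

config = load_config({"API_PORT": "8080"})  # explicit override, for illustration
```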

πŸƒβ€β™‚οΈ Running the System

Development Mode

# Run with auto-reload
python testing.py

# Or with uvicorn directly
uvicorn testing:app --host 0.0.0.0 --port 6500 --reload

Production Mode

# Run with optimized settings
uvicorn testing:app --host 0.0.0.0 --port 6500 --workers 4

Docker Deployment

Example Dockerfile:

FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
EXPOSE 6500

CMD ["python", "testing.py"]

# Build and run
docker build -t orion-medical-ai .
docker run -p 6500:6500 orion-medical-ai

📖 API Documentation

Once the server is running, access the interactive API documentation at http://localhost:6500/docs (Swagger UI) or http://localhost:6500/redoc (ReDoc).

Key Endpoints

Endpoint                             Method   Description
/api/health                          GET      System health check
/api/scan-directory                  GET      Scan for DICOM series
/api/load-series                     POST     Load DICOM series
/api/analyze-roi                     POST     Analyze region of interest
/api/generate-global-context         POST     Generate anatomical context
/api/3d/generate-mesh/{series_uid}   POST     Generate 3D mesh

Example API Usage

import requests

# Health check
response = requests.get("http://localhost:6500/api/health")
print(response.json())

# Scan for DICOM series
response = requests.get("http://localhost:6500/api/scan-directory?dir_path=/path/to/dicom")
series = response.json()

# Load a series
payload = {"series_uid": "1.2.3.4.5", "files": ["file1.dcm", "file2.dcm"]}
response = requests.post("http://localhost:6500/api/load-series", json=payload)

🧪 Testing

Unit Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=modules tests/

# Run specific test file
pytest tests/test_models.py

Integration Tests

# Test API endpoints
pytest tests/test_api.py

# Test AI models
pytest tests/test_ai_core.py

🔍 Monitoring and Logging

Health Monitoring

  • Health Check: /api/health
  • Metrics: /api/metrics
  • System Stats: Real-time memory, CPU, and GPU monitoring

Logging

Logs are configured to output to stdout/stderr for Docker compatibility:

# Log levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
LOG_LEVEL=INFO  # Set in .env
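One way to wire this up is with the standard library's logging module; the "orion" logger name below is an assumption for illustration, not taken from the codebase:

```python
import logging
import os
import sys

# Send logs to stdout so Docker can collect them, honoring the LOG_LEVEL
# environment variable described above. Unknown level names fall back to INFO.
def setup_logging():
    level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
    logging.basicConfig(
        stream=sys.stdout,
        level=getattr(logging, level_name, logging.INFO),
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    return logging.getLogger("orion")

logger = setup_logging()
logger.info("ORION logging configured")
```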

Performance Monitoring

The system tracks:

  • Request processing times
  • Memory usage patterns
  • Cache hit/miss ratios
  • AI model inference times
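A minimal tracker for these quantities might look like the following sketch (not ORION's actual implementation):

```python
import time
from collections import defaultdict

# Sketch of a metrics tracker covering the quantities listed above:
# per-operation timings and cache hit/miss ratios.
class Metrics:
    def __init__(self):
        self.timings = defaultdict(list)  # operation name -> list of durations
        self.cache_hits = 0
        self.cache_misses = 0

    def timed(self, name):
        """Decorator that records how long each call takes."""
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    self.timings[name].append(time.perf_counter() - start)
            return inner
        return wrap

    def hit_ratio(self):
        total = self.cache_hits + self.cache_misses
        return self.cache_hits / total if total else 0.0

metrics = Metrics()

@metrics.timed("inference")
def fake_inference():          # stand-in for a real model call
    return "mask"

fake_inference()
metrics.cache_hits, metrics.cache_misses = 3, 1
```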

🧠 AI Models

MedSAM (Medical Segment Anything Model)

  • Purpose: Medical image segmentation
  • Input: Medical images + point/box prompts
  • Output: Segmentation masks
  • Location: modules/ai_core.py:LocalMedSAM
  • Model Download: Download the Swin_LiteMedSAM model from https://github.com/RuochenGao/Swin_LiteMedSAM
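SAM-family models expect prompts in the coordinate frame of a fixed input resolution, so a box drawn on the original image must be rescaled before inference. A sketch, assuming the 1024×1024 input size common to SAM variants (not a documented ORION constant):

```python
# Rescale a box prompt from original image coordinates to the model's fixed
# input resolution. The 1024x1024 target is typical of SAM-family models but
# is an assumption here, not a documented ORION constant.
def rescale_box(box, orig_size, target=1024):
    x1, y1, x2, y2 = box
    height, width = orig_size
    sx, sy = target / width, target / height
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

scaled = rescale_box((64, 64, 192, 192), orig_size=(512, 512))
```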

ROI Analyzer (BiomedCLIP)

  • Purpose: Anatomical content analysis
  • Input: Region of interest images
  • Output: Anatomical classification
  • Location: modules/ai_core.py:LocalROIAnalyzer
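CLIP-style analyzers such as BiomedCLIP classify a region by comparing its image embedding against text embeddings of candidate labels via cosine similarity. A toy sketch with made-up three-dimensional embeddings (real BiomedCLIP embeddings are much larger):

```python
import math

# CLIP-style ROI classification: pick the label whose text embedding has the
# highest cosine similarity to the ROI's image embedding. The vectors here
# are made-up toy data, not real BiomedCLIP outputs.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def classify_roi(image_emb, label_embs):
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

label_embs = {"liver": [1.0, 0.1, 0.0], "kidney": [0.0, 1.0, 0.2]}
best = classify_roi([0.9, 0.2, 0.1], label_embs)
```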

💾 Data Management

Cache Strategy

  1. Memory Cache: Fast access for active sessions
  2. Disk Cache: Persistent storage for global context
  3. Vector Database: Semantic search for anatomical data
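The memory and disk tiers can be sketched as an in-memory dict in front of JSON files with a TTL; the file layout and key naming below are illustrative assumptions, not ORION's actual cache format:

```python
import json
import time
from pathlib import Path

# Sketch of a two-tier cache: memory for active sessions, JSON files on disk
# for persistence across restarts, with a TTL on the disk entries.
class TieredCache:
    def __init__(self, cache_dir, ttl_seconds):
        self.memory = {}
        self.dir = Path(cache_dir)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.ttl = ttl_seconds

    def set(self, key, value):
        self.memory[key] = value
        payload = {"stored_at": time.time(), "value": value}
        (self.dir / f"{key}.json").write_text(json.dumps(payload))

    def get(self, key):
        if key in self.memory:                       # memory tier: fast path
            return self.memory[key]
        path = self.dir / f"{key}.json"
        if path.exists():                            # disk tier: survives restarts
            payload = json.loads(path.read_text())
            if time.time() - payload["stored_at"] < self.ttl:
                self.memory[key] = payload["value"]  # re-warm the memory tier
                return payload["value"]
        return None                                  # expired or absent
```

Disk reads re-warm the memory tier, so a restarted process pays the file-read cost only once per key.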

Storage Locations

  • Local Cache: cache/global_context/
  • Vector DB: vector_db/
  • AI Models: ai_models/
  • DICOM Data: Configurable via DICOM_DATA_ROOT

🔒 Security Considerations

  • DICOM data is processed locally by default
  • No sensitive data in logs
  • Service account keys should be properly secured
  • API rate limiting recommended for production
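For the rate-limiting recommendation, a token bucket is a common starting point; this sketch is generic code, not part of ORION:

```python
import time

# Token-bucket rate limiter: tokens refill continuously at `rate` per second
# up to `capacity`; each allowed request spends one token. The injectable
# clock keeps the limiter deterministic in tests.
class TokenBucket:
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In production this would typically sit in API middleware, keyed by client IP or API key.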

🚨 Troubleshooting

Common Issues

  1. AI Models Not Loading

    # Check model files exist
    ls -la ai_models/Swin_medsam/
    
    # Check CUDA availability
    python -c "import torch; print(torch.cuda.is_available())"
  2. DICOM Files Not Found

    # Check DICOM_DATA_ROOT setting
    echo $DICOM_DATA_ROOT
    
    # Verify directory permissions
    ls -la /path/to/dicom/data
  3. Memory Issues

    # Monitor memory usage
    watch -n 1 'free -h'
    
    # Clear caches
    curl -X DELETE http://localhost:6500/api/cache/global-context/clear

Performance Optimization

  1. Enable GPU: Install CUDA-compatible PyTorch
  2. Increase Cache Size: Adjust MAX_CACHE_SIZE_MB
  3. Use SSD Storage: For better I/O performance
  4. Scale Horizontally: Deploy multiple instances

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

For support and questions:

  • Create an issue in the repository
  • Check the API documentation at /docs
  • Review the logs for error details

ORION Medical AI System - Advancing medical imaging through AI innovation 🏥✨
