- Identify the song
- Detect the mood of the song using Machine Learning
- Recommend similar songs
Music platforms like Shazam and Spotify analyze audio signals to recognize songs and suggest similar music.
- User uploads an audio clip (5–15 seconds)
- Backend saves and processes the audio
- Song is identified using a music recognition API
- Audio features are extracted
- ML model predicts the mood
- Similar songs are recommended
- Results are displayed on the website
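As a rough illustration of this flow, the upload step on the backend might look like the sketch below. This is a minimal sketch, assuming a FastAPI app with a hypothetical `/upload` endpoint that stores the clip in `uploads/`; the identification, mood, and recommendation steps are only referenced in comments and are covered in the sections that follow.

```python
# Minimal upload sketch (endpoint name and file layout are illustrative).
import uuid
from pathlib import Path

from fastapi import FastAPI, File, UploadFile

app = FastAPI()
UPLOAD_DIR = Path("uploads")
UPLOAD_DIR.mkdir(exist_ok=True)

@app.post("/upload")
async def upload_audio(file: UploadFile = File(...)):
    """Save the uploaded clip so the later pipeline steps can process it."""
    dest = UPLOAD_DIR / f"{uuid.uuid4().hex}_{file.filename}"
    dest.write_bytes(await file.read())

    # Later steps (see the sections below): identify the song, extract
    # features, predict the mood, and fetch similar-song recommendations.
    return {"status": "uploaded", "path": str(dest)}
```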
- ⚛️ React 18.3 - Modern UI framework
- 📘 TypeScript 5.5 - Type-safe development
- 🎨 Tailwind CSS - Utility-first styling
- 🎭 Framer Motion - Smooth animations
- 🚀 FastAPI - High-performance Python framework
- 🐘 PostgreSQL - Robust relational database
- 🔥 Supabase - Real-time database & auth
- 🎼 Librosa – audio feature extraction
- 📊 NumPy – numerical operations
- 🎲 Scikit-learn – classical ML training & prediction
- 🤖 Pre-trained classical ML models
- 🎙️ ACRCloud / Shazam API – song identification
- 🎧 Spotify Web API – recommendations
This project intentionally avoids deep learning to keep it beginner-friendly.
Using Librosa, we extract numerical features from the audio clip:
- Tempo (BPM)
- Zero Crossing Rate
- Spectral Centroid
- Spectral Bandwidth
- RMS Energy
- MFCCs (Mel-Frequency Cepstral Coefficients)
These features convert audio into a numerical vector, which ML models can understand.
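A feature-extraction helper might look like the minimal sketch below, assuming Librosa and NumPy are installed; the function name `extract_features` is illustrative, not an existing module in the project.

```python
import librosa
import numpy as np

def extract_features(path: str) -> np.ndarray:
    """Turn an audio clip into a fixed-length numerical feature vector."""
    y, sr = librosa.load(path, duration=15)              # load up to 15 s of audio

    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)       # tempo (BPM)
    zcr = librosa.feature.zero_crossing_rate(y)          # zero crossing rate per frame
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)
    rms = librosa.feature.rms(y=y)                       # RMS energy per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 MFCCs per frame

    # Average frame-level features over time so every clip yields a vector
    # of the same length, regardless of clip duration.
    return np.hstack([
        tempo,
        zcr.mean(),
        centroid.mean(),
        bandwidth.mean(),
        rms.mean(),
        mfcc.mean(axis=1),
    ])
```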
Classify a song into one of four moods:
- Happy
- Sad
- Calm
- Energetic
You can start with any simple scikit-learn classifier (for example, logistic regression, k-nearest neighbors, or a random forest). These models are:
- Easy to understand
- Fast to train
- A solid baseline to improve on
- Dataset: Pre-labeled music datasets (e.g., GTZAN, Free Music Archive)
- Labels: Mood category
- Train-test split: 80/20
- Evaluation metrics:
  - Accuracy
  - Confusion matrix
The trained model is saved using joblib or pickle.
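Training then follows the standard scikit-learn pattern. The sketch below assumes a random forest baseline and that `features` and `labels` have already been built from a pre-labeled dataset using the `extract_features` helper above; names such as `train_mood_model` are our own.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

def train_mood_model(features: np.ndarray, labels: np.ndarray,
                     out_path: str = "ml_model.pkl") -> RandomForestClassifier:
    """Train a baseline mood classifier on pre-extracted feature vectors."""
    # 80/20 train-test split, stratified so every mood appears in both sets.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=42, stratify=labels
    )

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Evaluate on the held-out 20%.
    preds = model.predict(X_test)
    print("Accuracy:", accuracy_score(y_test, preds))
    print(confusion_matrix(y_test, preds))

    joblib.dump(model, out_path)   # the backend loads this file at prediction time
    return model
```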
- Audio uploaded by user
- Features extracted
- Features passed to trained ML model
- Model outputs predicted mood
- Mood shown on frontend
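In code, the prediction step can be just a few lines. This sketch assumes the hypothetical `extract_features` helper from the feature-extraction section is importable and that the trained model was saved as `ml_model.pkl`:

```python
import joblib
import numpy as np

model = joblib.load("ml_model.pkl")   # trained model from the previous section

def predict_mood(audio_path: str) -> str:
    """Extract features from an uploaded clip and return the predicted mood label."""
    vector = extract_features(audio_path)         # helper from the feature-extraction sketch
    return model.predict(np.array([vector]))[0]   # scikit-learn expects a 2-D array
```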
- Use Spotify Audio Features:
- Valence
- Energy
- Tempo
- Recommend songs with similar values
- Use Cosine Similarity
- Compare feature vectors of songs
- Recommend top N similar songs
📌 Beginners can start with API-based recommendations.
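For the cosine-similarity approach, a NumPy sketch is shown below; the `candidates` matrix of per-song feature vectors (e.g., valence/energy/tempo from Spotify, or the Librosa features above) is assumed to already exist, and the function name is illustrative.

```python
import numpy as np

def recommend_similar(query: np.ndarray, candidates: np.ndarray, top_n: int = 5) -> np.ndarray:
    """Return indices of the top-N candidate songs closest to the query vector."""
    # Cosine similarity = dot product of L2-normalized vectors.
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(scores)[::-1][:top_n]   # highest similarity first
```

In practice, features such as tempo and valence live on very different scales, so they should be standardized (e.g., with scikit-learn's `StandardScaler`) before computing similarities; the returned indices then map back to the stored song metadata.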
tune-trace/
│
├── frontend/ # React + TypeScript
│ ├── src/
│ │ ├── components/ # UI components
│ │ │ ├── AudioUpload.tsx
│ │ │ ├── SongResult.tsx
│ │ │ └── Recommendations.tsx
│ │ │
│ │ ├── services/ # API calls
│ │ │ └── api.ts
│ │ │
│ │ ├── App.tsx
│ │ └── main.tsx
│ │
│ └── package.json
│
├── backend/ # FastAPI backend
│ ├── main.py # API entry point
│ ├── audio.py # Audio upload & processing
│ ├── song.py # Song identification
│ ├── mood.py # Mood prediction (ML)
│ ├── recommend.py # Song recommendations
│ ├── ml_model.pkl # Trained ML model
│ └── requirements.txt
│
├── uploads/ # Temporary audio files
│
├── README.md
├── .gitignore
└── .env.example
- CNN-based audio classification
- Spectrogram-based deep learning
- User-personalized recommendations
- Multi-label mood prediction
We welcome contributions! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.