Implementation of audio, image, and spectrogram augmentation techniques provided by the librosa, Keras, and audiomentations libraries.
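A minimal sketch of the waveform side of such an augmentation pipeline, assuming a mono clip loaded with librosa; the file path, sample rate, and augmentation parameters below are illustrative placeholders, not values from the repository:

```python
import librosa
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift

# Load a mono waveform (path and target sample rate are placeholders)
samples, sr = librosa.load("example.wav", sr=22050)

# Chain a few common waveform augmentations; each one fires with probability p
augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
])

augmented = augment(samples=samples, sample_rate=sr)
```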
The code implements the deep CNN model described in Salamon and Bello's paper for environmental sound classification on the UrbanSound8K dataset.
Replication of the paper "Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification" by Salamon & Bello.
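A minimal Keras sketch of a compact CNN of the kind the two entries above describe, operating on log-mel spectrogram patches. The input shape, filter counts, and pooling sizes loosely follow the SB-CNN layout reported in the paper and should be checked against the original before use:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # UrbanSound8K has 10 classes

# Input: 128 mel bands x 128 time frames, single channel (assumed patch size)
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(24, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=(4, 2)),
    layers.Conv2D(48, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=(4, 2)),
    layers.Conv2D(48, (5, 5), activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```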
CNN-RNN multi-branch architecture for environmental sound classification.
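One way such a multi-branch model can be wired up with the Keras functional API: a convolutional branch and a recurrent branch read the same spectrogram and are merged before classification. The input shape, layer sizes, and class count here are illustrative assumptions, not the repository's actual configuration:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 50  # e.g. ESC-50; adjust to the target dataset

# Shared spectrogram input: (mel bands, time frames, channels) -- assumed shape
spec_in = layers.Input(shape=(128, 128, 1))

# CNN branch: local time-frequency patterns
c = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(spec_in)
c = layers.MaxPooling2D((2, 2))(c)
c = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(c)
c = layers.GlobalAveragePooling2D()(c)

# RNN branch: treat each time frame as a 128-dimensional mel vector
r = layers.Permute((2, 1, 3))(spec_in)   # (frames, mels, channels)
r = layers.Reshape((128, 128))(r)        # (frames, mels)
r = layers.Bidirectional(layers.GRU(64))(r)

# Merge the branches and classify
merged = layers.Concatenate()([c, r])
out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = models.Model(spec_in, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```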
This repository contains an implementation of environmental sound classification on the ESC-50 dataset using ACDNet.
This project explores various approaches for audio classification using neural networks with TensorFlow and Keras. The notebook demonstrates the complete process from data loading and preprocessing to model building, training, evaluation, and inference.
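A compressed sketch of that kind of end-to-end flow, from feature extraction through training, evaluation, and inference. The `extract_log_mel` helper and the `X_train`/`y_train`/`X_test`/`y_test` arrays are hypothetical stand-ins, not names from the notebook:

```python
import numpy as np
import librosa

def extract_log_mel(path, sr=22050, n_mels=128):
    """Hypothetical helper: load a clip and return a log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Assuming X_train/y_train and X_test/y_test have been built from such features
# and `model` is any compiled Keras classifier (e.g. the CNN sketched above):
# model.fit(X_train, y_train, validation_split=0.2, epochs=50, batch_size=32)
# loss, acc = model.evaluate(X_test, y_test)
# probs = model.predict(X_test[:1])        # inference on a single example
# predicted_class = int(np.argmax(probs))
```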
The aim of this project was to design and implement a Flask web application for classifying environmental sounds using a convolutional neural network architecture.
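A minimal sketch of what such a Flask inference endpoint can look like; the model path, label list, feature settings, and temporary file path are placeholders and are not taken from the project:

```python
import numpy as np
import librosa
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("model.h5")  # placeholder path to a trained Keras classifier
LABELS = ["air_conditioner", "car_horn", "children_playing"]  # placeholder labels

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an audio file in the multipart form field "file"
    uploaded = request.files["file"]
    uploaded.save("/tmp/upload.wav")

    # Turn the clip into the same kind of features the model was trained on
    # (padding/cropping to the model's expected input size is omitted here)
    y, sr = librosa.load("/tmp/upload.wav", sr=22050)
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128), ref=np.max)
    batch = mel[np.newaxis, ..., np.newaxis]  # add batch and channel dimensions

    probs = model.predict(batch)[0]
    return jsonify({"label": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```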
AI system that analyzes urban soundscapes to optimize city planning, reduce noise pollution, and enhance acoustic environments using audio deep learning.
This is the English translation of our article originally published in Turkish. Turkish version: https://www.set-science.com/manage/uploads/ISAS2022_0088/SETSCI_ISAS2022_0088_0011.pdf
🔊 Analyze urban soundscapes and monitor noise pollution with AI to enhance city planning and create acoustically balanced environments.