Datasets and tasks¶
The following is a list of supported datasets, sorted by task. If you're more interested in the corresponding PyTorch Dataset classes, see this page.
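Most of the separation datasets below yield items following the same convention: a mixture waveform plus a tensor of reference sources. As a minimal, self-contained illustration (a toy dataset with random signals, not any real one), the pattern looks like this:

    # Illustrative only: a toy Dataset mimicking the (mixture, sources) item
    # convention that the separation datasets below typically follow.
    import torch
    from torch.utils.data import Dataset, DataLoader

    class ToySepDataset(Dataset):
        """Yields (mixture, sources) pairs of random audio-shaped tensors."""

        def __init__(self, n_items=8, n_src=2, n_samples=8000):
            self.n_items, self.n_src, self.n_samples = n_items, n_src, n_samples

        def __len__(self):
            return self.n_items

        def __getitem__(self, idx):
            sources = torch.randn(self.n_src, self.n_samples)  # (n_src, time)
            mixture = sources.sum(dim=0)                       # (time,)
            return mixture, sources

    loader = DataLoader(ToySepDataset(), batch_size=4, shuffle=True)
    mix, sources = next(iter(loader))
    print(mix.shape, sources.shape)  # [4, 8000] and [4, 2, 8000]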
Speech separation¶
wsj0-2mix dataset¶
wsj0-2mix is a single-channel speech separation dataset based on WSJ0. The three-speaker extension (wsj0-3mix) is also supported here.
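As a rough sketch of how it is typically consumed, the snippet below assumes a Wsj0mixDataset class in asteroid's data module and a pre-generated metadata directory; the import path, argument names and paths are assumptions, so check the dataset page for the exact signature:

    # Sketch, not a verified API: Wsj0mixDataset, its import path and its
    # arguments are assumed; adjust to the actual class and signature.
    from asteroid.data import Wsj0mixDataset  # assumed import path

    train_set = Wsj0mixDataset(
        "data/wav8k/min/tr",  # hypothetical path to the generated metadata
        n_src=2,              # 2 for wsj0-2mix, 3 for wsj0-3mix
        sample_rate=8000,
        segment=4.0,          # crop length in seconds
    )
    mix, sources = train_set[0]  # assumed (mixture, sources) convention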
References
@inproceedings{Hershey_2016,
title={Deep clustering: Discriminative embeddings for segmentation and separation},
ISBN={9781479999880},
url={http://dx.doi.org/10.1109/ICASSP.2016.7471631},
DOI={10.1109/icassp.2016.7471631},
booktitle={2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
publisher={IEEE},
author={Hershey, John R. and Chen, Zhuo and Le Roux, Jonathan and Watanabe, Shinji},
year={2016},
}
WHAM dataset¶
WHAM! is a single-channel speech separation dataset based on WSJ0. It is a noisy extension of wsj0-2mix.
More info here.
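A hedged sketch of loading WHAM! through asteroid's WhamDataset follows; the task names match the WHAM! conditions, but treat the exact class signature and paths as assumptions to verify:

    # Sketch assuming asteroid's WhamDataset; verify the signature on the
    # dataset page before relying on it.
    from asteroid.data import WhamDataset

    # The task argument is assumed to select the WHAM! condition:
    #   "enh_single" / "enh_both" for enhancement,
    #   "sep_clean" / "sep_noisy" for separation without / with noise.
    train_set = WhamDataset(
        "data/wav8k/min/tr",  # hypothetical metadata directory
        task="sep_noisy",
        sample_rate=8000,
        segment=4.0,
    )
    mix, sources = train_set[0]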
References
@inproceedings{WHAMWichern2019,
author={Gordon Wichern and Joe Antognini and Michael Flynn and Licheng Richard Zhu and Emmett McQuinn and Dwight Crow and Ethan Manilow and Jonathan Le Roux},
title={{WHAM!: extending speech separation to noisy environments}},
year=2019,
booktitle={Proc. Interspeech},
pages={1368--1372},
doi={10.21437/Interspeech.2019-2821},
url={http://dx.doi.org/10.21437/Interspeech.2019-2821}
}
WHAMR dataset¶
WHAMR! is a noisy and reverberant single-channel speech separation dataset based on WSJ0. It is a reverberant extension of WHAM!.
Note that WHAMR! can synthesize binaural recordings, but we only consider the single channel for now.
More info here.
References
@misc{maciejewski2019whamr,
title={WHAMR!: Noisy and Reverberant Single-Channel Speech Separation},
author={Matthew Maciejewski and Gordon Wichern and Emmett McQuinn and Jonathan Le Roux},
year={2019},
eprint={1910.10279},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
LibriMix dataset¶
The LibriMix dataset is an open-source dataset derived from the LibriSpeech corpus. It is meant as an alternative and complement to WHAM!.
More info here.
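As a sketch, here is how LibriMix is typically instantiated with asteroid's LibriMix class and wrapped in a standard DataLoader; the metadata path and exact argument names are assumptions to double-check:

    # Sketch assuming asteroid's LibriMix class and a pre-generated
    # metadata directory (hypothetical path below).
    from torch.utils.data import DataLoader
    from asteroid.data import LibriMix

    train_set = LibriMix(
        "data/Libri2Mix/wav8k/min/metadata",  # hypothetical metadata dir
        task="sep_clean",  # "sep_noisy" and enhancement tasks also exist
        sample_rate=8000,
        n_src=2,
        segment=3,         # crop length in seconds
    )
    train_loader = DataLoader(train_set, batch_size=8, shuffle=True, drop_last=True)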
References
@misc{cosentino2020librimix,
title={LibriMix: An Open-Source Dataset for Generalizable Speech Separation},
author={Joris Cosentino and Manuel Pariente and Samuele Cornell and Antoine Deleforge and Emmanuel Vincent},
year={2020},
eprint={2005.11262},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
Kinect-WSJ dataset¶
Kinect-WSJ is a reverberated, noisy version of the WSJ0-2MIX dataset. Microphones are placed on a linear array, with spacing between the devices resembling that of the Microsoft Kinect™, the device used to record the CHiME-5 dataset. This makes it possible to reuse the real ambient noise captured as part of the CHiME-5 dataset. The room impulse responses (RIRs) were simulated at a sampling rate of 16,000 Hz.
Requirements
wsj_path: Path to the precomputed wsj0-2mix dataset. Should contain the folder 2speakers/wav16k/. If you don't have the wsj0-mix dataset, please create it using the scripts in egs/wsj0_mix.
chime_path: Path to the CHiME-5 dataset. Should contain the folders train, dev and eval.
dihard_path: Path to the DIHARD labels. Should contain *.lab files for the train and dev sets.
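Since data preparation fails late when one of these paths is wrong, a small sanity check up front can help; this is an illustrative helper, not part of the recipe:

    # Illustrative sanity check for the Kinect-WSJ requirements listed above.
    from pathlib import Path

    def check_kinect_wsj_paths(wsj_path, chime_path, dihard_path):
        problems = []
        if not (Path(wsj_path) / "2speakers" / "wav16k").is_dir():
            problems.append(f"{wsj_path} is missing 2speakers/wav16k/")
        for split in ("train", "dev", "eval"):
            if not (Path(chime_path) / split).is_dir():
                problems.append(f"{chime_path} is missing the {split}/ folder")
        dihard = Path(dihard_path)
        if not dihard.is_dir() or not list(dihard.rglob("*.lab")):
            problems.append(f"no *.lab files found under {dihard_path}")
        return problems

    for issue in check_kinect_wsj_paths("wsj0-mix", "CHiME5", "dihard_labels"):
        print("WARNING:", issue)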
References
Original repo
@inproceedings{sivasankaran2020,
booktitle = {2020 28th {{European Signal Processing Conference}} ({{EUSIPCO}})},
title={Analyzing the impact of speaker localization errors on speech separation for automatic speech recognition},
author={Sunit Sivasankaran and Emmanuel Vincent and Dominique Fohr},
year={2021},
month = jan,
}
SMS_WSJ dataset¶
SMS_WSJ (Spatialized Multi-Speaker Wall Street Journal) is a multichannel source separation dataset based on WSJ0 and WSJ1.
All the information regarding the dataset can be found in this repo.
References
If you use this dataset, please cite the corresponding paper as follows:
@Article{SmsWsj19,
author = {Drude, Lukas and Heitkaemper, Jens and Boeddeker, Christoph and Haeb-Umbach, Reinhold},
title = {{SMS-WSJ}: Database, performance measures, and baseline recipe for multi-channel source separation and recognition},
journal = {arXiv preprint arXiv:1910.13934},
year = {2019},
}
Speech enhancement¶
DNS Challenge’s dataset¶
The Deep Noise Suppression (DNS) Challenge is a single-channel speech enhancement challenge organized by Microsoft, with a focus on real-time applications. More info can be found on the official page.
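For a rough idea of how the synthesized training pairs are consumed, here is a generic noisy/clean loader; the parallel noisy/ and clean/ folders and the trailing fileid token in the filenames are assumptions about the DNS layout, so adapt the matching to what the synthesizer actually writes:

    # Generic enhancement-pair loader; the fileid-based filename matching is
    # an assumption about the DNS layout, not a documented convention.
    from pathlib import Path

    import soundfile as sf
    import torch
    from torch.utils.data import Dataset

    class NoisyCleanDataset(Dataset):
        def __init__(self, noisy_dir, clean_dir):
            # Index clean files by a trailing fileid token, e.g. "..._fileid_42.wav".
            clean = {p.stem.split("fileid_")[-1]: p
                     for p in Path(clean_dir).glob("*.wav")}
            self.pairs = [(p, clean[p.stem.split("fileid_")[-1]])
                          for p in sorted(Path(noisy_dir).glob("*.wav"))
                          if p.stem.split("fileid_")[-1] in clean]

        def __len__(self):
            return len(self.pairs)

        def __getitem__(self, idx):
            noisy_path, clean_path = self.pairs[idx]
            noisy, _ = sf.read(str(noisy_path), dtype="float32")
            clean, _ = sf.read(str(clean_path), dtype="float32")
            return torch.from_numpy(noisy), torch.from_numpy(clean)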
References
The challenge paper, here.
@misc{DNSChallenge2020,
title={The INTERSPEECH 2020 Deep Noise Suppression Challenge: Datasets, Subjective Speech Quality and Testing Framework},
author={Chandan K. A. Reddy and Ebrahim Beyrami and Harishchandra Dubey and Vishak Gopal and Roger Cheng and Ross Cutler and Sergiy Matusevych and Robert Aichner and Ashkan Aazami and Sebastian Braun and Puneet Rana and Sriram Srinivasan and Johannes Gehrke},
year={2020},
eprint={2001.08662},
archivePrefix={arXiv},
}
The baseline paper, here.
@misc{xia2020weighted,
title={Weighted Speech Distortion Losses for Neural-network-based Real-time Speech Enhancement},
author={Yangyang Xia and Sebastian Braun and Chandan K. A. Reddy and Harishchandra Dubey and Ross Cutler and Ivan Tashev},
year={2020},
eprint={2001.10601},
archivePrefix={arXiv},
}
Music source separation¶
MUSDB18 Dataset¶
MUSDB18 is a dataset of 150 full-length music tracks (~10 h total duration) of different genres, along with their isolated drums, bass, vocals and other stems.
More info here.
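The dataset is commonly read through the musdb Python package (pip install musdb), which handles the STEMS decoding; a short sketch (download=True fetches the 7-second preview version for quick tests):

    # Sketch using the musdb package; decoding the full STEMS files also
    # requires ffmpeg to be installed.
    import musdb

    mus = musdb.DB(download=True, subsets="train")  # or root="path/to/musdb18"
    track = mus.tracks[0]
    mixture = track.audio                   # (n_samples, 2) stereo mixture
    vocals = track.targets["vocals"].audio  # the matching vocals stem
    print(track.name, track.rate, mixture.shape, vocals.shape)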
DAMP-VSEP dataset¶
All the information regarding the dataset can be found on Zenodo.
References
If you use this dataset, please cite it as follows:
@dataset{smule_inc_2019_3553059,
author = {Smule, Inc},
title = {{DAMP-VSEP: Smule Digital Archive of Mobile
Performances - Vocal Separation}},
month = oct,
year = 2019,
publisher = {Zenodo},
version = {1.0.1},
doi = {10.5281/zenodo.3553059},
url = {https://doi.org/10.5281/zenodo.3553059}
}
Environmental sound separation¶
FUSS dataset¶
The Free Universal Sound Separation (FUSS) dataset comprises audio mixtures of arbitrary sounds with source references for use in experiments on arbitrary sound separation.
All the information related to this dataset can be found in this repo.
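A hedged sketch of reading the example lists that ship with FUSS; the tab-separated "mixture<TAB>source1<TAB>..." layout of the *_example_list.txt files is an assumption here, so verify it against the repo:

    # Sketch: parse a FUSS example list, assuming tab-separated lines of
    # "mixture.wav<TAB>source1.wav<TAB>..." with a variable source count.
    from pathlib import Path

    def read_fuss_list(list_path, root):
        examples = []
        for line in Path(list_path).read_text().splitlines():
            paths = [Path(root) / p for p in line.strip().split("\t")]
            examples.append({"mixture": paths[0], "sources": paths[1:]})
        return examples

    # Hypothetical file and directory names:
    examples = read_fuss_list("train_example_list.txt", "ssdata")
    print(len(examples), "examples,", len(examples[0]["sources"]), "sources in the first")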
References
If you use this dataset, please cite the corresponding paper as follows:
@Article{Wisdom2020,
author = {Scott Wisdom and Hakan Erdogan and Daniel P. W. Ellis and Romain Serizel and Nicolas Turpault and Eduardo Fonseca and Justin Salamon and Prem Seetharaman and John R. Hershey},
title = {What's All the FUSS About Free Universal Sound Separation Data?},
journal = {in preparation},
year = {2020},
}
Audio-visual source separation¶
AVSpeech dataset¶
AVSpeech is an audio-visual speech separation dataset introduced by Google in the article Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation.
More info here.
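AVSpeech is distributed as CSV files describing YouTube segments rather than as audio; as a hedged sketch (the assumed column order is youtube_id, start_sec, end_sec, face_x, face_y, to be verified against the official page), each row can be parsed roughly like this:

    # Sketch: parse an AVSpeech segment CSV; the column order is an
    # assumption, verify it against the official documentation.
    import csv

    def read_avspeech_csv(path):
        segments = []
        with open(path, newline="") as f:
            for yt_id, start, end, x, y in csv.reader(f):
                segments.append({
                    "youtube_id": yt_id,
                    "start": float(start),
                    "end": float(end),
                    "face_center": (float(x), float(y)),  # normalized coords
                })
        return segments

    segments = read_avspeech_csv("avspeech_train.csv")  # hypothetical filename
    print(len(segments), segments[0])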
References
@article{Ephrat_2018,
title={Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation},
volume={37},
url={http://dx.doi.org/10.1145/3197517.3201357},
DOI={10.1145/3197517.3201357},
journal={ACM Transactions on Graphics},
publisher={Association for Computing Machinery (ACM)},
author={Ephrat, Ariel and Mosseri, Inbar and Lang, Oran and Dekel, Tali and Wilson, Kevin and Hassidim, Avinatan and Freeman, William T. and Rubinstein, Michael},
year={2018},
pages={1--11}
}