Abstract
Generating continuous f0 annotations for tasks such as melody extraction and multiple f0 estimation typically involves running a monophonic pitch tracker on each track of a multitrack recording and manually correcting any estimation errors. This process is labor intensive and time consuming, and consequently existing annotated datasets are very limited in size. In this paper we propose a framework for automatically generating continuous f0 annotations without requiring manual refinement: the estimate of a pitch tracker is used to drive an analysis/synthesis pipeline which produces a synthesized version of the track. Any estimation errors are now reflected in the synthesized audio, meaning the tracker's output represents an accurate annotation. Analysis is performed using a wide-band harmonic sinusoidal modeling algorithm which estimates the frequency, amplitude and phase of every harmonic, meaning the synthesized track closely resembles the original in terms of timbre and dynamics. Finally the synthesized track is automatically mixed back into the multitrack. The framework can be used to annotate multitrack datasets for training learning-based algorithms. Furthermore, we show that algorithms evaluated on the automatically generated/annotated mixes produce results that are statistically indistinguishable from those they produce on the original, manually annotated, mixes. We release a software library implementing the proposed framework, along with new datasets for melody, bass and multiple f0 estimation.
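The core idea of the abstract — resynthesize each track as a sum of harmonics whose frequencies are driven by the pitch tracker's f0 estimate, so the estimate is correct for the synthesized audio by construction — can be caricatured in a few lines of additive synthesis. This is a minimal sketch, not the paper's wide-band harmonic sinusoidal model (which also estimates per-harmonic phase); the function name `resynthesize` and its parameters are illustrative assumptions.

```python
import numpy as np

def resynthesize(f0, amps, sr=44100, hop=256):
    """Additive resynthesis driven by a frame-wise f0 contour.

    f0:   (n_frames,) fundamental frequency per frame, in Hz
    amps: (n_frames, n_harmonics) per-harmonic amplitudes from analysis
    Returns a mono signal whose pitch follows f0 exactly, so the f0
    contour is, by construction, a correct annotation of the output.
    """
    n_frames, n_harm = amps.shape
    n_samples = n_frames * hop
    # Upsample frame-wise values to the sample rate by linear interpolation
    t_frames = np.arange(n_frames) * hop
    t = np.arange(n_samples)
    f0_s = np.interp(t, t_frames, f0)
    out = np.zeros(n_samples)
    for h in range(1, n_harm + 1):
        a_s = np.interp(t, t_frames, amps[:, h - 1])
        # Integrate instantaneous frequency of harmonic h to obtain phase
        phase = 2 * np.pi * np.cumsum(h * f0_s) / sr
        out += a_s * np.sin(phase)
    return out
```

Because the phase of every harmonic is integrated from the same f0 contour the tracker produced, any tracking errors simply change what the synthesized track sounds like — the annotation and the audio cannot disagree, which is the property the framework exploits.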
| Field | Value |
|---|---|
| Original language | English (US) |
| Title of host publication | Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR 2017 |
| Editors | Zhiyao Duan, Douglas Turnbull, Xiao Hu, Sally Jo Cunningham |
| Publisher | International Society for Music Information Retrieval |
| Pages | 71-78 |
| Number of pages | 8 |
| ISBN (Electronic) | 9789811151798 |
| State | Published - Jan 1 2017 |
| Event | 18th International Society for Music Information Retrieval Conference, ISMIR 2017, Suzhou, China (Oct 23 2017 → Oct 27 2017) |
Publication series
| Field | Value |
|---|---|
| Name | Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR 2017 |
Conference
| Field | Value |
|---|---|
| Conference | 18th International Society for Music Information Retrieval Conference, ISMIR 2017 |
| Country | China |
| City | Suzhou |
| Period | 10/23/17 → 10/27/17 |
ASJC Scopus subject areas
- Music
- Information Systems
Cite this
Salamon, J., Bittner, R. M., Bonada, J., Bosch, J. J., Gómez, E., & Bello, J. (2017). An analysis/synthesis framework for automatic f0 annotation of multitrack datasets. In Z. Duan, D. Turnbull, X. Hu, & S. J. Cunningham (Eds.), Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR 2017 (pp. 71-78). International Society for Music Information Retrieval.