Magdalena Fuentes

Hola! I'm a Provost's Postdoctoral Fellow at the Music and Audio Research Lab and the Center for Urban Science and Progress at New York University. Before that, I did my Ph.D. at Université Paris-Saclay, and my B.Eng. in Electrical Engineering at Universidad de la República, where I also worked as a research and teaching assistant at the Engineering School and the Music School.

My research interests include Human-Centered Machine Learning, Machine Listening, Self-Supervised Learning, Music Information Retrieval, Environmental Sound Analysis and Sound Source Localization.

mgfuenteslujambio [at] gmail [dot] com


Publications


MONYC: Music of New York City Dataset
M. FUENTES, D. ZHAO, V. LOSTANLEN, M. CARTWRIGHT, C. MYDLARZ, J. P. BELLO
IN Proceedings of the Workshop on Detection and Classification of Acoustic Scenes and Events, DCASE 2021

Exploring Modality-Agnostic Representations for Music Classification
H. WU, M. FUENTES, J. P. BELLO
IN Proceedings of the 18th Sound and Music Computing Conference, SMC 2021

On the use of automatic onset detection for the analysis of Maracatu de Baque Solto
J. FONSECA, M. FUENTES, F. B. BARALDI, M. E. P. DAVIES
Perspectives on Music, Sound and Musicology, Springer 2021

SONYC-UST-V2: An Urban Sound Tagging Dataset with Spatiotemporal Context
M. CARTWRIGHT, J. CRAMER, A. E. M. MENDEZ, Y. WANG, H. WU, V. LOSTANLEN, M. FUENTES, G. DOVE, C. MYDLARZ, J. SALAMON, O. NOV, J. P. BELLO
IN Proceedings of the Workshop on Detection and Classification of Acoustic Scenes and Events, DCASE 2020

Moving in Time: Computational Analysis of Microtiming in Maracatu de Baque Solto
M. E. P. DAVIES, M. FUENTES, J. FONSECA, L. ALY, M. JERÓNIMO, F. B. BARALDI
IN Proceedings of the International Society for Music Information Retrieval Conference, ISMIR 2020

Tracking Beats and Microtiming in Afro-Latin American Music Using Conditional Random Fields and Deep Learning
M. FUENTES, L. S. MAIA, M. ROCAMORA, L. W. P. BISCAINHO, H. CRAYENCOUR, S. ESSID, J. P. BELLO
IN Proceedings of the International Society for Music Information Retrieval Conference, ISMIR 2019

mirdata: Software for Reproducible Usage of Datasets
R. M. BITTNER, M. FUENTES, D. RUBINSTEIN, A. JANSSON, K. CHOI, T. KELL
IN Proceedings of the International Society for Music Information Retrieval Conference, ISMIR 2019

SAMBASET: A Dataset of Historical Samba de Enredo Recordings for Computational Music Analysis
L. S. MAIA, M. FUENTES, L. W. P. BISCAINHO, M. ROCAMORA, S. ESSID
IN Proceedings of the International Society for Music Information Retrieval Conference, ISMIR 2019

A Music Structure Informed Downbeat Tracking System Using Skip-Chain Conditional Random Fields and Deep Learning
M. FUENTES, B. McFEE, H. CRAYENCOUR, S. ESSID, J. P. BELLO
IN Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019

Analysis of Common Design Choices in Deep Learning Systems for Downbeat Tracking
M. FUENTES, B. McFEE, H. CRAYENCOUR, S. ESSID, J. P. BELLO
IN Proceedings of the International Society for Music Information Retrieval Conference, ISMIR 2018

A Novel Dataset of Brazilian Rhythmic Instruments and Some Experiments in Computational Rhythm Analysis
L. S. MAIA, P. D. DE TOMAZ JR., M. FUENTES, M. ROCAMORA, L. W. P. BISCAINHO, M. V. M. DA COSTA, S. COHEN
IN Proceedings of the AES Latin American Conference, AES LAC 2018

An ENF-Based Audio Authenticity Method Robust to MP3 Compression
P. ZINEMANAS, M. FUENTES, P. CANCELA, J. A. APOLINÁRIO JR.
Circuits, Systems and Signal Processing, Springer 2018

Detection of Follicles in Ultrasound Videos of Bovine Ovaries
A. GÓMEZ, G. CARBAJAL, M. FUENTES, C. VIÑOLES
IN Proceedings of the Iberoamerican Congress on Pattern Recognition, CIARP 2016

Detection of ENF Discontinuities Using PLL for Audio Authenticity
M. FUENTES, P. ZINEMANAS, P. CANCELA, J. A. APOLINÁRIO JR.
IN Proceedings of the IEEE Latin American Symposium on Circuits and Systems, LASCAS 2016

An Audio-Visual Database of Candombe Performances for Computational Musicological Studies
M. ROCAMORA, L. JURE, B. MARENCO, M. FUENTES, F. LANZARO, A. GÓMEZ
IN Proceedings of the International Congress on Science and Music Technology, CICTeM 2015

A Multimodal Approach for Percussion Music Transcription from Audio and Video
B. MARENCO, M. FUENTES, F. LANZARO, M. ROCAMORA, A. GÓMEZ
IN Proceedings of the Iberoamerican Congress on Pattern Recognition, CIARP 2015


Research projects


STAREL

This interdisciplinary research project develops new technological and music-analytical methods to better understand and model the rhythmic and metrical structure of audio recordings of expressive music performances. More information here.


Multimodal Signal Processing for Music Information Retrieval

This project assessed the impact of multimodal signal processing techniques on the analysis and transcription of Afro-Uruguayan percussion music performances. More information in this video.