I am a PhD researcher at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, based at the Centre for Digital Music, Queen Mary University of London.


I use machine learning to explore the auditory world and to understand how computational representations of sound can be used to analyze and synthesize music. My PhD focuses on auditory representations of sound: I seek to integrate what we know about texture and timbre perception into music analysis and synthesis using gradient-based deep learning. In particular, I am interested in how domain knowledge from psychoacoustics can be used to train and control differentiable signal processing models.

Academia / Research
Cyrus Vahidi, Google Scholar