I am a PhD researcher at the UKRI CDT in Artificial Intelligence and Music, based at the Centre for Digital Music, Queen Mary University of London.


My research aims to develop perceptual representations of sound with machine learning. In particular, I seek to integrate what we know about timbre perception into music audio analysis and synthesis.


Human hearing involves both bottom-up and top-down processes. Understanding how we hear sound can inform the design of computational acoustic models that reflect human auditory perception.

Academia / Research
Cyrus Vahidi, Google Scholar