Describing music is a difficult task. Our system uses AI and machine listening to teach computers to analyse and categorise music into more accurate emotional and mood clusters.
We extract meaningful emotional and mood-related features from music and audio signals, measure the way humans experience and express music, and cross-reference a multi-dimensional database to deliver precise results.
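The feature extraction itself is not described here, but as a rough, hypothetical illustration of the idea, the sketch below derives two simple signal features (RMS energy and spectral centroid) and maps them onto four coarse mood clusters. The thresholds, cluster names, and the use of energy and brightness as arousal/valence proxies are illustrative assumptions, not the actual system.

```python
import numpy as np

def extract_features(signal, sr):
    """Compute two simple audio features often used as coarse mood proxies."""
    rms = np.sqrt(np.mean(signal ** 2))  # overall energy (arousal proxy)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # brightness (valence proxy)
    return rms, centroid

def mood_cluster(rms, centroid, rms_thresh=0.1, centroid_thresh=1000.0):
    """Map the two features onto four coarse mood clusters (illustrative thresholds)."""
    high_arousal = rms >= rms_thresh
    high_valence = centroid >= centroid_thresh
    if high_arousal and high_valence:
        return "happy/energetic"
    if high_arousal:
        return "angry/tense"
    if high_valence:
        return "calm/content"
    return "sad/subdued"

# Synthesize one second of a loud, bright (high-frequency) tone as test input.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
loud_bright = 0.5 * np.sin(2 * np.pi * 2000 * t)
rms, centroid = extract_features(loud_bright, sr)
print(mood_cluster(rms, centroid))  # prints "happy/energetic"
```

A production system would of course replace these two hand-picked features with learned representations and the if/else rules with a trained classifier, but the pipeline shape (signal in, features out, features to mood cluster) is the same.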
These results will power more accurate search engines for music libraries and emotion-based classification for streaming music platforms.
We utilise the music production library of our partner company Melodie as a testing ground for future machine learning applications. Melodie doubles as a high-quality data source of human-composed, human-tagged music.
UV EmoIntel. Copyright © 2019 Uncanny Valley Pty Ltd