Project by Fanny Curtsson and Nina Nokelainen
Abstract

Sound in interaction is a growing area, as is the personalization of services. The goal of the project described in this paper was to develop a program that continuously plays a soundscape that adapts, in real time, to the user's emotion. This was done by using facial recognition software to identify the user's current emotion, and then having the application map the identified emotion to different environmental sounds. The result was an iOS application with a minimalistic user interface (only a title and a camera view) that played a soundscape whose sound constantly changed based on four emotions: "happy", "sad", "angry" and "calm".
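To illustrate the emotion-to-sound mapping the abstract describes, the following is a minimal Swift sketch, not the paper's actual implementation. The type names, the sound file names, and the cross-fade behavior are all assumptions for illustration; only the four emotion labels come from the paper. It assumes the facial recognition component reports one of the four emotions, and uses AVFoundation's AVAudioPlayer to loop the matching environmental sound.

import AVFoundation

// The four emotions the application distinguishes (per the abstract).
enum Emotion: String, CaseIterable {
    case happy, sad, angry, calm
}

// Hypothetical soundscape controller: switches to the ambient track
// associated with the most recently detected emotion.
final class SoundscapeController {
    // Assumed asset names; the actual sounds are not named in the abstract.
    private let tracks: [Emotion: String] = [
        .happy: "birdsong",
        .sad:   "rain",
        .angry: "thunder",
        .calm:  "waves",
    ]
    private var player: AVAudioPlayer?

    // Called each time the facial-recognition component reports a new emotion.
    func update(to emotion: Emotion) {
        guard let name = tracks[emotion],
              let url = Bundle.main.url(forResource: name, withExtension: "mp3")
        else { return }
        player?.setVolume(0, fadeDuration: 1.0)   // fade out the previous layer
        player = try? AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1                // loop the ambience indefinitely
        player?.volume = 0
        player?.play()
        player?.setVolume(1, fadeDuration: 1.0)   // fade in the new layer
    }
}

In this sketch the controller holds a single looping player and cross-fades on each emotion change, which is one plausible way to make the soundscape adapt "continuously" as the abstract states; a layered design that mixes several simultaneous players would be an equally valid alternative.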