Ernest

Ernest is an interactive exploration of machine learning, placing the viewer in the role of trainer for an artificial intelligence designed to make people happy.

Ernest uses computer vision to perceive the viewer ambiently, combining facial recognition with emotional analysis of the viewer’s expression. This allows us to infer proximity and attention, and to gauge the viewer’s emotional state as they are shown various pieces of content. Viewers interact with and train Ernest’s happiness model through keyboard prompts.
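As a rough illustration of how this kind of ambient perception might work, here is a minimal sketch in Python using OpenCV’s bundled Haar cascade for face detection. Face presence stands in for attention, the face’s share of the frame for proximity, and `estimate_emotion` is a hypothetical placeholder for a real facial-expression classifier; none of this is Ernest’s actual implementation.

```python
import cv2

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_emotion(face_img):
    """Hypothetical stand-in for a facial-expression classifier
    (e.g. a small CNN trained on labeled expressions)."""
    return "neutral"

def perceive(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return {"attention": False}
    # Treat the largest face as the active viewer; its area relative
    # to the frame is a rough proxy for proximity to the installation.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    proximity = (w * h) / (frame.shape[0] * frame.shape[1])
    emotion = estimate_emotion(frame[y:y + h, x:x + w])
    return {"attention": True, "proximity": proximity, "emotion": emotion}

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
if ok:
    print(perceive(frame))
camera.release()
```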

This installation is designed to be simple, both in terms of the interaction and the sophistication of the algorithm. Ernest is accessible to viewers of any age or level of technical knowledge, and on the surface is nothing more than a game.

The aesthetic of the installation is designed to be approachable, harkening back to an era when technology was non-threatening and carried the promise of a utopian future. It looks and feels like a nostalgic homebrew terminal, existing in a time prior to mass connectivity and the ubiquitous collection and commercialization of personal data.

While the sophistication of the AI is low, this is intentional. For laypeople these topics are unapproachable, and that lack of understanding often manifests as unease, compounded by a persistent pop-cultural narrative of a dystopian Big Brother future.

While these futures may yet come to pass, machine learning also carries the potential to greatly benefit society if it is trained ethically and applied in human-centered ways.

Therein lies the true purpose of Ernest: as viewers interact and become aware that it perceives them in ways that feel vaguely human, the door opens to critical questioning.

For example, if Ernest is trying to make the viewer happy, who determines what “happy” is? What if other participants train Ernest to think that disturbing content is “happy,” and what if they do so intentionally? What if the technology, with all its good intentions, is repurposed to maximize anxiety or anger? And what are the effects of deploying technology like Ernest widely, knowing that it can be both right and wrong as it assigns judgements to people without their knowledge?
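To make that vulnerability concrete, here is a minimal sketch of the kind of feedback loop described above, assuming the happiness model is nothing more than a per-item running average of keyboard responses. The content IDs and helper names are illustrative, not Ernest’s real code. Because the model knows nothing about the content itself, a handful of coordinated viewers can teach it anything:

```python
from collections import defaultdict

# Running happiness score per content item, learned purely from
# viewer feedback; the model has no notion of what the content is.
scores = defaultdict(float)
counts = defaultdict(int)

def record_feedback(content_id, made_happy):
    """Update the happiness model from one keyboard response."""
    counts[content_id] += 1
    target = 1.0 if made_happy else 0.0
    # Incremental running mean of "this made me happy" responses.
    scores[content_id] += (target - scores[content_id]) / counts[content_id]

def pick_content(candidates):
    """Show whichever item the crowd has rated happiest so far."""
    return max(candidates, key=lambda c: scores[c])

# A small group acting in bad faith can poison the model:
for _ in range(20):
    record_feedback("disturbing_clip", made_happy=True)
record_feedback("kitten_photo", made_happy=True)
record_feedback("kitten_photo", made_happy=False)
print(pick_content(["disturbing_clip", "kitten_photo"]))  # disturbing_clip
```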

Ernest is designed to open a dialogue around the ethics of gathering the data used to train and validate models, the gap between intent and real-world applications of these models, and the tensions around applying automated decision making with imperfect or inaccurate models.

By demystifying and simplifying the mechanics of machine learning through an experiential installation, we hope to invite broader understanding and questioning of the underlying technology and its potential societal impact.

Ernest has been developed by Sean Mulholland with support from Matt Visco and IDEO San Francisco.

Below is a scale model of the installation as scoped for IDEO San Francisco in October 2018.

Build Documentation