1st Place: Choreosome

Choreosome is a service that uses Perception Neuron motion capture input to run spatial analysis and generate beautiful, immersive experiences that can be viewed in any VR headset. The spatial analysis produces geometry with artistic merit in its own right, but it also lets a user interpolate between the motions of a dance and learn to dance more easily than existing technologies allow. As these pieces are recorded, future iterations can use an artificial neural network to identify and codify a database of cultural dance moves, capturing not only the technical steps but also their cultural contexts and geographic spread. This data will be available to lay users via a mobile app, helping bring this advanced motion capture technology to as many people as possible.

Team members Jenny Fan, Kenneth So, Saif Haobsh, Grace Young, Mahika Phutane, and Cindy Zeng were inspired by their shared interest in learning and experiencing dance through emerging technologies. Their initial idea of building a shared platform for learning how to dance evolved into its current form when they realized the importance of preserving the ethnic and heritage dances of the world. The interface limitations that make it hard to learn a dance from a shaky YouTube video have even more critical consequences for the cultural diversity of the world. Folk dance is an intrinsically human activity, as universal as humanity itself. The team views motion capture technology in this application as being as critical to the preservation of human motion as writing was to oral tradition. It is central to being Human.


2nd Place: eMochi

eMochi gets to know you by tracking your habits through your devices, creating a virtual avatar based on your activity data. It's hard for people to take care of themselves. eMochi is not a health app or a fitness tracker; it is a digital avatar that is an extension of you and the choices you make. The better you live, the more it grows. eMochi reads you through your devices and creates a pet that can become anything you want. Here's an example: just like a real pet, your eMochi needs to eat, and in order to nurture it, you need to eat healthily. By doing the activities that keep it healthy, you are also keeping yourself healthy. Your eMochi doesn't just live on your cell phone; it can travel with you to any app, game, or device.

Team members Katherine Wang, Rebecca Turbee, Amily He, Christina Qi, Kevin Ouyang, and Kida Gateaux were inspired to make self-care fun and interactive. They are proud of getting by on 18 hours of sleep among 7 people and of creating the first serverless third-party integration.


Hackers' Choice: EAT

EAT is an immersive grape-eating experience that lives between the human world and the digital world.

EAT is a four-phase, four-grape experience. In each phase, participants are in virtual reality. They are handed a Vive controller with a fork attached, and there is a grape on the fork. They see the controller, the fork, and the grape in VR. The participant's task is to eat the grape. However, the fork they see in the Vive may not be exactly where the fork is in real life, which leads to some hilarious results.

Phase 1: Normal. Participants see the fork in VR at the same position as it is in real life.
Phase 2: Translation. Participants see the fork in VR about an inch to the left of where it is in real life. This causes them to miss their faces.
Phase 3: Depth. Participants see the fork in VR as being closer to them than it actually is. This causes confusion when the grape never enters their mouth.
Phase 4: Flavor. Participants eat the grape, but the grape isn't actually a grape; it's a sour candy.
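The mechanic behind phases 2 and 3 amounts to a small per-frame transform between where the controller is tracked and where the fork is drawn. The sketch below is purely illustrative, not the team's actual code: the function name, axis conventions, and the depth offset value are assumptions.

```python
# Minimal sketch of the rendered-fork offset idea from phases 2 and 3.
# Coordinates are (x, y, z) in meters; assume +x is the user's right and
# +z points from the fork toward the user's face. All names are hypothetical.

def rendered_fork_position(tracked, phase):
    """Map the real (tracked) fork position to where it is drawn in VR."""
    x, y, z = tracked
    if phase == 1:  # Normal: draw the fork exactly where it really is
        return (x, y, z)
    if phase == 2:  # Translation: shift about an inch (0.0254 m) left
        return (x - 0.0254, y, z)
    if phase == 3:  # Depth: draw the fork closer to the face than it is
        return (x, y, z + 0.05)  # offset value is an assumption
    return (x, y, z)  # Phase 4 changes flavor, not position
```

Because only the rendered position is shifted while the real fork stays put, the participant reaches for where the grape appears and misses where it actually is.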

Team members Caroline Hermans, Wiley Corning, Gabe Fields, and Hishem Bedri were inspired by the theme "Why Human?" to create a project that explores the boundaries between the real world and the digital world. They also like to eat grapes.


Christen Lien / 1st Place (tie): Elpis
Christen Lien / 1st Place (tie): Third Sense

Perception Neuron / 1st Place: Choreosome
Perception Neuron / 2nd Place: Tie between Re gesture and Motion Sculpting

BuzzAngle / 1st Place: Melly
BuzzAngle / 2nd Place: Crowd Sense