
一 (yi) / oneness
TouchDesigner, p5.js, ml5.js, Google MediaPipe, OSC, WebSockets, Node.js
↗ GitHub ↗ Full Performance
For Ceci Sun's senior thesis, 一 (yi) / oneness, I was the sole designer of the real-time interactive visuals accompanying her live dance performance.



Background
In Golan Levin's Fall 2023 Creative Coding course, I used p5.js to create a gesture expander that rendered in real time. The project tracked body keypoints via the ml5.js BodyPose library. Ceci Sun, a friend and
dancer at Johns Hopkins University, performed choreography to one of my
favorite songs at the time, "Motion Picture Soundtrack" by Radiohead.
When Ceci and I caught up in Winter 2025, she asked if I would be interested
in creating a new version of the project for her senior thesis performance.
Given that it had been two years since we last collaborated, and newer
technologies had become available, I was really excited to create an improved
version.

Thematic Underpinnings
Much of Ceci's creative practice is informed by mind-body connections through qigong principles. Her work combines Eastern and Western philosophical perspectives: Eastern traditions emphasize balance and the flow of energy, while Western contemporary dance practices explore emotional expression and psychological experience. As an American-born Chinese, I also grew up with a mix of Eastern and Western philosophies. Topics such as meditation and traditional Chinese medicine are deeply ingrained in my personal life, and I was able to draw on them when working on this project.

Inspiration
I was heavily inspired by discrete figures, by Daito Manabe's Rhizomatiks Research group, as well as Lingdong Huang's {Shan, Shui}*, which Golan had previously shown us and which undeniably influenced how I approached the mountain visuals. For time-based visuals, I often find it easier to figure out the music first, and Ceci sent me some placeholder tracks that helped guide what the visuals should look like.

Web Render vs. OSC
I had two options for connecting MediaPipe to TouchDesigner. TouchDesigner's Web Render node can display a webpage as a texture, which would show the MediaPipe visualization directly; with this approach, I would receive an image as input. The alternative was using OSC to send the numerical pose data. I chose OSC because it has lower latency, and working with named variables felt more intuitive to me than encoding and decoding an image. Additionally, I could build effects with placeholder values in TouchDesigner and swap in the OSC values later.
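To make the bandwidth argument concrete, here is a minimal sketch of what a single OSC message looks like on the wire, packed by hand in Node.js. In the actual project an OSC library would handle this; the address pattern `/pose/leftWrist` and the sample values are illustrative assumptions, but the byte layout follows the OSC spec and shows why a few floats are so much cheaper to send than an encoded image.

```javascript
// Pad a string to a multiple of 4 bytes with NUL terminators, per the OSC spec.
function oscString(str) {
  const len = Math.ceil((str.length + 1) / 4) * 4;
  const buf = Buffer.alloc(len); // zero-filled
  buf.write(str, 'ascii');
  return buf;
}

// Build an OSC message: address pattern, type tags, then big-endian float32 args.
function oscMessage(address, floats) {
  const typeTags = ',' + 'f'.repeat(floats.length);
  const args = Buffer.alloc(4 * floats.length);
  floats.forEach((v, i) => args.writeFloatBE(v, 4 * i));
  return Buffer.concat([oscString(address), oscString(typeTags), args]);
}

// e.g. one normalized landmark: /pose/leftWrist x y z
const msg = oscMessage('/pose/leftWrist', [0.42, 0.87, -0.1]);
console.log(msg.length); // a few dozen bytes, vs. kilobytes per video frame
```

A message like this fits in a single UDP packet, which is also why latency stays low compared to streaming rendered frames.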
When it came time to choose equipment for the live performance, the OSC approach also proved more reliable: a small packet of floats is far cheaper to transmit than a stream of images, and depending on the size of the venue, longer cable runs can introduce unpredictable behavior of their own.

MediaPipe is really powerful because it can retrieve a lot of information very quickly. The framework provides three parameters (x, y, z coordinates) for each of the 33 joints, and deciding which ones to use so that the mapping would feel intuitive to the audience was an interesting challenge. When connecting the data to TouchDesigner, I observed that all of the data had a natural jitter, so it turned out that only one or two parameters were enough to communicate the overall movement of the choreography.
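One common way to tame that natural jitter before a value drives the visuals is an exponential moving average per channel. This is a generic smoothing sketch, not the filter used in the performance, and the alpha value here is a made-up illustration:

```javascript
// Exponential moving average: each new sample nudges the state by alpha,
// so high-frequency jitter is damped while slower movement passes through.
function makeSmoother(alpha = 0.2) {
  let state = null;
  return (value) => {
    state = state === null ? value : alpha * value + (1 - alpha) * state;
    return state;
  };
}

const smooth = makeSmoother(0.25);
const noisy = [0.50, 0.52, 0.49, 0.51, 0.90]; // sudden spike at the end
const smoothed = noisy.map(smooth);
// The spike is damped: the last smoothed value lags well below 0.90.
```

A lower alpha smooths more aggressively at the cost of the visuals lagging behind the dancer, so the value is worth tuning during rehearsal.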
I decided to use the distance between two fixed points to drive the animation. For example, the distance between the left hand and the left shoulder is easy for the dancer to control. A slider in the p5 GUI controlled how sensitive the mapping was.
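The distance-to-parameter mapping can be sketched roughly as below, with a sensitivity factor standing in for the p5 GUI slider. The landmark coordinates and the 0.6 normalization divisor are illustrative assumptions: MediaPipe reports normalized coordinates, so a hand-to-shoulder distance typically stays well under 1.

```javascript
// Euclidean distance between two 2D landmarks.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Map a joint-to-joint distance to a 0..1 control value;
// sensitivity plays the role of the GUI slider.
function controlValue(a, b, sensitivity = 1.0, maxDist = 0.6) {
  return Math.min(1, (distance(a, b) / maxDist) * sensitivity);
}

const leftWrist = { x: 0.30, y: 0.80 };
const leftShoulder = { x: 0.30, y: 0.50 };
const value = controlValue(leftWrist, leftShoulder, 1.0); // roughly 0.5 here
```

Clamping to 0..1 keeps the TouchDesigner side simple, since every effect can assume a normalized input regardless of the slider setting.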

Limitations and Challenges
Since I only needed a few parameters to communicate the movement, it was not necessary to use all 33 joints. Unfortunately, MediaPipe does not support turning off individual joints. If I were to scale up the project, I would need a custom model that computes only the necessary joints to improve overall performance. MediaPipe is also best trained for waist-up poses filmed on a webcam, and it tracks at most one person at a time. When improved models become available, many new possibilities will open up for the kinds of performances that can be created.

Another challenging aspect of this project was making the generated visuals tell a story. One piece of advice from Golan that helped a lot was to think of particles as a substance that can be molded to mimic natural phenomena, such as clouds, snow, or sand. For me, it felt more intuitive to fine-tune the parameters in p5.js to get the desired effect.
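Since the model itself always computes all 33 landmarks, one workaround is to filter on the sending side and forward only the joints that drive the visuals. In this sketch, the indices follow MediaPipe Pose's landmark numbering (11 = left shoulder, 15 = left wrist); treat the index values and the `selectLandmarks` helper as assumptions to check against the documentation, not part of the project's actual code.

```javascript
// Named subset of MediaPipe Pose landmark indices we actually use.
const USED_LANDMARKS = { leftShoulder: 11, leftWrist: 15 };

// Reduce a full 33-landmark frame to just the named joints sent over OSC.
function selectLandmarks(frame, used = USED_LANDMARKS) {
  const out = {};
  for (const [name, index] of Object.entries(used)) {
    out[name] = frame[index];
  }
  return out;
}

// A fake frame of 33 landmarks for illustration.
const frame = Array.from({ length: 33 }, (_, i) => ({ x: i / 33, y: 0.5 }));
const slim = selectLandmarks(frame);
// Only two entries survive, shrinking each OSC packet accordingly.
```

This does not reduce the inference cost, which is why a trimmed custom model would still be needed at scale, but it keeps the network traffic and the TouchDesigner patch minimal.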
More Experiments


Credits
Special Thanks
I would like to thank Ceci Sun for inviting me to collaborate on this project, Golan Levin and Kyle McDonald for their mentorship, Calder Sprinkle for programming the lighting to go with the projections, and Viviana Chen for testing out the software in Pittsburgh. I would also like to thank my beautiful and intelligent classmates for believing in me and providing candid feedback to help me grow.
