jessicadai1018

Assignment 7

Updated: Nov 6, 2023

Using the ml5.js library's handpose model, this project lets players control the game paddle with the motion of their hand in real time through a standard webcam. The sketch detects and tracks multiple keypoints of the hand; these keypoints, spanning the palm and fingers, provide a detailed map of the hand's position and orientation. I've included code that visualizes the keypoints as green dots for immediate visual feedback, though this is optional and can be removed to clean up the display for the end user. I chose to link the paddle's on-screen position to the index finger's horizontal movement because it is an intuitive way for users to interact: pointing is a natural human behavior, and the index finger is the one we most associate with precision. The sketch also displays the current score and the confidence level of the handpose model's prediction. As a result, the paddle movement in the game feels like an extension of the player's gestures, which can enhance the sense of control and immersion.
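To make the mechanics concrete, here is a minimal sketch of how that connection can be wired up, assuming the ml5.js v0.x handpose API (ml5.handpose, landmarks, annotations.indexFinger, handInViewConfidence) together with p5.js; the paddleX and score variables are placeholders standing in for the game's own state rather than the exact code of this project.

```javascript
let video;
let handpose;
let predictions = [];
let paddleX = 320; // paddle position driven by the index fingertip
let score = 0;     // placeholder; the real game updates this on each hit

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the handpose model and subscribe to its predictions.
  handpose = ml5.handpose(video, () => console.log("handpose ready"));
  handpose.on("predict", (results) => {
    predictions = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);

  if (predictions.length > 0) {
    const hand = predictions[0];

    // Optional feedback: draw every hand keypoint as a green dot.
    noStroke();
    fill(0, 255, 0);
    for (const [x, y] of hand.landmarks) {
      ellipse(x, y, 8, 8);
    }

    // The index fingertip is the last point of the indexFinger
    // annotation; its horizontal position drives the paddle.
    const [tipX] = hand.annotations.indexFinger[3];
    paddleX = constrain(tipX, 40, width - 40);

    // Report how confident the model is about this hand.
    fill(255);
    textSize(16);
    text(`Confidence: ${hand.handInViewConfidence.toFixed(2)}`, 10, 40);
  }

  // Always show the current score.
  fill(255);
  textSize(16);
  text(`Score: ${score}`, 10, 20);

  // Draw the paddle near the bottom of the canvas.
  rectMode(CENTER);
  rect(paddleX, height - 20, 80, 12);
}
```

Constraining the fingertip's x-coordinate with constrain() keeps the paddle on screen even when the hand drifts toward the edge of the camera frame.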



Dr. Rebecca Fiebrink’s work at the intersection of machine learning and creative practice offers insightful pathways into how ML can support and amplify human creativity. I am most aware of how ML can support existing creative practices by automating repetitive tasks, offering new tools for analysis and pattern recognition, and facilitating complex decision-making, freeing artists and creators to focus on the more intuitive aspects of their work.

For instance, in music production, ML can analyze and classify large libraries of samples, assisting artists in finding the perfect sound more efficiently. In visual arts, ML algorithms can suggest color palettes or textures based on an artist's past works, enabling a form of dialogue between the artist and the tool. In the realm of writing, natural language processing can help writers draft, edit, and even brainstorm new ideas.


For my envisioned project, I'm creating a space where the stories and gods of old are not just told but experienced and influenced through the motion of one's own body: a performative environment where mythology comes to life. I plan to integrate gestures and movements, captured in real time by 3D cameras and interpreted by machine learning models, so that participants can conjure storms with the wave of a hand or summon deities through dance, as sketched below.
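As a sketch of the interaction logic only, the snippet below illustrates how classified gesture labels could be mapped to narrative events; the gesture names, the onGestureClassified hook, and the scene actions are all hypothetical placeholders, not a working recognizer.

```javascript
// Placeholder scene actions; a real installation would drive visuals and audio here.
function triggerStorm() {
  console.log("Storm visuals and thunder audio begin");
}

function summonDeity(name) {
  console.log(`${name} enters the scene`);
}

// Map classified gesture labels to narrative events.
const gestureToEvent = {
  wave: () => triggerStorm(),
  dance: () => summonDeity("Hermes"),
};

// Called whenever the gesture classifier emits a prediction.
function onGestureClassified(label, confidence) {
  // Ignore low-confidence predictions so ordinary movement doesn't trigger events.
  if (confidence > 0.8 && label in gestureToEvent) {
    gestureToEvent[label]();
  }
}

// Example: a "wave" classified at 92% confidence conjures a storm.
onGestureClassified("wave", 0.92);
```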

Imagine holding a replica of Hermes' caduceus and watching as the system recognizes this ancient symbol, weaving the participant into a narrative featuring the messenger god himself. I'd like the voice to be more than just sound—it should be a powerful tool for altering the story, with every chant or note played on a traditional lyre enriching the soundscape.

I am especially excited about the adaptability of the narrative; it's not just about what tales are told but how they are shaped by the emotional tone set by the participant. As for the outputs, I want them to be as vivid and dynamic as the myths themselves. The visuals will not be static images but a real-time epic painted with every movement, casting participants against a canvas of oceans, heavens, and underworlds. The audio will not just accompany but react and adapt, creating an orchestral narrative that rises and falls with the action. This immersive journey could redefine storytelling, turning every participant into both a spectator and a hero within their own mythological tale.

