Light up the Balls

Audiovisual Interactive Art

It engages the audience through their hand motion, generating rhythmic visuals of ‘lighting and dancing balls’ that also trigger sounds mixed in time with the beat.

 

@Goldsmiths, University of London, Computational Arts MA

The best harmony of mixed sounds

First, the Nature Musical version, composed of 1) rain, 2) wind, and 3) water-rippling sounds from the ducks on the lake. It generates nature-based music for a comfortable, meditative mood.

Second, the Orchestra version, composed of 1) rain, 2) a harmony song, and 3) water-rippling sounds. The audience hears nature pieces mixed into an orchestral mood.

Last, the Acoustic version, composed of 1) gun sounds, 2) tongue sounds, and 3) water-rippling sounds. This is the most dynamic mix, with an entertaining mood; when audience members perform hand gestures, they generate their own rhythms in this version as well.
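As a compact illustration of the three versions above, here is a sketch (the sample names are placeholders, not the project's actual files) of how a classifier's output class could select the layered sounds of each mix:

```python
# Hypothetical lookup: gesture-classifier output class -> layered samples.
# Class indices and sample names are illustrative placeholders.
SOUND_MIXES = {
    1: ["rain", "wind", "water_rippling"],          # Nature Musical
    2: ["rain", "harmony_song", "water_rippling"],  # Orchestra
    3: ["gun", "tongue", "water_rippling"],         # Acoustic
}

def layers_for(class_index):
    """Return the sound layers for a classified gesture, or [] if unknown."""
    return SOUND_MIXES.get(class_index, [])
```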

The data in the image below turned out to be my best choices of sound mixes, which I applied in the program to generate the audiovisual piece.

Technical Implementation

I focused in particular on creating a sensory experience: hand gestures are mapped to generate visuals, which in turn trigger sounds. In that process I analysed and applied a machine-learning training algorithm and data-analysis techniques.

Input as 'Hand Gesture'

In the input program (openFrameworks), I used a Leap Motion device to detect features of hand and finger motion, then sent the data to the training program, Wekinator. I worked out how to extract specific joint and finger/hand positions to use as classification features for training.
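The actual input stage was written in openFrameworks, but the flattening step can be sketched in a few lines. The exact 31-feature layout is not specified in the write-up; here I assume one palm value plus six values per finger (tip position and direction), which gives the 31 inputs Wekinator receives:

```python
# Sketch of packing Leap Motion hand data into a flat feature vector.
# Assumed layout: 1 palm value + 5 fingers x 6 values = 31 features
# (the project's real layout may differ).

def hand_to_features(palm_openness, fingers):
    """fingers: list of 5 dicts, each with 'tip' (x, y, z) and 'dir' (x, y, z)."""
    feats = [palm_openness]
    for f in fingers:
        feats.extend(f["tip"])   # fingertip position x, y, z
        feats.extend(f["dir"])   # pointing direction x, y, z
    return feats

# Example: a neutral right hand
fingers = [{"tip": (0.1 * i, 0.5, 0.0), "dir": (0.0, 1.0, 0.0)} for i in range(5)]
vec = hand_to_features(0.8, fingers)
assert len(vec) == 31  # matches the 31 Wekinator inputs
```

A flat vector like this is what would be sent over OSC to Wekinator each frame.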

Training Process

In the training program (Wekinator), I trained on hand-movement features as 31 inputs and produced 3 types of sound-mix output. I trained with my right hand's gestures and built a classification that maps them into the audiovisual art, applying the input features of hand movement to generate rhythmic drum sounds. Each model uses 6 positions of the right hand's 5 fingers, mapped to the 3 sound types in the output Processing program, for a total of 18 training samples.

Audiovisual Output

In the output program (Processing), I generated the audiovisual art from the hand/finger movements classified and trained in openFrameworks and Wekinator (the 18 training samples proved sufficient). It mixes the audio data with the hand movements so that they sync well with the BPM.
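One simple way to keep gesture-triggered sounds locked to the BPM (an assumed approach, sketched here outside Processing) is to quantise each trigger to the next beat boundary:

```python
# Sketch: quantise a gesture trigger time to the next beat so triggered
# sounds stay in sync with the track's BPM (assumed sync strategy).

def next_beat_time(trigger_time, bpm, start_time=0.0):
    beat = 60.0 / bpm                    # seconds per beat
    elapsed = trigger_time - start_time
    beats_passed = int(elapsed // beat)  # whole beats already played
    return start_time + (beats_passed + 1) * beat

print(next_beat_time(1.1, 120))  # 120 BPM -> a beat every 0.5 s -> 1.5
```

A gesture detected at 1.1 s would then fire its sound at 1.5 s, on the grid rather than between beats.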

Audiovisual aspect: I created the visuals around the metaphor of spheres bouncing and dancing in response to the hand-gesture features, generating diverse beats and mixes of music from the training data. I kept iterating on the rhythmic visuals so the audience could feel the joyful beats: each ball bounces on the beat of the music created by the hand gestures. Since the project's aim was for dancing movement to trigger audiovisual art, I focused on dance-like visuals of spheres bouncing through the depth of 3-D space, with adjusted diffuse materials and lighting control.
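The bounce-on-the-beat behaviour can be sketched as a function of beat phase (a hypothetical formulation, not the project's actual Processing code): a rectified sine arc makes the sphere land exactly on each beat:

```python
# Sketch: ball height as a function of time and BPM, so the sphere
# "lands" on every beat (|sin| over the beat phase gives a bounce arc).
import math

def ball_height(time_s, bpm, max_height=100.0):
    beat = 60.0 / bpm
    phase = (time_s % beat) / beat       # position within the beat, 0..1
    return max_height * abs(math.sin(math.pi * phase))

print(round(ball_height(0.0, 120)))   # on the beat: 0 (ball on the floor)
print(round(ball_height(0.25, 120)))  # mid-beat: 100 (top of the arc)
```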

Bibliography

  • McLean, A., Algorave: Algorithmic Dance Music. TEDxHull. https://www.youtube.com/watch?v=nAGjTYa95HM [Accessed 2 February 2020]

    ‘Algoraves are parties where people dance to algorithms, where all the music and visuals are made by live coders who work by exposing AV Art’, ‘Pattern in Music : consists of repetition, symmetry, interference, deviation.’

  • Wang, G., This Is Computer Music. TEDxStanford. https://www.youtube.com/watch?v=S-T8kcSRLL0 [Accessed 15 February 2020]

    ‘Gesture based music where it is using this amazing machine learning device, which seems like pulling the sound upon the gestures’


References

  • Workshops in Creative Coding, lecture session 'Machine Learning'. https://learn.gold.ac.uk/course/view.php?id=12859&section=18

  • Workshops in Creative Coding, lecture session 'Audiovisual Programming'. https://learn.gold.ac.uk/course/view.php?id=12859&section=19

  • Minim library for Processing, used for sound output.

  • Aaron, S., Programming as Performance. TEDxNewcastle. https://www.youtube.com/watch?v=TK1mBqKvIyU [Accessed 10 February 2020]

  • Springer has released 65 Machine Learning and Data books for free. https://towardsdatascience.com/springer-has-released-65-machine-learning-and-data-books-for-free-961f8181f189 [Accessed 1 May 2020]

  • G.co / AI Experiments. Using t-SNE. https://experiments.withgoogle.com/drum-machine [Accessed 1 April 2020]

  • Akten, M., Simple Harmonic Motion. http://www.memo.tv/works/simple-harmonic-motion-5/ [Accessed 1 April 2020]

  • Cameron, D. (2017) ‘Kinetic Utopia on the Road to the Magic Zone’, Kinesthesia. New York: Getty Foundation.

  • Maeder, M. (2015) Trees: Pinus sylvestris. https://jar-online.net/exposition/abstract/trees-pinus-sylvestris [Accessed 23 February 2020]
