This sketch was created in Processing using the video and minim libraries. The main loop walks the live webcam feed pixel by pixel. A threshold, adjusted manually in the code, is set against the RGB values of each pixel, and each pixel is redrawn in a different color depending on which side of the threshold it falls. The threshold is currently based on red: if a pixel's red value is higher than its green and blue values, the pixel is redrawn as a shade of red/purple/pink; if the red value is lower, the pixel is redrawn as white. This gives the illusion of the viewer being turned into a series of red pixels.
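The per-pixel decision described above can be sketched as a small helper. This is plain Java rather than the sketch itself, and the exact recolor shades (how green is damped and blue boosted toward purple/pink) are my assumptions, not taken from the code:

```java
public class RedThreshold {
    // True when the red channel dominates both green and blue,
    // i.e. the pixel should be redrawn in the red/purple/pink palette.
    static boolean isRedDominant(int r, int g, int b) {
        return r > g && r > b;
    }

    // Picks the replacement color: a red-tinted shade when red dominates,
    // plain white otherwise. Colors are packed as 0xAARRGGBB ints,
    // which matches Processing's internal pixel format.
    static int recolor(int r, int g, int b) {
        if (isRedDominant(r, g, b)) {
            // Keep red, damp green, push blue toward purple/pink (assumed shading).
            return 0xFF000000 | (r << 16) | ((g / 4) << 8) | Math.min(255, b + 100);
        }
        return 0xFFFFFFFF; // white
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(recolor(200, 50, 50))); // red-dominant pixel
        System.out.println(Integer.toHexString(recolor(50, 200, 50)));  // redrawn as white
    }
}
```

In the sketch this check would run inside the loop over `pixels[]`, once per webcam pixel per frame.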
The sketch also takes in the volume of the environmental audio. This input is mapped to the z-coordinate of the red pixels in 3D space, so each pixel extends forward into space based on the current volume level.
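That volume-to-depth step is a linear re-mapping in the shape of Processing's `map()` function. A minimal sketch of the idea, where the input range (Minim levels of roughly 0.0 to 1.0) and the 0 to 300 pixel depth range are assumed values, not taken from the code:

```java
public class VolumeDepth {
    // Linear re-mapping, mirroring Processing's
    // map(value, start1, stop1, start2, stop2).
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    // Maps an audio level (assumed 0.0-1.0, as Minim's getLevel() roughly
    // returns) to a z offset in pixels; the 0-300 range is an assumption.
    static float volumeToZ(float level) {
        return map(level, 0.0f, 1.0f, 0.0f, 300.0f);
    }

    public static void main(String[] args) {
        System.out.println(volumeToZ(0.0f)); // silence stays flat
        System.out.println(volumeToZ(0.5f)); // mid-level volume pushes halfway forward
    }
}
```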
Look at code HERE.
This is a project that I completed for a homework assignment in my advanced creative coding class, using the OpenCV and minim libraries in Processing. OpenCV can register a full face or a profile, as well as open eyes. The sketch uses a live feed from the computer's webcam and tracks whether it recognizes the profile of a face or open eyes; every time it recognizes a profile, it triggers a sample drum audio file.
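One way to turn per-frame detections into a single drum hit is an edge trigger: fire the sample when a profile first appears rather than on every frame it stays visible. This debouncing detail is my assumption about the sketch's behavior, written as plain Java (`facesFound` would come from something like `opencv.detect().length` in the sketch):

```java
public class DetectionTrigger {
    // Remembers whether a profile was visible on the previous frame so the
    // drum sample fires once per appearance instead of once per frame.
    private boolean wasVisible = false;

    // Call once per frame with the number of detections; returns true
    // only on the frame where a profile first appears.
    boolean shouldTrigger(int facesFound) {
        boolean visible = facesFound > 0;
        boolean trigger = visible && !wasVisible;
        wasVisible = visible;
        return trigger;
    }

    public static void main(String[] args) {
        DetectionTrigger t = new DetectionTrigger();
        System.out.println(t.shouldTrigger(1)); // profile appears -> play sample
        System.out.println(t.shouldTrigger(1)); // still visible -> no retrigger
        System.out.println(t.shouldTrigger(0)); // profile gone
        System.out.println(t.shouldTrigger(2)); // reappears -> play sample again
    }
}
```

When `shouldTrigger` returns true, the sketch would rewind and play the Minim sample.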
You can take a look at the code HERE.
This sketch was created in openFrameworks. It takes audio input internally from whatever the computer is playing and uses the volume to map out an abstract universe. The mesh advances at a constant speed along the z-axis through 3D space, and globes are drawn at different x, y coordinates based on the volume at that moment. Once drawn, the globes and their connecting lines (which are all saved into an array of points) pulse to the volume of the audio.
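The point-storage side of this can be sketched in a few lines: each frame stores a globe whose x, y come from the volume while z advances at a fixed speed, and a pulse factor scales every stored globe's radius at draw time. This is a hedged Java sketch of the structure, not the openFrameworks code; the 400 px spread, 5 px/frame speed, and pulse formula are all assumed values:

```java
import java.util.ArrayList;
import java.util.List;

public class PulsingMesh {
    // A stored globe position: x and y came from the volume at draw time,
    // z advances at a constant speed, matching the sketch's description.
    static class Globe {
        final float x, y, z;
        Globe(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    final List<Globe> globes = new ArrayList<>();
    float z = 0;
    final float speed = 5.0f; // constant z advance per frame (assumed value)

    // Called once per frame with the current internal-audio volume (0.0-1.0).
    void addGlobe(float volume) {
        // x/y placement scales with volume; the 400 px spread is an assumption.
        float x = volume * 400.0f;
        float y = (1.0f - volume) * 400.0f;
        globes.add(new Globe(x, y, z));
        z += speed;
    }

    // Pulse factor applied to every stored globe's radius when drawing,
    // so the whole saved trail breathes with the audio.
    static float pulseRadius(float baseRadius, float volume) {
        return baseRadius * (1.0f + volume); // louder audio -> bigger globes
    }

    public static void main(String[] args) {
        PulsingMesh mesh = new PulsingMesh();
        mesh.addGlobe(0.2f);
        mesh.addGlobe(0.8f);
        System.out.println(mesh.globes.size() + " globes stored");
        System.out.println("pulsed radius: " + pulseRadius(10.0f, 0.5f));
    }
}
```

Because every globe stays in the array, the pulse applies to the entire trail already drawn, not just the newest point.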