Motion Painting — Data Analytics Team Intern, Chan Ho Park
The Old Game: Light Painting
Have you ever come across images like this?
Most people have come across this style of photography. The lights on the road are the traces of countless cars passing through, and these traces create an aesthetic view.
These paintings of light are technically known as long-exposure photography. As the name suggests, the camera is exposed for a long period, and any outstanding light that moves during this period leaves a trail in the final image. In a way it is like photographing a swinging can of burning embers or fireworks: the faster you swing, the more fire the camera captures. Instead of exposing the film to a split second of light, you let the light accumulate over a longer period; a dark background is usually needed to perform these long exposures.
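The accumulation effect can be sketched in a few lines. This is an illustrative simulation, not the playground's actual code: assume each frame is a flat array of brightness values (0-255), and blend frames with a per-pixel maximum so that a bright light passing over a pixel leaves a permanent trace.

```javascript
// Simulate a long exposure: keep the brightest value ever seen at each
// pixel across all frames. `frames` is an array of equal-length arrays
// of brightness values (0-255). Names and the max-blend rule are
// illustrative assumptions, not taken from the playground source.
function longExposure(frames) {
  const result = new Array(frames[0].length).fill(0);
  for (const frame of frames) {
    for (let i = 0; i < frame.length; i++) {
      // A moving light brightens different pixels in different frames;
      // taking the maximum preserves every position it touched.
      result[i] = Math.max(result[i], frame[i]);
    }
  }
  return result;
}
```

With three frames where a light visits a different pixel each time, `longExposure([[0, 10, 0], [0, 0, 200], [50, 0, 0]])` yields `[50, 10, 200]`: the full trail survives even though no single frame contains it.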
Our New Game: Semantic Painting
Getting tired of the old game, my supervisor Y.K. suggested that I implement a variation of light painting using recently available real-time semantic segmentation technologies. Analogous to the outstanding light in light painting, an outstanding semantic region in a segmentation leads us to our new game: semantic painting. When those semantic regions are chosen to be body parts, we have body motion painting.
Demo: https://hanlunai.github.io/long-exposure/
Compare and Contrast
Both the old game and the new game can be regarded as motion painting, and both require the camera to be exposed for a long period. In contrast to the old game, the new methodology requires neither a special background nor an outstanding light source. The only tools you need are a camera, a browser, and some computing power, such as your laptop, and you can play anywhere, anytime. The performance of the new game, however, depends on the accuracy of the semantic segmentation. Thanks to recent developments in computer vision, we can now perform semantic segmentation in real time and play the game of semantic painting. We expect semantic segmentation and motion tracking technology to keep advancing in the coming years, and the art of body painting to be popularized soon.
More Paintings
Before getting into technical details, let’s look at more paintings I have made.
Well, these are not as aesthetic as the pictures above because my imagination is limited. However, since the model can detect many semantic classes, someone may well come up with a better idea and create a better output than mine. If you wish to make a painting of your own, go to the playground linked above.
Technical Details
I built the above playground with a machine learning framework called TensorFlow, a library for research and production that is easily accessible to everyone. One of the models TensorFlow offers takes a person's image as input and outputs a segmentation of the person's body into different parts, such as the head, right arm, and left arm. TensorFlow also provides open-source pretrained models on its GitHub for this functionality; the pretrained model in use here is BodyPix.
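BodyPix's part segmentation reports, for every pixel, which body part it belongs to (a part id from 0 to 23, or -1 for background). Turning that output into a "brush" is then a simple selection step. The sketch below assumes the per-pixel part ids have already been extracted into a plain array; the function name and the chosen-parts representation are my own illustration, not the playground's actual code.

```javascript
// Turn a per-pixel array of BodyPix part ids into a boolean brush mask.
// `partIds` holds one id per pixel (-1 = background, 0-23 = body parts);
// `chosenParts` lists the part ids the user picked as the brush.
// Illustrative post-processing, not taken from the playground source.
function toBrushMask(partIds, chosenParts) {
  const chosen = new Set(chosenParts);
  // true exactly where the pixel belongs to one of the chosen parts
  return partIds.map((id) => chosen.has(id));
}
```

For example, if the user picks part id 13 as the brush, `toBrushMask([-1, 0, 13, 13, 2], [13])` marks only the two pixels labeled 13 as paintable.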
The implementation of body painting works as follows:
- Receive a stream of images through webcam
- Detect and segment the person’s image into different parts
- The user chooses one or more body parts to act as a brush on a canvas
- Move freely!
- Movement of the chosen body part will be captured and recorded
- Stop the webcam
- Download the created art piece
Of the steps above, the only one where my own code comes in is step 5. The user interface and the integration of the model into the website are also part of the site, but those pieces were already established by JavaScript and TensorFlow developers. Step 5 can be easily replicated once one understands the image stream as a stream of combinations of the background and the target body part; the movement of the body can then be tracked by comparing the model's output across consecutive frames.
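Step 5 can be sketched as a persistent painting buffer that records every pixel the brush has ever touched, so motion leaves a trail exactly as in a long exposure. This is a minimal illustration under my own simplifying assumptions: each frame is a flat pixel array, `mask` is a boolean array marking the chosen body part, and the real playground would use an HTML canvas fed by the webcam stream instead of plain arrays.

```javascript
// One update of the painting: wherever the brush mask is true, copy the
// current frame's pixel onto the persistent painting buffer. Pixels the
// brush has not touched keep whatever was painted in earlier frames.
// Names and data layout are illustrative assumptions.
function paintStep(painting, frame, mask) {
  for (let i = 0; i < frame.length; i++) {
    if (mask[i]) painting[i] = frame[i]; // the body part acts as a brush
  }
  return painting;
}
```

Calling `paintStep` once per webcam frame with that frame's brush mask accumulates the movement of the chosen body part; comparing masks between consecutive frames is one way the movement itself could be measured.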
Though in my implementation I have chosen to simply draw the movement of the person, the output can become more interesting depending on how one designs the mapping from movement to paint.
Through this article, I wish to thank my team members at Hanlun AI: Y.K., Harold, Max, and Kers. Hanlun AI is an AI consulting firm based in Hong Kong where I spent my summer internship. The art described above is only a small fraction of the internship; for the larger portion of the summer, I worked on interesting questions arising from image distortion, fisheye distortion and rotation transformations in particular.
This summer internship was arranged through the University of Hong Kong's Department of Mathematics, and I would like to thank them for providing me with this opportunity.
The company is currently working on two client-requested projects: a computer vision application for a Tai Chi machine, and a knowledge graph for an e-learning platform. To keep up to date with us, please subscribe to this channel.