What a difference - a very useless tutorial 2.

26 04 2011

This post is going to be another super useless tutorial. 


Here are the steps:

1. We will make a basic motion detector with a random resize.

2. We will turn the motion image into a mask.

3. We will use this mask in a feedback loop to produce the glitchy movement.

During this tutorial I will use a fixed resolution (800×600). It is possible to use a different size or a ‘rendering destination image’, but I prefer the fixed size.

1. Let's use a ‘video input’ as the input video signal and resize it randomly. I used two simple ‘random’ nodes, one for the pixel width and one for the pixel height, with min/max values of 10-400 and 10-300. Now let's stop here for a moment. When I scale the image down, the result will look like this:
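The random-size step boils down to picking a fresh width and height each time. Here is a tiny Python sketch of that idea (the ranges match the two ‘random’ nodes above; the function name is mine, not QC's):

```python
import random

# Ranges matching the two 'random' nodes: (min, max) for width and height
WIDTH_RANGE = (10, 400)
HEIGHT_RANGE = (10, 300)

def random_target_size():
    """Pick a random downscale size, like the two 'random' nodes do."""
    w = random.randint(*WIDTH_RANGE)
    h = random.randint(*HEIGHT_RANGE)
    return w, h

w, h = random_target_size()
```

Each frame (or each bang) you would call this again, so the image keeps jumping between sizes.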

This image is a bit blurry, and for a more glitchy result I want to see the pixels more clearly. Like this:

Both images are downscaled to 80×60 pixels, but for the first image I used the default pixel interpolation, while for the second I used nearest neighbor. A simple ‘core image’ node can solve this problem without writing code. Click on the ‘core image’ node, go to ‘Settings’ (Command+2) in the Inspector window and enable ‘Show Advanced Input Options’. Now click back to ‘Input Parameters’ (Command+1) and change ‘linear’ to ‘nearest’. Now we just need another resize to bring the image back up to 800×600.

Then let's work on the motion detection part. There are many ways to do this (optical flow by Vade, for example), but I would like to use the simplest solution: take the difference of two frames. For this, use a ‘queue’ node (size 2) and a ‘structure index member’ (index: 0). How does it work? I feed my video signal into the queue, which generates a structure of images, in this case a structure with two members (0 and 1). Then I pick the first image from the structure. Now I just need to compare it with the original image, so I use a ‘difference blend’: the original image goes in as the background and the structure index member's output as the other image. Voilà, we have the motion image.

As you can see, there is a ‘signal’ node connected to the ‘Filling’ input of the queue. For me the motion image was vibrating a bit, so I set the ‘signal’ to give a bang every 0.1 seconds. Not necessary, but handy for a better result.
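The queue plus difference blend boils down to "subtract the previous frame from the current one, per pixel". A sketch of the same idea in Python, with grayscale frames as flat lists (the class and method names are mine, not QC's):

```python
from collections import deque

class MotionDetector:
    """Two-frame queue plus per-pixel absolute difference,
    mimicking the 'queue' (size 2) + 'difference blend' setup."""

    def __init__(self):
        self.queue = deque(maxlen=2)   # like the 'queue' node, size 2

    def feed(self, frame):
        self.queue.append(frame)
        previous = self.queue[0]       # 'structure index member', index 0
        # 'difference blend': absolute per-pixel difference
        return [abs(c - p) for c, p in zip(frame, previous)]

det = MotionDetector()
det.feed([0.2, 0.5, 0.9])             # first frame differs from itself: all 0
motion = det.feed([0.2, 0.7, 0.4])    # -> roughly [0.0, 0.2, 0.5]
```

Pixels that didn't change come out black, pixels that moved come out bright, which is exactly the motion image we want to threshold next.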

2. Let's create the mask from this image using a threshold. As we quickly discover, there is no built-in threshold in Quartz Composer, so we have to find one. There are various custom plugins (two of them are really cool), but we can use the simplest ‘Core Image’ code:

kernel vec4 multiplyEffect(sampler image, float threshold)
{
    vec4 px = sample(image, samplerCoord(image));
    // product of the colour components
    float a = px.r * px.g * px.b;
    // Test if the product is below the threshold: 0 if it is, else 1.
    // float b = (a < threshold) ? 0.0 : 1.0;  // alternative form
    float b = step(threshold, a);
    vec4 px_c = px * b;
    px_c.a = b; // alpha component for the mask (not strictly necessary)
    return px_c;
}

I found this snippet on the Kineme forum, but I have no idea who posted it. Sorry about that.
I would add an interpolation to the threshold amount. What is really important: the minimum value has to be 0 (this will clear the feedback loop).
The thresholded image will be our mask image. So let's turn to the final step (and a special thanks to WordPress for the different fonts).

3. For the feedback I will use a custom plugin from Noise Industries. They write effects for Final Cut Pro, Avid and After Effects, mostly built on QC, and they let us use several of their custom plugins.
Let's choose the one called ‘dissolve with mask’.

We have three inputs:
1. Image: the output of the ‘Accumulator’ (don't forget to adjust the pixel size). Hook the output of ‘dissolve with mask’ back into the input of the ‘Accumulator’; this generates the feedback.
2. Target image: the original image (rescaled to the right size, now 800×600).
3. Mask image: the threshold's output.
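As far as I can tell, ‘dissolve with mask’ is just a per-pixel linear blend, and the feedback comes from writing the result back into the accumulator each frame. A hypothetical Python sketch of the loop (grayscale pixels; I am assuming mask = 1 selects the target image, which may differ from the plugin's actual convention):

```python
def dissolve_with_mask(image, target, mask):
    """Per-pixel blend: where mask is 1 take 'target', where 0 keep 'image'."""
    return [i * (1.0 - m) + t * m for i, t, m in zip(image, target, mask)]

# Feedback loop: the previous output is fed back in as 'image'.
accumulator = [0.0, 0.0, 0.0]   # the 'Accumulator' starts out black
target = [1.0, 0.8, 0.3]        # the live camera frame
mask = [1.0, 0.0, 1.0]          # thresholded motion mask
for _ in range(3):              # three 'frames'
    accumulator = dissolve_with_mask(accumulator, target, mask)
# Masked (moving) pixels follow the target; unmasked pixels stay frozen:
# accumulator == [1.0, 0.0, 0.3]
```

This is also why the threshold's minimum has to reach 0: a fully black mask keeps the accumulator untouched, which is what clears the feedback.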

Special thanks to Andrew Benson, who wrote a very similar tutorial for Jitter that I read a couple of months ago and have now put to heavy use.

CurlyCode - Screenplay

15 04 2011

The Screenplay event is over, and it was a great success: lots of guests, good vibrations, and sometimes not enough seating. We started with a solid podium discussion at 3pm and finished with a solid after party at 3am, with total craziness in between.

Podium discussion (from left to right: me, Mia Makela, Sandra Neumann, Aude Francoise, Yro, Can Togay); on the screen, Mr Greenaway.

At the beginning of the night I played my CurlyCode performance with live music by Anorganik.

And here is a quick edit of the performance:

CurlyCode performance

6 04 2011

This will be the first official performance with my 3D realtime drawing application. I started this project two years ago, but now I am making an improved version with several new additions.

The performance takes place during the Screenplay event this Saturday. The music is by one of the earliest Hungarian electronic musicians, who also lives in Berlin (Gabor Deutsch - Anorganik, Raster).

CurlyCode links:


Screenshots from the beginning.

Creative application article

And here is the first video presentation that I made with CC: