what a difference – a very useless tutorial 2.

26 04 2011

This post is going to be another super useless tutorial. 


here are the steps:

1. we will build basic motion detection with a random resize.

2. we will turn the motion image into a mask.

3. we will use this mask in a feedback loop to produce the glitchy movement.

During this tutorial I will use a fixed resolution (800×600). It is possible to use a different size or a ‘rendering destination image’, but I prefer the fixed size.

1. Let's use a ‘Video Input’ as the input video signal and resize it randomly. I used two simple ‘Random’ nodes, one for the pixel width and one for the pixel height, with 10–400 and 10–300 as their min and max values. Now let's stop here for a moment: when I scale down the image, the result looks like this:
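The two ‘Random’ nodes boil down to picking a new width and height inside those min/max ranges. A minimal sketch in Python (the function name `random_size` is my own, just for illustration):

```python
import random

def random_size():
    """Mimic the two 'Random' nodes: width in 10-400, height in 10-300."""
    return random.randint(10, 400), random.randint(10, 300)

w, h = random_size()
print(w, h)  # e.g. 237 118 - a new pair every frame drives the resize
```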

This image is a bit blurry, and for a more glitchy result I want to see the pixels more clearly. Like this:

Both images are downscaled to 80×60 pixels, but for the first image I used the default pixel interpolation, while for the second I used nearest neighbor. A simple ‘Core Image’ node can solve this without writing code. Click on the ‘Core Image’ node, go to ‘Settings’ (command+2) in the inspector window, and enable ‘Show Advanced Input Options’. Now click back to ‘Input Parameters’ (command+1) and change ‘Linear’ to ‘Nearest’. Then we just need another resize to bring the image back up to a uniform 800×600.
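The difference between the two interpolation modes is that nearest neighbor copies whole source pixels instead of blending neighbors, which is what keeps the blocky look. A rough sketch of nearest-neighbor resizing on a grayscale grid (my own helper, not QC code):

```python
def resize_nearest(img, new_w, new_h):
    """Nearest-neighbor resize: each output pixel copies the closest source pixel."""
    old_h, old_w = len(img), len(img[0])
    return [
        [img[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Downscaling a 4x4 block image to 2x2 keeps hard edges, no blur
src = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [5, 5, 1, 1],
    [5, 5, 1, 1],
]
print(resize_nearest(src, 2, 2))  # [[0, 9], [5, 1]]
```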

Then let's work on the motion detection part. There are many solutions for this (e.g. optical flow by Vade), but I would like to use the simplest one: the difference of two frames. For this, let's use a ‘Queue’ node (size 2) and a ‘Structure Index Member’ (index: 0). How does it work? I put my video signal into the queue, so it generates a structure of images, in this case a structure with two members (0 and 1), and then I pick the first image from the structure. Now I just need to compare it with the original image, using a ‘Difference Blend’: the original image goes in as the background and the structure index member output as the other image. Voilà, we have the motion image.
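Conceptually, the ‘Difference Blend’ is just a per-pixel absolute difference between the current frame and the delayed one coming out of the queue. A minimal sketch on grayscale grids (assuming 0–255 pixel values):

```python
def frame_difference(prev, curr):
    """Per-pixel absolute difference of two grayscale frames."""
    return [
        [abs(a - b) for a, b in zip(row_prev, row_curr)]
        for row_prev, row_curr in zip(prev, curr)
    ]

# Static pixels go to 0 (black); only the moving pixel lights up
prev = [[10, 10], [10, 200]]
curr = [[10, 10], [200, 10]]
print(frame_difference(prev, curr))  # [[0, 0], [190, 190]]
```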

As you can see, there is a ‘Signal’ node connected to the ‘Filling’ input of the queue. For me the motion image was vibrating a bit, so I set the ‘Signal’ to give a bang every 0.1 seconds. Not necessary, but handy for a better result.

2. Let's create the mask from this image using a threshold. As we quickly discover, there is no built-in threshold in Quartz Composer, so we have to find one. There are various custom plugins (two of them are really cool), but we can use the simplest ‘Core Image’ code:

kernel vec4 multiplyEffect(sampler image, float threshold)
{
	vec4 px = sample(image, samplerCoord(image));
	// Product of the color components: dark pixels give a small value
	float a = px.r * px.g * px.b;
	// step() returns 0.0 if a is below the threshold, else 1.0
	// (equivalent to: float b = (a < threshold) ? 0.0 : 1.0;)
	float b = step(threshold, a);
	vec4 px_c = px * b;
	px_c.a = b; // alpha component carries the mask (not strictly necessary)
	return px_c;
}
I found this snippet on the Kineme forum, but I have no idea who posted it there. Sorry about that.
I would attach an ‘Interpolation’ to the threshold amount. What is really important: the minimum value has to be 0 (this clears the feedback loop).
The thresholded image will be our mask image. So let's turn to the final step (and special thanks to WordPress for the different fonts).

3. For the feedback I will use a custom plugin from Noise Industries. They write effects for Final Cut Pro, Avid and After Effects, mostly built on QC, and they let us use several of their custom plugins.
Let's choose the one called ‘Dissolve with Mask’.

We have three inputs:
1. Image: the output of an ‘Accumulator’ (don't forget to adjust the pixel size), and hook the output of ‘Dissolve with Mask’ back into the input of the ‘Accumulator’. This creates the feedback.
2. Target Image: the original image (rescaled to the right size, here 800×600).
3. Mask Image: the threshold's output.
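The loop above can be sketched in a few lines: wherever the mask is on, the dissolve lets the fresh target frame through; everywhere else the accumulated image is held, so only moving regions get repainted. A toy version on grayscale grids (my own naming, binary mask assumed):

```python
def dissolve_with_mask(accum, target, mask):
    """Where the mask is 1 take the target pixel; where it is 0 keep the accumulated one."""
    return [
        [t if m else a for a, t, m in zip(ra, rt, rm)]
        for ra, rt, rm in zip(accum, target, mask)
    ]

# Feedback loop: the dissolve output becomes the next frame's accumulator input
accum = [[0, 0], [0, 0]]
frames = ([[9, 9], [9, 9]], [[3, 3], [3, 3]])
masks = ([[1, 0], [0, 0]], [[0, 0], [0, 1]])
for target, mask in zip(frames, masks):
    accum = dissolve_with_mask(accum, target, mask)
print(accum)  # [[9, 0], [0, 3]] - pixels from different frames pile up
```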

Special thanks to Andrew Benson, who wrote a very similar tutorial for Jitter, which I read a couple of months ago and have now put to deep use.



2 responses

2 05 2011

hey pixelnoizz, thanks for the tutorial. it is always great to learn something new from tutorials like this one.
I am wondering what would be useful points to manipulate during a live set-up? what kinds of values are good to control from within VDMX?
In my rebuilt qtz from this tutorial I have published some of the controls from the interpolation patch which drives the threshold on the “NI threshold patch”, and I have randomized (within a specific range, 10–50) the pixel width/height of the first “image resize patch”. what do you think?

17 06 2011

Completely useless! Thanks so much!
