Nothing fancy-schmancy here. Just a strange thought I had to explore.
I just took the footage from my camera, fed it into a depth map node, and let it create its thing: the depth map. I then fed that into a 3D plane that’s displaced by it, with a camera and a set of lights that spin around while some sort of starfield swirls about.
In my defence… it may not be pretty, or useful. But at least it was a short test.
A discussion on Facebook got me thinking about what it would actually look like to take a wobbly video and feed it into a Gaussian Splatting program like Jawset Postshot.
So I grabbed my camera, did a really bad camera motion… and put it through Postshot.
The result is quite remarkable, considering it was a quick test. I let Postshot downscale the footage to a maximum of 1600 px and trained for 30K steps. It’s just a simple dolly-in, so that helps. But still. The tech makes me wonder what I can use it for in future projects.
For more info on Gaussian Splatting and its uses, I highly recommend Olli’s YouTube channel, with which I have no affiliation whatsoever. I am merely a fan of his work.
So, I was looking around at old shows on my own server, and I suddenly got really, really obsessed with the title sequence for Ultra Q. The pseudo-anthology monster show that would later, in its second iteration, become the better-known Ultraman.
Seeing this title sequence, I can guess it was probably made with a big tub of colored paint with the title floating on it, while a couple of spinning things at the bottom scrambled it and an upside-down camera captured the distortion. The result, when printed right side up in a final print, would then be shown as reversed footage: it starts as a swirly mess and magically unswirls into the intricate design of the title. At least, that’s how I guess they did it. And it just stuck with me. I NEEDED to replicate it. Somehow. So… I opened Fusion. And. A few hours later… I got a fairly convincing result out of a few dozen nodes.
I’m not quite sure what I’ll do with this newfound skill. But at least I could satisfy my need to replicate an ancient title sequence.
Also, I had found the show in two forms: a colorized version and the original black and white. Both looked really cool in their own right. So I did my Fusion renders in both color and black and white as well.
Feel free to reach out to me here on the webpage with a comment if you want me to go further into how I did this replicated effect.
I was quite literally sitting on the toilet thinking, “wait… what if I film at a super slow rate and feed it into the DaVinci Resolve timeline with frame interpolation set to Optical Flow instead of Nearest Neighbour for real-time playback?”
A test needed to be done, and that’s this video: just a simple video explaining what I’m trying to achieve, plus the resulting test footage.
Where we meet up with Herr Nicht Werner and see what he’s been up to since last year when he got turned into a golden orb and was sucked into a kitchen drawer.
Bonus: Making the storyboard
I’ve been livestreaming on YouTube on a semi-daily basis. This is a condensed version of those streams up to this point. They vaguely tell the story of how the script and story of the upcoming Advent Calendar were created.
I think I’ll continue to make these stream-recaps at various intervals. Just to show what is done here.
Oh, and the streams themselves are mostly sonically empty. I tend to listen to copyrighted music and videos during these streams, so the streams are set up so that only speech is broadcast. And I am not a talkative person during these streams. To overcompensate for that, this version is now filled with ungodly ear-piercing noises.
Please do enjoy what you can of it.
Bonus 2: Nicht Werner Regarding Missing Episodes
Where Nicht Werner is forced to apologize about episodes not publishing in time???
Ever wondered what it might feel like to tumble across space uncontrollably while a hellish organ-like sound pounds your ears? Well. This may not be it. But that’s what I thought it looked and sounded like as it went rather quickly from a simple test of line animations (it’s not in the video any more) to this stroboscopic faux Gaspar Noé experience.
Please do enjoy if you are so inclined. If ye do not. Then I hope future endeavors will be more to your particular liking.
And now, here’s a handful of experimental shots, trying my hand at the ancient and noble art of matte painting. But as it’s my first attempt, I kept it in black and white, to at least not have to worry about color theory and color matching.
Oh, and… no AI was used in these paintings. If it looks like poop, it’s because I paint like poop.
And those poop-like paintings were done in the wonderful free and open-source program called Krita. Video editing and sound work were done in the not-free-and-open-source (but really good anyway) app DaVinci Resolve. Source footage obtained with a BMPCC6KG2.
I just kind of thought it looked cool. Especially as I added the motion-blur node and things moved too fast for it, so it kind of broke and started to show strange squiggly lines as artifacts. So of course I put EdgeDetect and XGlow on it to accentuate them as electrical squigglies.
As usual, the bleps and blöps are made by me with whatever I have in my “studio”.
Table of Contents:
00:00 – Titles
00:29 – The main feature
10:29 – Outro
Ever wanted to be your own Wong Kar-wai? Ever wanted to try out StepFrame recording? Ever wanted to use 180-degree shutters while going slower than normal frame rates? Were you one of the few who saw my SlowRate Recording Test ( https://www.youtube.com/watch?v=paAGscJ_kxg ) from a couple of years ago and wondered how to actually do what I did?
Did you think that the video needed to be 10 times longer and filled with bad voice-recording methodologies and a constant noise that flows between the speakers continuously throughout the whole video?
Well, do I have the video for you, mr/mrs/ms person who fits all of those strange criteria!
As I allude to in the video, do not take this as a promise that I will start doing tons of exciting camera and Resolve tutorials. I may follow up with other tutorials and walk-throughs of how I do some of the stuff I do. But just as I am not a gear reviewer, I am not about to start doing consistent content. This channel has failed to go pro ever since 2006 and I’ll be darned if I start to succeed now.
About a year and a half ago, I made most of an almost hour-long video that explored how long a 4×4, single-bit video at 24 fps could be without repeating any frames. Then… somehow, through sidetracks… I kind of forgot to finish it and upload it.
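For the curious, the math behind that runtime is simple. Here’s my own back-of-the-envelope check (not a calculation taken from the video itself):

```python
# A 4x4 grid of single-bit (on/off) pixels has 16 pixels, so there are
# 2**16 possible distinct frames. Played at 24 fps without ever
# repeating a frame, that bounds the maximum runtime.
unique_frames = 2 ** (4 * 4)     # 65536 distinct frames
seconds = unique_frames / 24     # ~2730.7 seconds
minutes = seconds / 60           # ~45.5 minutes
print(unique_frames, round(minutes, 1))
```

Which lands at roughly 45 and a half minutes before a frame has to repeat, hence the almost-hour-long video.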
Now that I was cleaning out old projects from the project drive on my edit bay, I stumbled over it once more. I thought… it’s kind of a shame to let this go to the bin. So I took a few hours to do the remaining edits and gave it a noisy soundtrack. Note for sticklers of detail: the narration credits the Behringer DeepMind 12 as the instrument being played, but it ended up only being used for the first 12 minutes or so (fittingly). The rest is done with an Arturia KeyStep running the same sequence on a combo of a Moog Mother-32 and a DFAM, with the M32 going through the chorus setting of my Mackie Mix12FX because I liked how it widened the Moog’s mono signal.
So, if you have an hour to spare for looking at a video that is mostly just flashing squares in a 4×4 grid… I do hope you enjoy.
As the preamble tells it, I was reminded that the Perseids were going to be at their maximum on the night of August 12. So I gathered what gear I have and set out to try to record some of those quick light streaks.
And. Yeah. I did not record a single one.
But! Instead, I did get to use the same cameras to record the unplanned aurora borealis that was going on right above my head.
If you’re wondering why I’m in a lot of these shots: the plan was to shoot the hero timelapse shots on my GH4, as it is better at timelapse recording than my big BMPCC6KG2. The GH4 can do full long exposures of several seconds, while the Blackmagic camera can at most be pushed to 1/5-second exposures. So I set the latter up as I mention in my video on slow-rate recording: Off Speed Recording at 5 fps, shutter at 360 degrees, and timelapse set to record only every other frame. That way I get the maximum exposure out of the camera while keeping a 180-degree effective shutter, with an effective frame rate of 2.5 fps. So I rigged up a microphone and did claps so I could do in-camera narration.
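As a quick sanity check on that setup, the shutter arithmetic works out like this (just the numbers from the paragraph above, nothing camera-specific):

```python
# Shoot at 5 fps with a 360-degree shutter, then let the timelapse
# feature keep only every other frame.
capture_fps = 5
shutter_fraction = 360 / 360                 # 360 degrees = shutter fully open
exposure = shutter_fraction / capture_fps    # 1/5 s of exposure per frame
effective_fps = capture_fps / 2              # every other frame kept -> 2.5 fps
# Effective shutter angle: exposure as a fraction of the new, longer
# frame interval, expressed in degrees.
effective_shutter = exposure * effective_fps * 360
print(exposure, effective_fps, effective_shutter)
```

So each kept frame is exposed for 1/5 s out of a 1/2.5 s frame interval, which is exactly the 180-degree look at 2.5 fps.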
But the 2.5 fps footage looked better at a full 24 fps as I pivoted to capturing the lightshow instead of the shooting stars I had planned to film. And the ISO 25600 setting, while noisy, was surprisingly usable.
As usual, the noises you hear are produced and edited by yours truly. I still won’t call it music. But it’s something for the soundtrack side of things, as I scrapped the whole idea of voice narration.
Listen to Walter Murch talk about worldizing here: https://www.youtube.com/watch?v=_py6jVyOqUY
I just realized that I didn’t upload this one. It’s not too hard to watch, I think. And according to YouTube’s statistics, people want to see and hear more of/from the Zoom M4. At least that’s how I interpret the abnormal interest my viewership has in that recorder.
So, enjoy this little experiment in old school sound design.
Oh, and just to be clear: unless otherwise noted in the video, I’m using my DJI wireless mic thingies to pick up normal speech. The M4 is only used for the worldized sounds.
So, I was sitting at the computer, watching clips on YouTube (as one does), and up popped a video that had a certain clip from a certain Japanese show where young teens are forced to evangelize the birth of neon. And I thought: huh. I could probably do what I saw in that clip.
Okay, I may not be able to do the crisp character animation of Eva Unit 01. But I was not thinking about that. I was thinking about the background: the red and yellow paint flowing past at incredible speed, reminding me of when filmmakers with little regard for their own safety film close-ups of volcanic eruptions with a telephoto lens.
That. I think I can replicate that, at least.
So I opened Fusion and started connecting nodes. And the result was this.
Which resulted in this:
Ok. I couldn’t resist putting it angled over the virtual camera. And as there’s no foreground animation I went ahead and made the colors more contrasty.
All the animation in it is procedural. It’s basically just a few Fast Noise nodes put through some distortions and colorizations. The only thing making it all move is a single expression that the noise nodes are linked together with:
Point(0.0, time*(2/3))
Each frame, it moves the canvas of the noise upwards by 2/3 of the total height of the resolution (`time` in Fusion is the current frame number, and positions are normalized so 1.0 equals the full frame height).
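Outside of Fusion, the offset arithmetic can be sketched like this. This is a Python/NumPy illustration under my own assumption of a vertically tiling texture, not the actual comp; a Fast Noise node just regenerates the field at the shifted position rather than literally wrapping pixels:

```python
import numpy as np

def scrolled(texture, frame, speed=2 / 3):
    """Scroll a vertically tiling texture upward, like Point(0.0, time*(2/3)).

    `speed` is in fractions of the image height per frame; the offset
    wraps, so the pattern repeats every 1/speed frames.
    """
    height = texture.shape[0]
    # Normalized offset for this frame, converted to whole pixels.
    shift = int(round((frame * speed) % 1.0 * height))
    # Moving the canvas up means rows wrap from the top back to the bottom.
    return np.roll(texture, -shift, axis=0)

tex = np.arange(16).reshape(4, 4)   # stand-in for one noise frame
print(scrolled(tex, 0)[0, 0], scrolled(tex, 1)[0, 0])
```

At frame 0 nothing has moved; by frame 1 the top row is already two-thirds of the way through the texture, which is what makes the scroll read as very fast.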
The eagle-eyed among you might have noticed that there are indeed two Saver nodes in that node tree. The other is there because I tried extracting the yellow with a color keyer and putting it through an XGlow node from the Reactor toolset (please, someone pry me away from XGlow nodes! I love how they look, but they take up soooo much time in my node trees! ;D)
The result reminded me of some kind of old-school space battle where streaks of lasers burn through the view. Or maybe some kind of atmospheric re-entry of a vehicle. Anyway. It just looked plain cool.
So I had to give it its own render:
I’m not sure what I’ll use these for. It was mainly just an exercise to see if I could put my money where my mouth is, so to speak: to be sure that when I say “I know how to do that,” I actually can.
Oh, and I’m counting these toward the weekly upload pledge that I am failing so miserably to fulfill. I may know how to do videos, but I struggle with stuff like weekly uploads.
A series of Fusion comps where I tortured my computer for hours on end with lots and lots of glows, noise nodes, and whatever else I could think of. Loosely strung together with an exploring shape that’s sometimes a simple triangle and sometimes a 3D tetrahedron.
I might revisit some of it to show how the comps were built if viewers want it.