Category Archives: Blog

250126 – Further adventures in film photography…

Well, for some reason I wanted to do some more film photography, so I got myself a bunch of rolls of various forms of black and white stock. I also got a fresh bottle of Cinestill DF96 (because I am a bit cheap and a single monobath is still the most cost-effective option).

The thing I mainly wanted to try out with this first roll was slow film. So I loaded my camera with some Ilford Pan F 50. I wanted to go further down the ISO range, so I rated it at 25, pulling it a stop. And I started shooting.

To my dismay, I got about a dozen shots in… And I couldn’t ignore one thing. The thing being… that the roll-crank wasn’t moving when I cranked forward each new shot. I’ve seen this happen before… and I got saddened. It meant I had fumbled the loading. I opened the thing up and sure enough. Not a single exposure had landed on a fresh bit of film. I mean. I am a bit miffed about it. I mean. I had gotten some great shots of moose-poo, I thought!

Oh well. Reload properly. Some test shots. Yup. Now the thingy moves as it should. And a few days later. I have the roll fully exposed, I have it developed. And I start to scan.

And to tell the truth. Yes. I really liked the results… for the first half. But for the other half… Where I really committed to using my red filter to try to get extra contrast. Wow. There was pretty much nothing there. I mean. Out of 36 shots on the roll. I have deemed 21 sort of fit for public consumption. But really… the last few… No. Not only did I lose all sorts of good contrast. But I also started seeing splotches from something in the developing. And there are some light leaks during the long exposures…

All in all… Let’s call it a learning experience.

Not quite sure though what I’ll load with for the next roll. I am tempted to do Ilford Delta 3200 at 1000. But I will need those rolls later on next week. But knowing myself.. I probably won’t be shooting much on film until then anyways.

Anywhoo. Here be the results that I am willing to share.

Replicating the Ultra Q title sequence!

So, I was looking around at old shows on my own server. And I suddenly got really really obsessed with the title sequence for Ultra Q. The pseudo-anthology monster show that would later, in its second iteration, become the more well known Ultraman.

Seeing this title sequence, I can guess that it was probably made with a big tub of colored paint that had the title on it, with a couple of spinning things at the bottom scrambling it while an upside-down camera captured the distortion. The result, when strung right side up in the final print, would then be shown as reversed footage: it starts as a swirly mess and magically unswirls into the intricate design of the title.
At least, that’s how I guess they did it. And it just stuck with me. I NEEDED to replicate it. Somehow. So… I opened Fusion. And. A few hours later… I got a fairly convincing result out of a few dozen nodes.

I’m not quite sure what I’ll do with this newfound skill. But at least I could satisfy my need to replicate an ancient title sequence.

Also. I had found the show in two forms. Both as a colorized version and the original black and white. Both looked reaally cool in their own right. So I did my Fusion renders in both color and black and white as well.

Feel free to reach out to me here on the webpage with a comment if you want me to go further into how I did this replicated effect.

A Pseudo-Unboxing

Just had an idea while driving home after picking up a package with a couple of Blu-rays from Vinegar Syndrome. It turned into a week-long project filling up my spare time.

Does this mean I will be starting reviewing films? Will I return to that series of reviews that, after over a decade, still gets most of the views on my channel?

Maybe? Probably not. But still… ? No… I do not promise anything.

240317 – Messing about in Fusion

So, I was sitting at the computer. Watching clips on YouTube (as one does), and up popped a video that had a certain clip from a certain Japanese show where young teens are forced to evangelize the birth of neon. And I thought. Huh. I could probably do what I saw in that clip.

Okay, I may not be able to do the crisp character animation of the Eva Unit 01. But I was not thinking about that. I was thinking about the background. The red and yellow paint flowing past at incredible speed in the background. Reminding me of when filmmakers with little regard for their own safety film close-ups of volcano eruptions with a telephoto lens.

That. I think I can replicate that, at least.

So I opened Fusion and started connecting nodes. And the result was this.

Which resulted in this:

Ok. I couldn’t resist putting it angled over the virtual camera. And as there’s no foreground animation I went ahead and made the colors more contrasty.

All the animation in it is procedural. It’s basically just a few fast noise nodes that have been put through some distortions and colorizations. The only thing making it all move is a single expression that the noise nodes are linked together with.

Point(0.0, time*(2/3))

It moves the canvas of the noise upwards by 2/3 of the total frame height for every frame.
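
If it helps to see that as numbers, here’s a rough Python sketch of what the expression evaluates to frame by frame (the assumption that the coordinates are normalized, so that 1.0 means one full frame height, is mine):

# What Point(0.0, time*(2/3)) works out to, frame by frame.
# Assumes normalized coordinates where 1.0 = one full frame height,
# and that "time" is simply the current frame number.
for frame in range(5):
    x, y = 0.0, frame * (2 / 3)
    print(f"frame {frame}: offset = ({x:.1f}, {y:.2f})")

So the noise canvas just keeps sliding upwards as the frame counter ticks on.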

The eagle eyed of you might have noticed that there indeed are two saver nodes in that node tree. The other is there because I tried extracting the yellow with a color keyer and put it through an XGlow node from the Reactor toolset (please, someone pry me away from XGlow nodes! I love how they look but they take up soooo much time in my node trees! ;D)

The result reminded me of some kind of old school space battle where streaks of lasers burn through the view. Or maybe a kind of atmospheric re-entry of a vehicle. Anyway. It just looked plain cool.

So I had to give it its own render:

I’m not sure what I’ll use these for. It was mainly just an exercise to see if I could put my money where my mouth is, so to speak. So when I say I know how to do that, I can be sure that I actually can.

Oh, and I’m counting these toward the weekly upload pledge that I am failing so miserably to fulfill. I may know how to do videos, but I struggle with stuff like weekly uploads.

So… anyways…

Be seeing you!

24 Screams of Christmas

Just in case someone interested missed it. During most of December I uploaded a series of YouTube videos that charitably could be called an “Advent Calendar”. Each day there was Herr Nicht Werner and each day he presented another scream. Some other shenanigans did ensue. But that was the main thing.

I welcome you now to watch this whole series. Each episode is just a few minutes. So you can probably get through the whole thing in a single sitting.

If this is not actually to your particular fancy… I… I do not know why you are here. 🙂

Oh, and if you wonder why it’s 24 and not 25. Well, here in Sweden, we celebrate eves, not days.

Take care now! Bye bye then!

MalmScope – Or the Constant Resolution Solution to the (nonexistent) problem of Aspect Ratio Sizes

DISCLAIMER:

The following is the text I have been using for a planned video that never seems to materialize. I have now decided to simply just make a blog post on this blog that I rarely update. One of these days I might actually finish the accompanying video. And on that day, I will add it to this blog post. But as of writing this preface/disclaimer, I just want the drfx up on the blog so I can point people to it.

TLDR: the block of text references a bunch of stuff that was supposed to be in the post. But in fairness, there is only one downloadable file present, so I put it up here at the top. Read on to find out what it does, why I made it and how it was made.

Please do enjoy the following diatribe.


Ok. Look. Look at this. This. This is a solution to a problem. But, the thing is. It is not a solution to a global life threatening problem. It is not a problem connected to the current special military operation in Ukraine (the war), it’s not the ongoing economic turmoil faced by the Chinese financial world that could spill over to the world market. It’s not even one of those solutions that is in need of a hitherto not found problem. No. It is far worse. It is a problem. A problem no one actually cares about. Except me… I think.  

Indeed. This solution even makes for a situation that is probably worse than what we have today. But it is a solution to a problem that has been bothering my mind for quite a few years now. Let me explain.

Here’s the thing. The thing. The problem. 

I love movies. I’ve watched them all my life. I have been enjoying them as they are presented. And as I matured to be a teenager, I even got some bizarre urge to make some myself. And so I began to search out filmmaking theories and practices. It became slightly obsessive. And as my own collection of movies grew, I jokingly realized I was buying DVDs, and later on, Blu-rays, not so much based on how good they were as movies, but rather, what I could learn from studying them. And more importantly, their supplements. The extras and featurettes, the commentaries, the interactive menus and the bonus features.

And one of the earliest things that I started to think about was… the aspect ratio. The shape of the image. And what we as viewers get to see when we pay to watch these cinematic extravaganzas. Is it what we are supposed to see? Or is it cropped? And… is it really worth cropping an image to fill the finite pixels on screen?

Yes. Full screen. Pan and scan. Anamorphic widescreen, letterboxes and pillarboxes. And over the years. I started to catalogue my findings and see repeating trends. And as IMAX started to partner up with movie studios to make thrill rides in a format mainly used for educational science projects. One of these recurring things recurred. Namely. The shape of the screen became a selling point once again. It was the widescreen debate again, just like in the fifties, and in the silent era, and the late 90s. But now in reverse. We had gotten used to Hollywood presenting things in cinemascope. And it was about as wide as we could possibly tolerate. There were wider options, sure, but Scope became the normal wide alternative. So now we were sold movies where the main point is the verticality they can let you see. And it is always illustrated in the same way. You have the biggest picture. And you crop it to show how much you lose by watching a film in a lesser venue. And…

And simply it did not sit right with me. Why? Because. They are using the shape to sell a size. And it is always with the assumption that the originating format is the intended format and that showing all of it is the intended thing. And that the originating shape corresponds with the biggest size that a cinema can offer. None of those things are necessarily true. 

You have seen these illustrations, I am sure. They show a full IMAX frame. And overlaid on it are these markings that show how much is cropped for each way you can watch it. Usually it has full 15-perf IMAX, 5-perf vertical 70mm and 35mm anamorphic, and DCI for 2K and 4K. And. Yes. Looking at it like that. It makes you wonder why not all movies are presented in 1.43:1 IMAX. But. Remember. The exact same tactic was used to sell us on widescreen releases back in the day. Only then, it had a Cinemascope full image as the base and from that you’d show the crops for 16:9 and 4:3. Look at that with the same logic and you’ll say, why not make all movies in Cinemascope and show the full width. And that’s not even mentioning productions shot on super35 where the negative is 1.37:1 and the result is cropped to 4:3, 16:9, 2.35:1 or whatever the filmmakers want. The Matrix and Independence Day and Terminator 2 and on and on. (Man, my references are old). These were released with full-width scope prints to theaters and on home video they used more vertical space to fit 4:3 TVs without the usual compromises that a strictly widescreen negative entails when doing the conversion.

So. Going back to my solution to a problem that no one cares about. The problem is what I have touched upon here. That, when we discuss aspect ratios, that is, the shape of the image. We always assume there is this biggest shape that fits the mastering medium and we crop from that. So. If the mastering medium is 35mm cinemascope. Going to any other shape will always mean the image gets smaller. Same for 4:3 CRT tubes and 1.43:1 IMAX film. You get a smaller image than you could by filling the screen. It is all very logical. I think it is. Or is it?

Or does it have to be like this? 

What if we could choose the shape not based on whether it would be bigger or smaller. But. What the shot actually needs? I mean. Even in the cases where a movie storyteller wants to play with the shape. It is always done only in one dimension. The other dimension is always constant. Even when they venture out to muck about with both variables. It is always assumed that we should make each aspect ratio as big as possible for our distribution media. That we should max it out. When Wes Anderson made that film about a quirky adventure at a hotel. We got both 4:3 and 2.35:1. But they were maxed out and when they had sections in 1.85:1 it was basically full screen 16:9. This made both the 4:3 and 2.35:1 sections feel smaller than the 1.85:1 sections. I didn’t feel the width of 2.35:1, likewise I didn’t feel the height of 4:3. Because I was reminded that 1.85:1 was just as tall and wide as both of them together. 

So. Ok. Now I am going to propose part one of my solution. Which is… learn to accept window-boxing.

Yes. Window-boxing. 

For those unfamiliar with the term. It is the bastard stepchild inbred sibling of the more well known ways of presenting cropped images in film. 

Letterboxing is when you reduce the height but keep the width. It got its name from how it looked as if you were peering into a home through a letterbox. Yes, children, in them olden days. Mail came on physical paper to your home, through a hole or a box. Sometimes that hole was in your front door and it was apparently so common to peer through them that people understood what you meant by “letterboxing” an image. Kind of a creepy situation, now that I think about it. 

Pillarbox likewise keeps the height but puts black bars on the sides. It looks as if you watch things between pillars. These names are very creatively chosen, indeed.

Windowboxing is when you combine the two. You get a black frame all around. And since you normally have no excuse to do this (especially since we could get away from overscan on televisions), it is looked at as a sign of a mistake. Because you are essentially just wasting valuable screen area. Traditionally, even if you crop both the width and the height of the recording medium. You still usually would want to scale it up to fill out the target ratio with either pillar or letterboxing. 

These are the accepted facts. 

My proposal is to re-evaluate the third one. To make windowboxing acceptable. Make it work better than what we think we should do. 

To get back to The Grand Budapest Hotel. I am proposing that Wes Anderson should have windowboxed the 1.85:1 segments. That way. When it switches to scope. You get a wider view. And when you have the academy ratio, you get a taller image. You get the best of both worlds. 

Ok. But should the 1.85:1 bits really be the smallest ones? Confined by the height of scope and the width of academy? No, because then you are using the same old thinking I want to get away from. 1.85:1 should be just as significant on screen as the other shapes. 

So… that’s the rub… that question. If it should be windowboxed. And the purpose isn’t to make it feel smaller than the other ratios. How small should we make it? Between this and this. What is the appropriate scaling? Well. That’s the second part of my ruminations. That’s where my mathematical figures play their part. 

So. I started to dabble around in various forms. Using several techniques to try to get something sensible out of this nonsensical task I wanted to complete. 

At first. I went for the naïve approach. I took a canvas. I put in the widest and tallest ratios to get the extremes. And I drew a straight line between the corners. I then made crop guides where each aspect ratio between the two extremes was touching the guidelines. Ok. That is one result. But it did bother me. Doing it like this did not get me the actual pixel dimensions. I would always need to draw that guideline to calculate the dimensions of each ratio visually. And I wondered if this even was a fair approach. Is this really a way to get an image of equivalent size when comparing the ratios?

As a side thing I also experimented making the guideline into a curve. Trying to mimic the intersection as if it was made with an ellipse instead of a rotated rectangle. I made a bunch of those crop guides and while it was a nice collection of rectangles it still felt like a very imprecise method to go about this. 

I am a subscriber to Matt Parker’s channel. I wanted the impartiality of science. I wanted the dimensions not to be arbitrarily chosen. I wanted the assurance of… maths! And maybe a Klein Bottle…

So to make this problem into something solvable by maths, I needed to boil it down to variables and constants. And I needed to decide what I wanted the solution to adhere to and fulfill.

So. To start things off. I searched my feelings. I let go. I made the first decision based on logic. Since both height and width in this comparison are variable, the one thing that can be constant between the shapes is the resulting resolution, the total pixel count.

X and Y in this equation are therefore unknown, but Resolution is known, because it is derived from the known X and Y of another resolution.

So. We can now have this:

originalX * originalY = Resolution

And:

newX * newY = Resolution

In those two equations only newX and newY are unknown. And Resolution is the same in both. 

I know. It’s not exactly quantum maths. But here’s where MY brain got stumped. 

If we keep the numbers tiny, we can have an example like this. With a 4:3 ratio being converted to a 6:2 ratio with a constant resolution:

4 * 3 = 12

newX * newY = 12

It will work if newX is 6 and newY is 2. Because 6*2 = 12. The numbers are tiny enough that you can guess the right result. And I made it easier on myself by saying 6:2 instead of 3:1 even though the two are mathematically equal. 

But. Let’s throw it into a real world scenario. 

4:3 in a 1080p master is commonly shown as 1440 wide and 1080 high. Now let’s say you want to show a 1.9:1 ratio with the same amount of pixels as that 4:3 image.

So. Let’s populate the equation:

1 440 * 1 080 = 1 555 200

newX * newY = 1 555 200

Ok. Now it gets harder to just guess what newX and newY should be to both have an aspect ratio of 1.9:1 and that 1 555 200 resolution.

Again. I was stumped. For years I couldn’t figure it out. My brute force solution was to… just brute force each ratio. Yes. I would simply just type a vertical number into a calculator and multiply it by the ratio and make a spreadsheet of the results. Adjusting the height of each try until the result was as close to the target resolution as possible. Until I would land at the result of:

1716 * 906 = 1 554 696

But that is such a terribly inefficient method. Again. It’s not exactly rocket science. And I do enjoy a good spreadsheet at regular intervals. I should be able to get there quicker than just trial and error.

So. Years passed. I had basically given up. My daytime job had some restructuring. I found I had an opportunity to take some classes. I took maths with an entirely unrelated reasoning. But this problem lurked away. Maybe I could tackle it one day. And shortly after I had a literal Heureka moment. In a shower, not a bath. But. I stood there slack-jawed. Holy carp. THAT’S IT!

To put my old thinking into context. Since newX and newY are written out as two variables, I had thought of it as a two-variable problem, and as such the solutions involved graphs and plotting and could give two answers where only one would be relevant. BUT!

The realisation that struck me is that newX is not independent of newY. No, newX can only be one thing for each newY. Yes, newX is completely calculated by newY*newRatio. So therefore…

newX = newY*newRatio

So this

newX * newY = Resolution

Is exactly the same as:

(newY * newRatio) * newY = Resolution

Yes, I know the multiplication marks are a bit redundant with the parentheses. But I prefer to be overtly clear about these things. Nothing should be considered as given. If I can misunderstand it, I WILL. And I don’t want that.

Anyways. And since I know what newRatio is. (It’s the x of an x:1 aspect ratio; that’s easy to calculate by just dividing width by height. 16:9 is 16/9, which is 1.78, roughly, and you just put :1 to the right of it.) I now have reduced the problem to one with only one variable. As long as I can find out what newY is, I get newX for free!

So. With basic algebra I restructured it so all the knowns are on one side and the sole unknown is on the other. 

So, 

(newY * newRatio) * newY = Resolution

becomes

newY^2 * newRatio = Resolution

Which becomes 

newY^2 = Resolution / newRatio

Which finally is 

newY = sqrt(Resolution/newRatio)

And to put that to the test we take the example: 

1 440 * 1 080 = 1 555 200

Solving with 1.9:1 aspect ratio:

newY = sqrt(1 555 200 / 1.9)

Which is 

newY = 904.724…

And since

newX = newY * newRatio

We have

newX = 905 * 1.9

So.

newX is 1719

And 

newY is 905. 

And as such, I ran around the town streets, naked, flailing my arms about, shampoo and lukewarm water getting everywhere. Laughing maniacally. It is done! It has been solved! I can now get new accurate dimensions at any ratio while keeping the resolution constant! All in one beautiful equation! Ok, it’s two, but still! And dodging more police officers than I really thought would be out at this time of day. By the time I realized where I was and what state I was in, I was already caught. I had been fitted with a nice new snug but kind of constricting jacket. And now I was transported to a fine facility where I was told I would be greeted by specialists in fields that would be beneficial to my current predicament. Top men, they said reassuringly… top… men…

A few court cases later where crying children and angry parents on witness stands really wanted me to stay indoors for the foreseeable future, I was nevertheless let go. Deemed maybe not completely mentally sane, but at least not a danger towards myself and otters… I mean others! Or maybe I meant otters. 

Nonetheless. While my story here may have been rambling, and in a few cases… exaggerated… that is largely how I ended up with this formula. Using it, you can get within a couple of pixels of the dimensions needed to take one source image’s resolution and make another aspect ratio while keeping the resolution constant.

I put it in a Google Sheets doc (Disclaimer: This sheets doc is not made public yet) to make the process even more automated.

Now. The pedants out there probably did notice I fudged the numbers slightly. But that was only to make the pixel dimensions even (since computers hate odd numbers in general) and to get the total resolution below the source resolution instead of above it, for general neatness.
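
And if you would rather have a script do that fudging for you, here’s a minimal Python sketch of the whole formula (the function name and the round-down-to-even choice are my own illustration, not anything from the sheet):

import math

def constant_resolution_dims(src_w, src_h, new_ratio):
    """New (width, height) with roughly the same pixel count as src_w * src_h."""
    resolution = src_w * src_h
    new_h = math.sqrt(resolution / new_ratio)   # newY = sqrt(Resolution / newRatio)
    new_w = new_h * new_ratio                   # newX = newY * newRatio
    # The fudge: round both down to the nearest even number, which also keeps
    # the total at or just below the source resolution.
    return int(new_w) // 2 * 2, int(new_h) // 2 * 2

print(constant_resolution_dims(1440, 1080, 1.9))   # (1718, 904)

That lands a pixel or two away from the numbers I rounded by hand above, but well within the couple-of-pixels tolerance I mentioned.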

Now! In conclusion!

You may be wondering. Where does this formula take us? If I did persuade you in the first bit. How do you use it? Should you bother? And are there game-breaking pitfalls when using it? What does this mean for viewers at home and in cinemas? Can we reevaluate hardware in current setups?

First off. If the story gains nothing from playing with the shape of the screen. Do not bother. Pick one shape. Keep it maximized throughout. This whole thing only really makes sense if you are intending to mix ratios and are willing to open the Pandora’s box of issues viewers will think they have when watching something mastered in the way I propose. Remember the nighttime battle in Game of Thrones? Some of you are still bitter about it. I assure you, there will be viewers that will make Game of Thrones fans look meek and compliant if you mess with the eldritch horrors of windowboxing.

But, if you decide to sign my waiver of responsibilities. How do I propose you use this formula? Well, here’s my suggestion:

  1. You decide what shape the shot needs to be. 
  2. You shoot it in a way that ensures that there are as many pixels as possible in your budget after cropping to that ratio.
  3. You make a windowbox that suits your master frame (usually this is one of the broadcast or projection standards). You use the base resolution in the formula and derive from it the new dimensions. 
  4. You scale and frame the source video to fit that window-box. 
  5. Go to 1 for the next shot. Rinse and repeat. 

Yes. That is how easy it should be. But ok. I get it. Most people don’t have time to build window-boxes for each shot-shape. And I mean… only a madman would spend the man-minutes needed to create a bunch of them and upload them in a big package to the internet…

Yes. 

Yes I did. I did set up tons of vector shapes in layers in Krita and used the formula to get dimensions for each aspect ratio I could think of. See the link in the description to find that zip-file. (Disclaimer: I never did upload them… sorry) They are very simply built. Just a black frame around the white. To make the white transparent you can, in your NLE of choice, simply use an appropriate blending mode. I do prefer Multiply. If you know of a better one, add it into the desert of the comments.
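
(And if you would rather script those crop-guides than draw them in Krita, here’s a minimal sketch of how one could be generated with Python and Pillow. The function name, the file name and the 4:3-at-1080p base are just my illustration, not how the actual zip was made.)

import math
from PIL import Image

def make_windowbox(master_w, master_h, base_w, base_h, new_ratio, path):
    """Black master frame with a centered white window that has roughly
    the same pixel count as the base_w * base_h reference shape."""
    resolution = base_w * base_h
    win_h = int(math.sqrt(resolution / new_ratio)) // 2 * 2   # newY = sqrt(Resolution / newRatio)
    win_w = int(win_h * new_ratio) // 2 * 2                   # newX = newY * newRatio
    guide = Image.new("RGB", (master_w, master_h), "black")
    guide.paste(Image.new("RGB", (win_w, win_h), "white"),
                ((master_w - win_w) // 2, (master_h - win_h) // 2))
    guide.save(path)

# A 1.85:1 window with the same pixel count as 4:3 at 1080p, inside a 16:9 HD frame.
make_windowbox(1920, 1080, 1440, 1080, 1.85, "windowbox_1080p_185.png")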

As you may see in the file structure, these crop-guides are organized in folders according to the image resolution. And under those there is, for most of them, a level of folders which corresponds to the master frame of the system. To help you navigate these I have made this spreadsheet. It shows what the dimensions are for the resolution for each and every aspect ratio.

It should also be noted that in order for these to work as intended, you need to add them to the target timeline with no added scaling. They should be centered and, pixel for pixel, at 100 % scale. In Resolve (my favourite), this can be set for the whole project in Project Settings > Image Scaling > Mismatched Resolution Files > Center Crop With no Resizing. To have it specific to the timeline in question, you can look in the timeline settings for the same setting. And you may even want only these windowboxes to behave this way, while the actual filmed footage should resize to fit the master frame for ease of use. So you can override the timeline and project settings for single clips by selecting the clip, opening the Inspector, and looking under Retime and Scaling. Set Scaling to Crop to retain the 1:1 pixel scaling of the source file.

And for those that wondered about number 2 in that list of suggested steps. That’s very dependent on the camera you have access to. You can, for instance, in some cameras, gain extra pixels on the vertical dimension by setting the camera to film in 4:3 mode or similar. On my own Panasonic GH4 I can use that 4:3 mode by going into the menu and finding a cryptically named setting that was intended to be used with anamorphic lenses. So with it I can get more vertical pixels for aspect ratios that are horizontally narrower than 1.54:1 and maximize the resulting resolution post-crop. For example. If I intend to shoot on that camera and I plan on cropping the sides to IMAX-shaped 1.43:1, I have basically two options. Either use UHD 3840×2160 recording and crop to 3088×2160 to get a maximum resolution of 6 670 080. Or I can use that same 4:3 mode to record 3328×2496 and crop the height to 3328×2328, which has a resolution of 7 747 584. Yes. You gain a whole megapixel by choosing an appropriate recording setting.
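
(That comparison can also be scripted, if you want to sanity-check a recording mode before committing to it. Here’s a quick Python sketch of the “which mode keeps the most pixels after the crop” arithmetic; the rounding may land a row or two away from my hand-picked numbers above.)

def pixels_after_crop(rec_w, rec_h, target_ratio):
    """Largest pixel count left after cropping a recording to target_ratio."""
    if target_ratio >= rec_w / rec_h:
        return rec_w * int(rec_w / target_ratio)   # keep full width, crop the height
    return int(rec_h * target_ratio) * rec_h       # keep full height, crop the sides

for name, (w, h) in {"UHD 16:9": (3840, 2160), "4:3 anamorphic mode": (3328, 2496)}.items():
    print(name, pixels_after_crop(w, h, 1.43))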

But I do digress.

Especially as I now have gotten hold of a Blackmagic Pocket Cinema Camera 6K 2nd Gen which has very different ratios and resolutions to play with. Oh what fun.

Just look into your documentation that comes with the camera to find out what settings are best for you. 

So wait. Why do I even have all these folders of slightly different resolutions in that package of windowbox cropguides? Surely there should be a method that involves even less end user input? I mean. I can almost hear you now. You look through the files and you find that the specific shape you need is not there. You ABSOLUTELY POSITIVELY must use a shape of 1.47:1 and you need it for your timeline resolution of 1371×99999 and neither 1.43:1 nor 1.50:1 will be acceptable for your pixel-peeping eyes. Surely. There cannot be a solution for you to be able to choose ANY source resolution and target ratio? Surely! To spend hours of my life building something that spits out the correct shape and doesn’t need to be in separate image files. Something that you can import into the NLE and have it do all of this for you. Didn’t I say at the beginning of this whatever-length video that this is a problem no one has, and that no one should bother about?

Yes… I built a Fusion Macro where you enter the source resolution. It calculates the correct new resolution, makes a rectangle and makes it a windowbox for any resolution and any target scale, and it can be used both in Fusion and on Resolve’s Edit page…

And I have it here for free… link in the description. 

Feel free to ungroup it in Fusion to customize things to your heart’s content.

I edited this whole mess of a video using this macro. Did you enjoy it? I did.

Now… Whatever should I do with my life when I finally have solved this ancient problem that has not bothered filmmakers worldwide for years.

I think I will lie down in my bed. And I will sleep…

And no. 

I will not do this thing for After Effects or Premiere Pro. Or Final Cut or any of the open source NLEs I have not been able to run as well as the proprietary DaVinci Resolve.

Why?

Because I can’t be bothered. It’s all in here if you want to rebuild it for other platforms. If I can do the maths, then surely most of you can as well. All I ask is that you credit me in just a text note somewhere or something. My ego wants the attention. 

Good bye now. I need to return to my other eternity projects that seemingly will never see endings. 

Please. Just go away. I need to sleep.

Oh for the love of…

(THE END)
…? 

Something Kaleidoscopic This Way Cometh

Last night I had an urge to do something kaleidoscopic. No real plan beyond that. So this is a fast noise with a duplicate node giving 100 duplicates. Constantly rotating. Interacting with each other. And the usual film treatment on top.

The sound is a drone sound where I turned on my Deepmind 12 and found that the preset it was on at the moment behaved very cool when you just held the note. So I held two low notes, and I pressed the hold key to keep them down virtually. And I just recorded the output to Audacity while manipulating the various faders and volume knob on the synthesizer during the 10+ minute runtime. Just a compressor in post to even out the sound volume as it drifts in and out. I was planning on adding more layers of sounds. But this raw evolving drone was just too neat sounding to risk drowning out.

SpaceWater (Short)

Abstract forms dance in front of a field of stars. Just an abstract experiment. Presented in Black and White with stereophonic sound in select venues.

_____________________________________

Shot with #BMPCC6KG2. #BRAW 12:1, 2.7K 120fps.
Found sounds collected with #Zoom #M4 #Mictrak.
Synth sounds created with #VCVRackV2 Sounds processed with #Audiothings #Reels, Audiothings #Springs and #Softube #TapeEchoes

Edited and graded in #BlackmagicDesign #DavinciResolve and rendered in glorious #MonoChrome #BlackAndWhite

Learning Blender 3D’s Grease Pencil – Day 1 & 2

After much temptation I have now finally started my attempts to try to learn Grease Pencil in Blender 3D. I have dabbled for a while with Blender in general. Doing some abstract models and animations. But now is the time for me to jump in and do what I have spent most of my hobby time doing: 2D animation.

This will be an intermittent series of posts where I simply document what I am doing in Grease Pencil. Following various tutorials and trying to find ways to learn this thingamajig enough to be able to call myself proficient in it.

Day 1 consisted of just getting the hang of the interface. How to draw simple lines. How to make the keyframes play in the order I want. And what better way to do that than to bring out ye olde bouncing ball. When all else fails. One never can go wrong with the bouncy ball.

Day 2 is today and I went ahead and did some more bouncy balls.

But balls are fun and all, though I wanted to try out colors. So instead of a bouncy ball, here’s a blinking ducky… thing…

Ok… I realize now that exporting these as videos might not be that great of an idea as they are very short loops. But with that ducky thingy I did find a rather nice workflow thing where I basically set up each color as a material. And I can then hot-swap them after I did the coloring of the drawings and it automatically updates on all frames that use that material/color. I mean… this is a feature I have heard of for years and it seems like a very nice thing to have when doing big projects. So in a sense, it’s basically just me being late to the proverbial party.

Oh, well..

I’ll see if I can get some more stuff through this thing.

Oh, and holy heck it’s been a long time since I did anything on this site.

210628 GoProHero9BlackSlowestMoTest

210503 – “Hey!” short

210406

210321 – Yet another sped up twitch stream

As the title says, it is another one of them. I need to set something up so I can make these on a more regular basis. And actually know what I am supposed to animate before I start to stream to an audience of… 1… I think that’s a bug… It’s probably zero viewers.

Testing out Krita – And my animation template!


Yes! Yet another reboot of the SMA-project! (rules and such)

 

Yes! You read that correctly.

In an effort to continue the streak I’ve been having with streaming (well, I’ve done the streams at least; I have yet to gain followers), I will now revive the old idea of making a feature-length movie using random chance as my sole guide.

The deck of 85 cards I started out with is still present and it’s been complemented with another one with 60 cards.

So, the rules I’m setting out for myself (subject to change if need be):

  1. The work will be live-streamed to anyone who cares to look in. (viewable on twitch.tv/jmalmsten_com )
  2. Choosing of the timecode to work on will be done with two decks of cards each streaming session:
    – Deck of 85 cards will select the minute
    – Deck of 60 cards will select the second
    Once drawn, the minute card (from the deck of 85) will not be returned to the deck until the project is done!
    Before drawing the seconds card, the timeline will be checked to see that there is no animation already done inside that minute. If there is, then the corresponding cards for those seconds will be set aside until the next full draw. (There’s a little code sketch of this drawing logic right after the list.)
    – It is then up to me as a creator to come up with what will happen during this particular point in the narrative and draw frames accordingly. This can be more than a minute’s worth of content, but beware of point 3 below.
    – The method of animation is all up to the animator at the moment. Again. Beware point 3.
  3. A new timecode to work on will only be drawn from the piles when the last timecode is fully finished. Once finished it will not be adjusted in any way until the project is finished in full. This includes all visuals. Soundwork may be done according to point 4.
  4. The soundwork for each timecode can extend to outside the animation. But once finished, it too should not be touched.
  5. The project will be created in 2K DCI Cinemascope resolution (2048×854) at 24 fps and 5.1 surround sound. The finished timecodes will be uploaded to YouTube in full HD Cinemascope (1920×800) with a 2.0 Dolby Stereo downmix.
  6. THERE IS NO RULE SIX!
  7. The Finished 4K form will be in two versions. Timeline Corrected, where the narrative flows as on the timeline of the NLE. And a second one will be created that keeps the randomised shot-order, mainly for the fun of the viewer.
  8. Two playlists will be maintained on YouTube ( youtube.com/jmalmsten ): the timeline-accurate order and the randomised chronological upload order. Gaps in the timeline will have placeholder footage.
  9. Once the project is over, a remix of the audio with voice-acting and music will probably ensue before the final director’s cut is released alongside the other two versions.
  10. I’ll probably add something or subtract… we’ll see how things go. This is a project mainly for fun anyways… (nervous laughter).
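
And since rule 2 is really just a tiny algorithm, here’s a toy Python sketch of the drawing procedure, purely to make the card logic concrete (the helper names are my own):

import random

minute_deck = list(range(85))        # one card per minute of the planned runtime
random.shuffle(minute_deck)

def draw_timecode(already_animated):
    """Draw a minute card (never returned to the deck) and then a seconds card,
    setting aside any seconds that already have animation inside that minute."""
    minute = minute_deck.pop()
    free_seconds = [s for s in range(60) if (minute, s) not in already_animated]
    return minute, random.choice(free_seconds)

print(draw_timecode(already_animated=set()))   # e.g. (42, 17)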

181101 – Just a Smile! :D

Just a smiling gif for use on Twitch streams when happy stuff happens.

181023 Twitch-stream – Corgie With A JetPack

Started out kind of aimlessly…

181004 – Livestream Test Twitch.com – Poop

I have just spent the last few hours animating a gif where a man poops his pants…