Posts tagged: CinemaDNG
I’ve been trying out a lot of different workflows over the past year, based on system specs, software version, firmware version, etc.
Some people probably still think I use what I called the CineForm workflow. But to be honest, I haven’t used it since last summer.
When I started using a dII I was still on a 32-bit Windows system. After a couple of months of working with CinemaDNG footage I chose a Mac Mini with a Promise Pegasus 12 TB Thunderbolt RAID 5 hooked up to it to function as a media archive, hoping that Apple would soon release a new Mac Pro with Thunderbolt capabilities. I also anticipated a Windows driver for the Ikonoskop ExpressCard Reader.
When I started testing the Mac Mini, I was somewhat amazed by the power that such a silent little block of computing ingenuity could hold. After a couple of weeks I completely switched to the OSX platform, but was still using the CineForm workflow (via Parallels), because of its speed and raw format.
When summer arrived it was quite clear that we wouldn’t be getting a new Mac Pro, nor a Windows driver for the ExpressCard Reader. So I asked a hardware/software-savvy friend to build me an editing system with Thunderbolt support, running OSX.
Meanwhile, Adobe had released CS6 with SpeedGrade, and for the first time I was able to play some CinemaDNG footage in real time on the Mac Mini, with insanely detailed control over grading. It made me forget about CineForm raw.
When the Hackintosh was ready in September my workflow got drastically changed for a second time.
Let’s look at a very personal assessment, on the two systems, of the three most-used decoder/transcoder applications for ‘developing’ and grading Ikonoskop footage:
I think it’s clear from the results above what, for me, is currently the best tool for debayering, grading and transcoding my CinemaDNG footage. But it’s also clear that it depends on the system you’re using. So on my main editing system I’m now working in DaVinci Resolve 9 Lite (and that’s free, folks!). When going mobile, I’d rather use SpeedGrade.
I haven’t included CineForm in the test because I don’t use it anymore and I couldn’t get it to work in Parallels 8/Windows 8. But I tested GoPro CineForm Studio Premium for Windows just before the summer and was amazed by the new functionality for debayering raw footage. Dynamic range was much improved, and of course it was no longer a command line tool. Sadly, though, there’s no Mac version with the same raw functionality.
But then there’s John Hable at 19lights.com. I already wrote about his Ginger HDR in my last post, but in the meantime he has been working hard to improve his plug-in and I’m astonished by what he has accomplished in such a short time.
This is honestly spectacular. There’s still room for improvement (for instance, better CUDA support) and John is working on this. But this is very promising.
UPDATE: obsolete as of firmware 1.27 (March 2013)
There’s a lot of magenta in the shadows of Ikonoskop footage in DaVinci Resolve. I don’t think it has always been that way, but it definitely is the case in Resolve 9 Lite.
There are several ways to get rid of it. I offer you one solution here that’s very quick and easy. In my opinion you have to fix it before you do any color correction in the Color tab. That’s why I’ve created an input LUT that fixes the magenta issue: IkonoVinci. Well, it will fix it until Blackmagic changes something on the decoder side and/or Ikonoskop changes something on the encoder side. Then I might have to make a new one ;-)
This means you don’t have to fiddle about in the CinemaDNG settings (Camera Raw tab) in Resolve. So leave the White Balance, Color Space and Gamma settings as they are and use the camera metadata to do the decoding.
Unpack the zip and copy the folder into the LUT directory of DaVinci Resolve. On OSX this would be: Library>Application Support>Blackmagic Design>DaVinci Resolve>LUT.
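If you want to script the unpack-and-copy step, here’s a minimal Python sketch. The zip name and download location are assumptions; the LUT path is the one described above.

```python
import zipfile
from pathlib import Path

# Assumed locations -- adjust to where you saved the download.
ZIP_PATH = Path.home() / "Downloads" / "IkonoVinci.zip"
# Resolve's LUT directory on OSX, as described above.
LUT_DIR = Path("/Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT")

def install_lut(zip_path: Path, lut_dir: Path) -> None:
    """Unpack the LUT zip into Resolve's LUT directory."""
    lut_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(lut_dir)

if ZIP_PATH.exists():
    install_lut(ZIP_PATH, LUT_DIR)
```

After copying you still have to select the LUT inside Resolve, as described next.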
Now go to your Project Settings. Select Look Up Tables and, under 3D Input Lookup Table, select IkonoVinci.ilut.
Now in your MEDIA tab you’ll still see the magenta. But when you go to the COLOR tab, you’ll see it’s gone. Now you can start grading.
OK, I haven’t forgotten about this blog!
But I should’ve written this post two months ago, because I’ve been grading Ikonoskop footage all summer in SpeedGrade and I must say, at this moment, it’s absolutely the right tool for grading A-cam dII CinemaDNG.
How about an open, cross-platform CinemaDNG processing tool for developing, grading, transcoding, … your CinemaDNG sequences? Impossible? Will never happen?
A while back I wrote to firstname.lastname@example.org to ask them about their plans for their CinemaDNG workflow.
Elle Schneider answered me last week and wrote:
Our current plan is to work with Pomfort, the makers of Silverstack (pomfort.com) to create a custom workflow manager to handle DNG files that could probably be used with the A-cam or any other DNG system.
I can’t tell you any details right now, but we are indeed discussing several options for the release of the Digital Bolex.
So stay tuned!
OK, I’m staying tuned …
The fastest way to transcode CinemaDNG sequences to an intermediate is via CineForm’s DPX2CF command line tool.
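A batch driver for this could simply walk the card’s sequence folders and shell out to DPX2CF once per sequence. A hedged Python sketch follows; note that the mount point, output naming, and especially the DPX2CF argument layout are placeholders, so check them against the CineForm documentation before running for real (the default is a dry run that only builds the commands).

```python
import subprocess
from pathlib import Path

CARD_ROOT = Path("/Volumes/IKONOSKOP")   # assumed mount point of the card
OUT_DIR = Path.home() / "Transcodes"     # assumed destination

def build_commands(card_root: Path, out_dir: Path, dry_run: bool = True):
    """Build one DPX2CF invocation per CinemaDNG sequence folder.

    The argument layout ["DPX2CF", first_frame, output] is illustrative
    only -- consult the DPX2CF docs for the real syntax and flags.
    """
    commands = []
    for seq in sorted(p for p in card_root.iterdir() if p.is_dir()):
        frames = sorted(seq.glob("*.dng"))
        if not frames:
            continue                     # skip folders without DNG frames
        out_file = out_dir / (seq.name + ".avi")
        cmd = ["DPX2CF", str(frames[0]), str(out_file)]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return commands
```

Running with `dry_run=True` first lets you eyeball the generated commands before committing to a long transcode.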
But what if you don’t want to render your final print from the CineForm raw files? What if you don’t trust anything that has some form of compression?
So what if you want to use CineForm raw only as a proxy and render your final cut from the original CinemaDNG’s using the Adobe Dynamic Link workflow (AE/Pr)?
I’ve been looking at this in After Effects from all kinds of angles, but can’t seem to find a standard way of batch-linking a folder of proxies (not created in AE) to a bunch of original footage in the AE project. So I fabricated the following:
It’s really a fast way of batch-linking a bunch of proxies that weren’t created in After Effects to your source footage. But does this make sense? I just can’t believe there isn’t a more standard way built into After Effects to do this kind of thing. Is there?
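For what it’s worth, the matching step itself (pairing each proxy file with the CinemaDNG sequence folder that shares its name) is easy to sketch outside AE; the actual relinking still has to happen inside After Effects. All names and extensions below are hypothetical.

```python
from pathlib import Path

def match_proxies(proxy_dir: Path, source_root: Path) -> dict:
    """Pair each proxy file with the source sequence folder sharing its stem.

    Returns {proxy_path: source_folder}; proxies with no matching
    source folder are skipped.
    """
    sources = {p.name: p for p in source_root.iterdir() if p.is_dir()}
    pairs = {}
    for proxy in sorted(proxy_dir.glob("*.mov")):   # proxy extension assumed
        if proxy.stem in sources:
            pairs[proxy] = sources[proxy.stem]
    return pairs
```

A mapping like this at least gives you a checklist of which proxies correspond to which sequences before you start relinking by hand.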
Please enlighten me …
People are cursing over the “available” CinemaDNG workflows every day. At this time you can’t edit CinemaDNG sequences in any NLE, so you need to transcode to an intermediate or proxy first.
The tools I know of and/or have used to transcode CinemaDNGs to an editable intermediate are the following:
Yesterday I did a little test on a Mac Mini i7 with 8 GB RAM. The test footage was 59.18 seconds or 1,434 frames long and about 4.5 GB in size.
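To put those figures in perspective, a quick back-of-the-envelope calculation of what the test clip implies per second and per frame:

```python
# Figures from the test clip above.
duration_s = 59.18
frames = 1434
size_gb = 4.5

fps = frames / duration_s                # ~24.2 fps
mb_per_s = size_gb * 1024 / duration_s   # ~77.9 MB/s sustained
mb_per_frame = size_gb * 1024 / frames   # ~3.2 MB per DNG frame

print(f"{fps:.1f} fps, {mb_per_s:.1f} MB/s, {mb_per_frame:.2f} MB/frame")
```

Roughly 78 MB/s of raw frames is what any transcoder has to chew through just to keep up with real time, which is why decode speed dominates these comparisons.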
And that’s why I use CineForm. Even if you don’t want to render your edit from the CineForm raw file, but want to go back to the original CinemaDNG’s, this is still the fastest workflow (using AE/Pr and dynamic link).
What exciting news: a 2K CinemaDNG camera for $3,300 and with a roaring name - Bolex. Their Kickstarter project got funded in less than 24 hours.
Looking at their specs and pictures of the prototype I’m making some assumptions:
A very clever project indeed. I’m not all that convinced by the nostalgic retro design, but that’s just me, and at least it’s some justification for calling it a Bolex.
I think this might be a very good thing for dII users as well. If this camera gets as popular as it seems to be getting, real adoption of CinemaDNG in post software might become a reality.
Some more valuable information on RAW image processing can be found in the comments of Dan Hudgins’ videos on Vimeo. I recognized Dan’s name from some comments on the Ikonoskop forum from quite a while back. He has developed, and is still developing, his own de-Bayer and image processing software (white balance, grading, etc.).
I’ve been testing his software for a while now and still have a lot of testing to do. The learning curve for this software is quite steep at first, but I’m getting there (if I find the time).
And it’s definitely worth the effort, because there’s something amazing about the result. But more about this later, as at this time I’m not sure I can trust my eyes.
Anyhow, let’s look at an excerpt of one of Dan’s comments:
As I mentioned in the comments for the #A1 lens test reel, you cannot look at the RAW footage without grading, as it looks like this:
1) Out of focus because there has not been anti-OLPF compensation applied.
2) Very dark, almost black, at the higher ISO, because the camera records sensor-linear data: most of the information is in the lower percentage of the signal. With two stops of headroom, 18% gray is at about 5% signal, depending on the black offset.
3) Almost no color at all, because the Bayer filter in Bayer sensors is desaturated to allow higher ISO and luma detail in all pixels; the color needs to be increased through a chroma matrix tailored to the sensor type, light source, and subject matter.
4) The EI/ISO adjustment curve is absent, so the image brightness would vary from one shot to another. Part of the de-Bayer process is to soft-clip the highlight information to fit within the range of an 8-bit display. The camera shoots 12-bit linear data, which cannot be displayed on a computer monitor with the slope at the same angle as when shooting; to squeeze the tones into a viewable range, the RAW data needs to be run through a LUT that corrects for the EI/ISO used at the time the images were shot.
5) White balance: the RAW data has a green bias and no white balance. It’s true RAW data, so no matter what color light you shoot under, the sensor is recorded the same way, and the data levels vary depending on the K value of the light.
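Point 2 is easy to verify numerically: with sensor-linear data, 18% gray sitting at roughly 5% signal only becomes a plausible mid-gray once a display transform is applied. A toy illustration, using a simple 1/2.2 gamma as a stand-in for a real EI/ISO LUT:

```python
# 18% gray lands at ~5% of the linear signal, per Dan's point 2.
linear_mid_gray = 0.05

# Stand-in display transform: plain 1/2.2 gamma (a real camera LUT
# would also handle black offset, soft-clip, EI/ISO, etc.).
display = linear_mid_gray ** (1 / 2.2)

print(f"{display:.2f}")   # ~0.26 -- far brighter than the raw 5%
```

So the same pixel that reads as near-black in the untouched RAW file ends up around a quarter of full scale after the transform, which is why ungraded footage looks so dark.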
Dan is using his software at the moment for tests on a KineRAW-S8 prototype. And you must admit, it looks amazing.
A while back I found this tweet:
I’m no longer in love with the Ikonoskop A-Cam DII. That look can be achieved with any camera by lifting the blacks and dropping saturation
At first I just found it a funny statement and thought something like “well, serves you right for falling in love with a tool”.
But then it started to annoy me, because this shows once more the degree of misunderstanding about a camera that shoots RAW. The point of a camera that shoots RAW is, in my opinion (and I’m sure most will agree), to have NO look when the images leave the camera!
The point is you can go practically anywhere in developing and grading to get just the look you want in post. So if lifting the blacks and dropping saturation is what you want, you’ve got it! Same with an oversaturated ’50s kind of look, or that washed-out style that seems so hip in fashion these days. The dII is not about that. It’s about getting the high-quality images that give depth to the look you want.
If you see any similarities in the footage that can be found today, it’s rather due to the software that applies the anti-OLPF compensation, does the de-Bayering, handles black offset, etc.
Please visit this post (in Spanish) by Manuel López. He has made this great comparison of how the standard settings in different de-Bayer programs interpret the dII footage. Without any grading applied. Great stuff!
The still is a frame from Fabrizio Fracassi’s End Slate, a documentary-like short that you can download from Fabrizio’s site.