Should I transcode to ProRes?

Best Practices: Working with native formats

On this page:
- Importing vs Ingesting
- Understanding your options
- How long does transcoding take?
- Ensuring performance for efficient editing
- Summing up
- About these Best Practice guides

Importing vs Ingesting

There are two basic approaches for bringing footage into Premiere Pro: Import and Ingest.

Understanding your options

Native formats: If you could only edit native formats, you probably would.

Transcoding: Transcoding refers to the process of creating a copy of your file, generally for better playback.

Proxies: Proxies are lightweight copies of your source files.

Rendering: Rendering is when you combine effects with your footage during editing.

Encoding: Encoding is about compression for your final output.

How long does transcoding take?

Ensuring performance for efficient editing

There are two typical scenarios where transcoding may make sense. One is if your system hardware struggles to decode your media responsively, without dropping frames; this varies depending upon the particular codec, format, and overall system performance. The trade-off is that the file size of edit-friendly media is often larger than long-GOP (more heavily compressed) media.

Contrast this with camera recording codecs, which maximize the use of limited storage at the expense of playback performance.

How to transcode

There are two primary workflows for transcoding: transcoding in the background, or using an Adobe Media Encoder watch folder.

Background transcoding is configured through the project settings and the Media Browser ingest switch, where you make your codec choice; you can follow progress in the Adobe Media Encoder progress panel.

Adobe Media Encoder watch folders: Adobe Media Encoder can be set up on a different computer system. The editor or assistant editor can then add that media to Adobe Premiere Pro.
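Outside of Adobe Media Encoder, the watch-folder idea itself is simple enough to sketch. Here is a minimal, illustrative Python version that shells out to ffmpeg (assumed to be installed and on the PATH); the folder names and the choice of ProRes 422 are placeholders, not Adobe defaults:

```python
import subprocess
import time
from pathlib import Path

WATCH_DIR = Path("camera_cards")  # hypothetical drop folder for incoming clips
OUT_DIR = Path("transcodes")      # hypothetical output folder

def transcode(src: Path) -> None:
    """Transcode one clip to ProRes 422 with uncompressed PCM audio."""
    dst = OUT_DIR / (src.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", "2",  # profile 2 = ProRes 422
        "-c:a", "pcm_s16le",
        str(dst),
    ], check=True)

OUT_DIR.mkdir(exist_ok=True)
seen: set[Path] = set()
while True:
    for clip in sorted(WATCH_DIR.glob("*.mp4")):
        if clip not in seen:
            seen.add(clip)
            transcode(clip)
    time.sleep(10)  # poll the folder every 10 seconds
```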

So, should you edit natively, or should you transcode into another format? Well, it depends. Pretty much all of the major software packages can now edit any codec that your camera creates.

There are two main factors you need to consider when choosing your edit codec: compression type and bitrate. Most lower- to mid-range cameras record with codecs that use temporal compression, also known as long-GOP compression. The simple explanation of long-GOP is that, for each frame, the codec only stores what has changed between this frame and the previous one. In a mostly static shot, the difference between this frame and the last frame is just a few pixels, so a few pixels are all you need to store.
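To make that concrete, here is a toy Python sketch of the "store only what changed" idea, with frames represented as flat lists of pixel values. Real long-GOP codecs are far more sophisticated (motion estimation, keyframe intervals, and so on); this only illustrates the principle:

```python
def encode_delta(prev_frame, frame):
    """Store only the pixels that changed since the previous frame."""
    return {i: px for i, px in enumerate(frame) if px != prev_frame[i]}

def decode_delta(prev_frame, delta):
    """Rebuild a frame from the previous frame plus the stored changes.
    Note the catch: you need the previous frame first, which is why
    long-GOP footage resists jumping around and reverse playback."""
    frame = list(prev_frame)
    for i, px in delta.items():
        frame[i] = px
    return frame

keyframe = [10, 10, 10, 10, 10, 10]    # stored in full (an "I-frame")
next_frame = [10, 10, 99, 10, 10, 10]  # only one pixel changed

delta = encode_delta(keyframe, next_frame)
print(delta)  # {2: 99} -- a few pixels instead of a whole frame
assert decode_delta(keyframe, delta) == next_frame
```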

The issue, however, is that these codecs tend only to work well when played forward. Editing means scrubbing, jumping around, and playing in reverse, and it takes a lot more processing power to do those things quickly with a long-GOP codec, because the computer has to rebuild each frame from the frames that came before it. A high-end computer might have no trouble, but even a mid-range computer will lag and stutter when you skim through the footage quickly or jump around.

Edit-friendly codecs such as ProRes and DNxHD, by contrast, compress each frame independently, so even a mid-range computer can skip around very smoothly. The other thing that can cause issues with playback is raw video. Raw video needs to be converted before it can be displayed, sort of like decoding a codec.

Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit! The good news is that hard drives are getting faster every day, but the average external hard drive is only just barely fast enough to play back these high-bitrate files. There are rough guidelines for common data storage speeds, though there will always be certain models that underperform or overperform.
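As a rough sanity check, you can convert a codec's bitrate into the sustained read speed your storage must deliver. The bitrates below are hypothetical examples, not measurements:

```python
def required_mb_per_sec(bitrate_mbps: float, streams: int = 1) -> float:
    """Convert a codec bitrate (megabits/s) into sustained reads (megabytes/s)."""
    return bitrate_mbps * streams / 8

# Hypothetical bitrates, for illustration only:
print(required_mb_per_sec(100))     # 12.5 MB/s  -- fine on almost any drive
print(required_mb_per_sec(800))     # 100.0 MB/s -- strains a slow external drive
print(required_mb_per_sec(800, 4))  # 400.0 MB/s -- 4-stream multicam needs fast storage
```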

Shooting in log is a way of preserving as much of your camera's dynamic range as possible. It allows you to capture a scene that has both bright highlights and dark shadows without blowing out the highlights or crushing the blacks. Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film, so shooting in log can help make your footage feel more cinematic. Log footage looks flat and washed out until it is converted back to normal contrast for viewing, and the most common way to do that is to add a LUT to your footage. This means that your editor will need to apply the appropriate LUT to all of the clips when editing.

This can be annoying to manage, and it can also slow down the computer a bit, because it needs to first decode each frame and then apply the LUT before displaying it. It matters during the edit, too: the color of two shots may influence how you intercut them. A good solution is to bake the LUT into your footage when you transcode. That way, the editor is always working with footage that has good contrast and color and never has to bother with LUTs. Note that you should only do this if you are using a proxy workflow, not the Direct Intermediate workflow described below.
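If you do bake the LUT into your proxies, that step is easy to script as well. A sketch in Python, assuming ffmpeg built with the lut3d filter; the clip and .cube file names are placeholders:

```python
import subprocess
from pathlib import Path

LUT = "log_to_rec709.cube"  # hypothetical LUT file from your colorist

def proxy_with_lut(src: Path) -> Path:
    """Make a half-resolution ProRes Proxy with the viewing LUT baked in."""
    dst = src.with_name(src.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(src),
        "-vf", f"lut3d={LUT},scale=iw/2:ih/2",   # apply LUT, then downscale
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-c:a", "pcm_s16le",
        str(dst),
    ], check=True)
    return dst

proxy_with_lut(Path("A001_log_clip.mov"))  # placeholder clip name
```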

The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, often on very tight schedules. In those cases I would skip transcoding and edit natively: just a few cuts, maybe some music, a title, and I was done.

Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time. There are two common ways of working with intermediate codecs: a proxy workflow and a Direct Intermediate workflow. With proxies, you don't need to preserve full quality, so you can optimize for editing speed and storage convenience instead. After the shoot, the raw files are backed up and put in storage. When choosing a proxy codec, you want one that does not use temporal compression (aka inter-frame or long-GOP compression), and you want to pick one that has a lower bitrate.

The good news is that most editing software today can switch between the camera files and the proxy files in just a couple of clicks, so you can even go back and forth if you need to. As for which proxy codec to use, the common argument for ProRes and DNxHD is that everyone knows how to handle them. That certainly used to be true, and nowadays both codecs work very smoothly in all modern editors, including Premiere Pro. The only significant difference between the two for a proxy workflow is that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform.

Regardless of which of the two codecs you pick, you also have to pick which flavor you want. Start off with the smallest ProRes or DNx flavor at the same resolution as your capture codec; if you have lots of extra storage space, think about using the next largest flavor, as in the sketch below.
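For a feel of what each step up in flavor costs, here is a quick storage estimate in Python. The bitrates are Apple's approximate published figures for 1080p ProRes at 29.97 fps; treat them as ballpark numbers:

```python
# Approximate published target bitrates (Mbps) for 1080p29.97 ProRes flavors;
# treat these as ballpark figures.
PRORES_MBPS = {"Proxy": 45, "LT": 102, "422": 147, "HQ": 220}

def storage_gb(hours: float, mbps: float) -> float:
    """Rough storage needed: Mbps -> MB/s -> GB over the given duration."""
    return mbps / 8 * 3600 * hours / 1000

for flavor, mbps in PRORES_MBPS.items():
    print(f"ProRes {flavor}: ~{storage_gb(10, mbps):,.0f} GB for 10 hours of footage")
```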

The second option is a Direct Intermediate workflow. This means that you transcode your camera files into a codec that is both good for editing and very high quality (not very lossy). The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec.

The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. If your camera records 4:2:0 chroma subsampling at 8-bit, you want an intermediate codec that is at least 4:2:0 and 8-bit. Going beyond these values (e.g. 4:4:4 and 12-bit) won't gain you anything, because the extra information was never recorded. Within the ProRes family alone, we have four options to choose from that are 4:2:2 and 10-bit.
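One way to keep the "at least as good in each area" rule honest is to write it down as an explicit checklist. A toy Python version; the camera numbers describe a hypothetical 8-bit 4:2:0 h.264 source, not figures from this article:

```python
from dataclasses import dataclass

@dataclass
class CodecSpec:
    bit_depth: int    # e.g. 8 or 10
    chroma: tuple     # e.g. (4, 2, 0) for 4:2:0
    bitrate_mbps: float

def preserves(capture: CodecSpec, intermediate: CodecSpec) -> bool:
    """True if the intermediate codec matches or beats the capture codec
    in every area (the bitrate margin should be generous -- see below)."""
    return (intermediate.bit_depth >= capture.bit_depth
            and intermediate.chroma >= capture.chroma
            and intermediate.bitrate_mbps >= capture.bitrate_mbps)

camera = CodecSpec(bit_depth=8, chroma=(4, 2, 0), bitrate_mbps=100)  # hypothetical h.264
prores_hq = CodecSpec(bit_depth=10, chroma=(4, 2, 2), bitrate_mbps=220)
print(preserves(camera, prores_hq))  # True
```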

You might think that all you need to do is match the camera bitrate, but you actually need to greatly exceed it.

This is because h.264 is a far more efficient codec than ProRes: it squeezes similar image quality into far fewer bits. In order for ProRes to match the image quality of h.264, it needs a much higher bitrate. ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes HQ will have a slight edge.
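A rough worked example of that efficiency gap, using assumed numbers rather than measured ones:

```python
# Assumed numbers for illustration: h.264 is roughly several times more
# bit-efficient than ProRes, so matching its quality takes a much higher bitrate.
camera_h264_mbps = 100
efficiency_factor = 3  # assumption: ProRes needs ~3x the bits for similar quality
print(camera_h264_mbps * efficiency_factor)  # 300 Mbps -- far above the camera's rate
```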

Part of the reason the Direct Intermediate workflow is common is that it used to be a lot harder to use a proxy workflow; today, proxies are usually the easier choice. The main exception is when you have a lot of mixed footage types: if you have multiple frame rates and frame sizes in the same project, switching back and forth between the proxies and the capture codecs can be a headache.

If you are using some third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process more tricky. One common example might be software that automatically syncs audio tracks or multicam shoots. And remember the LUT caveat from earlier: if you were to include the LUT in your transcode for a Direct Intermediate workflow, you would lose all of the benefits of recording in log in the first place.

This next point is very important, because it is commonly misunderstood and there is a lot of misinformation online: transcoding your footage before you edit will never increase the quality of the output. There are some extra operations that you could do in the transcode process (such as using sophisticated up-res tools) that could increase the image quality in some cases, but a new codec by itself will never increase the quality of your image.

That includes going from h.264 to ProRes. It also includes going from 8-bit to 10-bit, and going from 4:2:0 to 4:2:2. Consider a photo of a rose reflected in a water droplet. Now what if I take a photo of my monitor with a RED Helium 8K camera?

This is a beast of a camera with far more megapixels, right? Yet the resulting file, while technically higher-resolution, does not capture any more of my subject (the rose) than the first photo did.

Here, DaVinci just kept crashing, and I had to do a simple grade in Premiere.

When I asked my teacher about this, he said that the problem was that I didn't transcode my native clips to ProRes before I started cutting in Premiere CS6. The computer was an iMac from this year.

I don't think your teacher was correct here. I have edited many different kinds of codecs natively using Premiere Pro. I also use DaVinci to do my grading, and I have never had DaVinci crash on me because I was using native files. I have files like R3D, but I have never had any issue with it crashing.

I agree. While Resolve is sluggish with h.264, XML between PP and Resolve has been an on-and-off problem every time there is a new release. So your teacher is correct in saying to transcode your native clips to ProRes.

Also, while you can use an iMac, it is really not the best computer for the job. Resolve runs better with two GPUs: one for rendering and one for the graphical interface.

Export subclips to Premiere Pro CC for editing, then re-link to the camera files as per the above video. Why this workflow?

I use EDIUS because it doesn't need to transcode my 5D h.264 videos, and it handles them nearly as well as if they were transcoded to something else. Sony Vegas also works quite well with native camera files without the need to transcode. Transcoding takes time and storage, unless I throw away the transcodes after I am done.

If two companies can create software that works with the native camera files efficiently, so can all the others.

Also, it's important to note that, when doing effects-heavy work, using a codec that does not degrade over generations (such as ProRes or DNxHD) is a must to preserve image quality. Of course, this is more of a problem in low-budget or bad-camera situations: last term I had to shoot a sci-fi story on my 60D, and before starting with grading or compositing, we converted all the h.264 footage.

Look it up. If you ever tried EDIUS, you'd never go back to Vegas and all the rest. No, I am not an EDIUS promoter; I have just used it for decades because I don't have the time to screw around with time-consuming editing.

Whether you play at half or quarter resolution, your PC's performance on a heavy 4K file will remain the same.

That's because your PC still has to deal with the same file: it must read all the data from that fat file and decode it in real time just to show something on the screen at quarter resolution. You may save some number-crunching when displaying the image, but the file I/O remains the same.
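You can put that argument in numbers. An illustrative Python comparison with hypothetical bitrates:

```python
# Hypothetical bitrates: playing a heavy 4K file at quarter resolution
# still reads the whole file, while a proxy shrinks the reads themselves.
camera_4k_mbps = 400  # long-GOP 4K camera file
proxy_mbps = 45       # 1080p ProRes Proxy

print(camera_4k_mbps / 8)  # 50.0 MB/s read at full OR quarter playback resolution
print(proxy_mbps / 8)      # 5.625 MB/s with a proxy -- roughly 9x less disk I/O
```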

Proxies, on the other hand, save on file size too. Additionally, if you choose a lighter codec, it becomes easier on the CPU as well.

As Rob Hardy put it on No Film School: post production is a tricky business. With all of the codecs, software options, and workflows available to us, it's hard to know if we're being as efficient as we can be.

Steve Oakley: Great information here! Curious if you've ever done an Avid conform from proxies to full resolution for colouring?
