I'm scanning paper photographs to make a parallax animation. What image size and DPI are enough for pictures that I'll use in Ae, for an animation shown with a 1920x1080 video projector on a 3-metre-high wall?
Is it better to use JPEG or TIFF files?
Thank you for your help.
Video is measured in pixels, not DPI or PPI, so what you want is enough pixels.
If you are working at 1920x1080 and you want to move in on the image so you see only half of it, then the scanned image should be 2x the width of the composition, or two times the height if the image is horizontal.
When doing 2.5D animation, where you break up a still image into foreground, middle ground, and background elements and arrange them in 3D space, you end up working with both scale and distance from the camera. Usually the camera moves are very small, so most of the time 2x comp size is all you need unless the camera move is a straight push in.
Here's the math involved. You don't want any images in your composition at an effective scale of more than 100%, but you also don't want images so large that they are never at an effective scale of something close to 100%. What is effective scale? If a camera is added to a scene at its default position, footage is added at its default position and made a 3D layer, and the footage (image) is the same frame size as the composition, then the footage layer will be at 100% scale and the distance between the camera and the layer will be equal to the zoom value of the camera. The effective scale is 100%. If you move the layer farther away from the camera and still want it to fill the frame, you increase the scale so that it still fills the frame. If you move the image a long way from the camera and have to push the scale value to 1000% to fill the frame, the effective scale is still 100%.
Did you follow that? The same thing applies to moving the camera or the layer closer than the zoom value. If you reduce the distance between the camera and the layer, you increase the effective scale, and things start to fall apart fast. Adding a little motion blur to a move will help if you are moving way in, but if you do things like fly through a window you have to be careful to stay close to an effective scale of no more than about 150% to keep things looking good with motion blur.
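If it helps to see the relationship as numbers, here is a minimal Python sketch of the effective-scale idea described above, assuming the usual setup where a 3D layer at 100% scale, sitting at the camera's zoom distance, fills a comp-sized frame exactly. The zoom value is just an example number, and the names are made up for illustration; this is not the AE scripting API.

```python
# A rough sketch of the "effective scale" idea described above, assuming the
# usual setup: a 3D layer at the camera's zoom distance, at 100% scale,
# fills a comp-sized frame exactly. Names are made up for illustration.

def effective_scale(layer_scale_pct, camera_zoom_px, distance_px):
    """On-screen magnification of a 3D layer, as a percentage."""
    return layer_scale_pct * camera_zoom_px / distance_px

# Layer at the default distance (distance equals zoom) at 100% scale:
print(effective_scale(100, 2000, 2000))    # 100.0 -> pixel for pixel

# Push the layer ten times farther away and scale it to 1000% so it still fills the frame:
print(effective_scale(1000, 2000, 20000))  # 100.0 -> still effectively 100%

# Halve the distance without touching the scale value:
print(effective_scale(100, 2000, 1000))    # 200.0 -> detail starts to fall apart
```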
I hope this helps. Most of the time folks think that DPI or PPI is important, but there are no inches in video, so a 1 PPI image will be the same size as a 9999 PPI image when loaded into a composition.
As for format, I would not use JPEG; I would use TIFF, even 16- or 32-bit if you have that option, because the Photoshop work of separating the image into foreground, middle ground, and background images will be easier. To properly set up your scanner you'll need to measure the images and adjust the scanning rate (the PPI) to the appropriate value. A 5 x 7 image scanned at 200 PPI will give you an image that is 1000 by 1400 pixels. If you have a photo and you want to move in on 1 square inch of it in an HD comp, you'll need to set the scanner to about 2000 PPI. You can figure it out using these principles.
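To make that scanner arithmetic concrete, here is a small back-of-the-envelope sketch in Python; the function names and the "oversample" factor are just for illustration, not anything built into a scanner driver or AE.

```python
# Back-of-the-envelope helpers for the scanning arithmetic above.
# The idea: the smallest area of the print that ever fills the frame
# must still contain at least comp-width pixels after scanning.

def scan_size(width_in, height_in, ppi):
    """Pixel dimensions of a print scanned at a given PPI."""
    return round(width_in * ppi), round(height_in * ppi)

def required_ppi(tightest_width_in, comp_width_px, oversample=1.0):
    """PPI needed so the tightest framing still covers the comp width.
    oversample > 1.0 adds headroom (e.g. 2.0 for the 2x rule above)."""
    return comp_width_px * oversample / tightest_width_in

print(scan_size(5, 7, 200))        # (1000, 1400), the 5 x 7 example above
print(required_ppi(1, 1920))       # 1920.0 -> roughly the "about 2000 PPI" figure
print(required_ppi(5, 1920, 2.0))  # 768.0  -> whole 5" width scanned with 2x headroom
```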
I hope this helps.
As for format, I would not use JPEG; I would use TIFF, even 16- or 32-bit if you have that option, because the Photoshop work of separating the image into foreground, middle ground, and background images will be easier.
Hi Rick, could you please explain how working with TIFF is easier when separating the image?
My guess is that because the TIFF image is uncompressed (or uses lossless compression) and has a higher bit depth, separating one source image by selecting areas based on colour, rather than drawing a mask around everything, would be easier. Those two things (no compression and higher bit depth) should make colour-based selections easier, more accurate, and better quality.
My guess is that because the TIFF image is uncompressed (or uses lossless compression) and has a higher bit depth, separating one source image by selecting areas based on colour, rather than drawing a mask around everything, would be easier. Those two things (no compression and higher bit depth) should make colour-based selections easier, more accurate, and better quality.
Photos are usually RAW or JPEG, and I haven't found any difference in creating a selection between the two (unless it's a really compressed JPEG, of course). 16-bit can help with color adjustments, not so much with selection. If it's a high-quality JPEG, selection works just fine with Photoshop's robust selection tools.
You should scan your image at the maximum resolution you will need and no more. The more unnecessary pixels, the slower every stage of your production pipeline becomes.
Photos are usually RAW or JPEG
But the OP is talking about scanning photos, not using ones from a digital camera.
You should scan your image at the maximum resolution you will need and no more. The more unnecessary pixels, the slower every stage of your production pipeline becomes.
Doesn't that disagree with the Nyquist theorem, etc.? When they're scanning feature films they often scan at, e.g., 6K even if they want to end up with 4K (and technically, if the scanners were capable enough and could do so without adding noise, scanning at 8K to get a 4K result should in theory be even better).
They won't be unnecessary pixels if they help prevent aliasing. E.g., if you want to move very slowly in to an image until it is at 100% pixel size in the comp (and with no visible motion blur), at various points there will be a non-integer scale-down, which could produce noticeable aliasing in theory (and more noticeably at lower resolutions). E.g., it's easier to scale 3840x2160 down to 1920x1080, because it's an integer number of pixels being scaled down; that simple relationship to 1920x1080 (exactly double the pixel width and double the pixel height) is likely one of the reasons the 3840x2160 resolution was chosen. It's harder to scale to resolutions that would be non-integer, which is why, when scanning and then scaling films for Blu-ray, they often slightly crop rather than scale down (depending on the aspect ratio and source pixel dimensions).
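As a rough illustration of the integer-versus-fractional point (a hypothetical check, not something any scaler actually runs):

```python
# An integer factor means each output pixel maps onto a whole block of source
# pixels; a fractional factor means resampling across pixel boundaries, which
# is where aliasing is more likely without good filtering.

def scale_factor(src_px, dst_px):
    factor = src_px / dst_px
    return factor, factor.is_integer()

print(scale_factor(3840, 1920))  # (2.0, True)       UHD -> HD, clean 2:1
print(scale_factor(4096, 1920))  # (2.133..., False) DCI 4K -> HD, fractional
print(scale_factor(2880, 1920))  # (1.5, False)      fractional 3:2 reduction
```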
But the OP is talking about scanning photos, not using digital ones.
I haven't found any selection easier by scanning different formats either. Working in large TIFFs at 32-bit sounds counterproductive to me, and I haven't seen any training suggesting otherwise. The OP wants to output for HD; scanning more than what he needs is counterproductive. Scale 2 times, maybe 3 times; it all depends on how far he is going to need to zoom. If he doesn't need to zoom that much, he can leave it at what he needs.
I don't know about theory, I know about practice, and I haven't found this method of scanning at a much higher resolution than you need productive or efficient in practice.
Here's the deal on resizing for video: as I said before, your effective scale needs to be somewhere close to 100% at some point in the composition. Over 120% and you start to lose detail; under 50% and you start to lose detail and increase the chance of developing moiré patterns.
The only reason a major film production company would scan a negative at 8K when they were producing in 4K would be to make the roto, masking, and compositing easier to work with, because they have more pixels to use and the image is bigger. It has nothing to do with getting a better image by scaling down 50%. For a lot of images containing subjects like shingled roofs, brick walls, or even venetian blinds in a window, scaling down, especially significant scaling down, can cause moiré patterns to appear and fine detail that would normally be visible to blend into mush.
I also suggested using higher bit depth images like TIFF, TGA, even JPEG 2000 or PSD if your scanner is capable of producing them, because creating procedural mattes is easier when you have more color values to work with, color correction is easier for the same reason, AE decodes these kinds of images very quickly, and if you have to resize, a higher bit depth gives you a lot more to work with when you try to sharpen an image that has been scaled up or filter an image that has been scaled down. It's just a far better practice than using 8-bit and possibly highly compressed JPEG images.
Thank you Rick.
I don't scan much, but I have composited a lot of images based on selections in Photoshop over the past decade. I have found that Photoshop's robust selection tools work just fine on 8-bit and JPEG, and it's just as easy to make excellent selections. I do work in 16-bit if I need to do color matching or color adjustments (or convert the image to 16-bit once I start to get heavy on color correction); I don't do that in 32-bit because I find it unnecessary. For the selection, you will work faster if your image is 8-bit rather than 16, or JPEG rather than TIFF; working with large image files can be counterproductive (in Photoshop and in Ae as well). Video is another thing, because creating procedural mattes is sometimes crucial to your workflow (you're dealing with at least 23 frames per second), unlike Photoshop where you work on still frames, so using Quick Selection or the Pen tool is much faster and more accurate.
If I scan again, I will test some TIFFs at higher bit depth and see if I can get better selections in the future.
Not trying to nitpick here, but scaling down by 50% without some careful filtering will not result in a higher quality image. The original question was about scanning images for use in video, and the correct size, without very careful image-by-image filtering when scaling down, is exactly what I explained. If you really want to optimize each image by working at 2x or 4x comp size and then scaling to fill the composition frame, and you want to avoid losing detail and getting moiré patterns in a lot of images, you will need to carefully apply some filtering and not just grab the scale property and reduce it to 30 or 20%.
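For what it's worth, here is one way that "careful filtering" step might look outside of AE, as a minimal sketch using Pillow (9.1 or later for the Resampling enum). The file names, sizes, and blur radius are placeholders; a real radius would have to be tuned per image rather than taken as a recipe.

```python
# A minimal sketch of pre-filtering before a big reduction, using Pillow.
# File names, sizes, and the blur radius are placeholders.

from PIL import Image, ImageFilter

src = Image.open("scan.tif")    # e.g. a 4000 px wide scan
target = (1920, 1080)

# Naive: nearest neighbour just drops pixels -> moiré on fine repeating detail.
naive = src.resize(target, Image.Resampling.NEAREST)

# Better: a windowed-sinc resampler (Lanczos) acts as a low-pass filter.
filtered = src.resize(target, Image.Resampling.LANCZOS)

# For extreme reductions, a slight Gaussian pre-blur is a crude explicit
# low-pass step before the resize.
pre_blurred = src.filter(ImageFilter.GaussianBlur(radius=1.0)).resize(
    target, Image.Resampling.LANCZOS
)

naive.save("naive.tif")
filtered.save("filtered.tif")
pre_blurred.save("pre_blurred.tif")
```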
I never said that careful post production on oversampled scans cannot improve the image; it is just not automatic, and every shot and the movement in that shot must be analyzed to make sure that you get the best possible image. I've been doing film to digital since the late 80's, and the only thing that has really changed is the speed of the scanners and the number of knobs available to fix problems when you need to decrease or increase (subsample or supersample) the image to a new size. I wish it were automatic, but it's not.
https://forums.adobe.com/people/Rick+Gerard wrote:
...
The only reason a major film production company would scan a negative at 8K when they were producing in 4K would be to make the roto, masking, and compositing easier to work with, because they have more pixels to use and the image is bigger. It has nothing to do with getting a better image by scaling down 50%. For a lot of images containing subjects like shingled roofs, brick walls, or even venetian blinds in a window, scaling down, especially significant scaling down, can cause moiré patterns to appear and fine detail that would normally be visible to blend into mush.
...
Yes, it could help them with things like what you mention (roto etc.), but scanning at a sufficiently higher resolution than they need for the final result (i.e. oversampling) will help improve the picture and avoid issues like aliasing and moiré patterns, provided that the downscaling is done correctly (obviously, if a nearest-neighbour type downscaling method were used, you would get moiré patterns). If you have moiré patterns in the final result, it means it hasn't been sampled at a high enough frequency (resolution) and downscaled correctly.
See:
oversampling is the process of sampling a signal with a sampling frequency significantly higher than the Nyquist rate
...Oversampling improves resolution, reduces noise and helps avoid aliasing
https://www.filmlight.ltd.uk/products/northlight/range_nl/index_nl.php
With meaningful 8K fidelity like this, Northlight 2 ensures unsurpassable quality for its two-times oversampled 4K output.
Technical Details | The official site for the films SAMSARA and BARAKA
To get the film from 70mm to stunningly detailed 4k resolution for theaters, each frame of the negative was scanned at 8k resolution...This large file was then compressed into 4k to create the final DCP. The filmmakers extensively tested this method to confirm the benefits of oversampling in the scanning of film to digital
If you are working at 1920x1080 and you want to move in on the image so you see only half of it, then the scanned image should be 2x the width of the composition, or two times the height if the image is horizontal.
In theory, due to things like Nyquist sampling and the quality of scaling (e.g. scaling down), I assume it would be even better to scan or create the source imagery at an even higher pixel resolution than will ever end up in the finished video (especially where it won't be obscured by motion blur etc.), so you have simpler scaling down and a sufficient number of pixels to take the source samples from to prevent aliasing. E.g. if at some point the source image is only 1.3x or 1.5x the pixel width required for the composition at that point, that is already higher res, but surely the scaling would be better quality if it were scaling down from a source that was 2x the pixel width required at that point in the video.
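Putting that question into numbers, here is a small hypothetical helper; the "headroom" factor is just a way to express an extra oversampling margin on top of the tightest framing of the move, not an established rule.

```python
# How wide must the source be so that at the tightest framing of the move you
# never scale up, with an optional extra oversampling margin on top?

def required_source_width(comp_width_px, max_zoom_factor, headroom=1.0):
    """max_zoom_factor: 2.0 means you end up seeing half the image.
    headroom: extra margin beyond the tightest framing (1.0 = none)."""
    return comp_width_px * max_zoom_factor * headroom

print(required_source_width(1920, 2.0))       # 3840.0 -> the 2x rule from earlier
print(required_source_width(1920, 2.0, 1.5))  # 5760.0 -> same move plus a 1.5x margin
```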