Editorial - Bannerline Communications

 

What does PPI have to do with camera resolution?

PPI, DPI, Dots In and Dots Out
A Primer on Digital Images for the Professional

Richard C. Pitt

The following is an answer to a question by my friend and associate, Gary Bannerman.

Richard:

Forgive me if this is an inane question, but it is becoming vital in every publication I do and getting an answer is not easy because:

1. Photography people know nothing about computers.
2. Computer people know nothing about photography.
3. While photographers know something about professional printing, computer people know zilch.
4. As you are fond of telling people, web designers (computer) know nothing about graphic design (print), and PC/Mac web design - still more concerned with engineering than art - is 5 years behind print design.
5. The "help" menus on computers and the manuals accompanying cameras etc. are written by people with profound ignorance of one sector or the other.
6. The commercial world is obsessed with JPEGs ... without properly comprehending that JPEG is a compression format and that most software and Internet transmission modalities assume that compression is the intent (as with GIFs) ... the object being to achieve a pretty picture on a computer monitor in the smallest digital size possible ... so every time anyone goes "save as" and requests JPEG, it compresses further ... unless specific contrary instructions are given.
7. The media are so different. In print, one can perfectly predict what the end user sees (and affect quality with choices: 80lb or 100lb paper, glossy or matte, a 4, 6 or 8 colour press, film to offset, or digital to print). Those who prepare art for the Internet don't control the end user's display: they must anticipate the vagaries of "dial-up" versus high-speed, of computers of variable age and speed, of 19-inch screens as well as old 13-inch monitors and avant-garde new-age laptops, Blackberries, cell phones et al. Computer folks sometimes assume - incompetently - that what they see on any screen is what they can get in print.

Here's the question:

When digital images downloaded directly from any digital camera show up in PC software such as Photoshop... the "image size" box shows this as 72 ppi... which, of course, is the optimum for a PC monitor... and it is easy to "save as" the image to 300 ppi or more... but my question is whether the basic image we are starting with is 72 ppi... or is that just a reading of how it shows on the monitor?

As you know, saving a truly inferior image at 300 ppi or more merely demonstrates the pixelation in its worst form.

Bottom Line: what does 72 ppi mean in Photoshop after images are downloaded from digital cameras, because 72 ppi going to the printer invites disaster...

Gary hit the mark with the question - the subject has bugged me for years, particularly when other people discuss it with what I consider an obviously incorrect understanding.

The quick answer is that the resolution of an image is not the same as the resolution of the display you're viewing it on, or of the printing process used to produce printed output. If anything, it is only an indication of how large the image will be when displayed on the medium - a 720x576 pixel image will be 10" x 8" on a screen that is 72 PPI (pixels per inch).
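That arithmetic is simple enough to show directly. A minimal sketch in Python (the figures are the ones used above; nothing here is specific to any particular program):

```python
# An image has no inherent size in inches; its physical size on a medium
# is just its pixel dimensions divided by the medium's pixels per inch.
def physical_size(width_px, height_px, ppi):
    return width_px / ppi, height_px / ppi

print(physical_size(720, 576, 72))   # (10.0, 8.0)  -> 10" x 8" on screen
print(physical_size(720, 576, 300))  # (2.4, 1.92)  -> much smaller in print
```

The same pixels yield very different physical sizes; only the pixel counts belong to the image itself.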

The more complex answer is the deceptively simple statement "it depends."

The problem is one of resolving the differences between the various technologies you're dealing with in going from original subject to final product(s). We deal with the differences between continuous tone and filtered primary colours, as well as the difference between additive and subtractive primary colours - then deal with pixelation, the number of light levels recorded, compression, viewing distance, and the resolution of the various elements in the process. Along the way the professional will use this information to get the best reproduction of the original on the intended display medium. PPI/DPI is only one measurement among the steps involved, and many people use it entirely incorrectly.

The professional also has to deal with:

  • tonal range - pixel depth, or the number of steps in the gradation from full on to full off for any given colour (of the three primaries)
  • palette choice - the colours each tonal-range number represents. Do the colours range consistently from saturated to unsaturated, are there more choices in the midrange and fewer at the extremes, or does the range represent the specific inks available for the printing stage (Pantone, for example)?
  • compression technology and final image file size constraints - this can include a need to send files quickly via the Internet (requiring a smaller file) or more slowly by courier on a disk or tape (which can tolerate a large file)
  • mapping additive colour (created by emitting light, as the video screen does) to subtractive colour (created by selective absorption of incident light by inks)

I'll deal with these in subsequent articles.

An Overview of the Process:

The original subject is lit by a continuum of light colours, typically daylight or flash.

From here, all reproductive technologies rely on the fact that the human eye actually sees only 3 colours and uses blends of these to "see" the full spectrum.

The film camera uses 3 light-sensitive emulsions to record the primary colours, with all the light for a single "spot" being picked up at that spot on the stacked emulsions. There is no averaging. Resolution is based on the size of the silver grains in the emulsion; thus "fine-grained" film is higher resolution than "normal grain". The tradeoff is that finer grains are less light-sensitive.

The CCD digital camera or scanner, on the other hand, doesn't register the colours for any single spot; instead it takes groups of 3 sensor sites, each under a coloured filter on the flat plane of the chip, and averages them. (In practice most chips use a mosaic such as the Bayer pattern and interpolate a colour for each site, but the principle is the same: no single site sees the full colour of its spot.)

A newer digital chip (Foveon) closely matches the depth filtering of film: it uses the depth of the silicon itself to filter the light, registering the colour for each spot correctly - no averaging.

As a counterpoint, NASA (and other space agencies) use technologies that employ moveable filters and a single CCD chip to achieve very high resolution images - but this only works with static subjects (subjects that don't move during the time it takes to swap the various filters in and out). Also, some professional video cameras (the technology has not yet made it into mainstream still cameras) use a prism and/or filter system to split the incoming image across 3 separate imaging chips in "real time", achieving higher resolution (and higher capture speeds) than can currently be done with all the pixels on a single chip.

At this point (in the camera or scanner) we talk of resolution in pixels without regard to the size of the imaging technology. It matters little whether the imaging chip is 10 inches square with a huge lens and large sensor sites or 1 inch square with microscopic ones; if the resulting image contains 3.5 million pixels then the resolution is the same. The PPI for the former would be on the order of 180 where the latter would be closer to 1800. The point is that in this case the PPI relates to the image capture technology.
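Those two PPI figures follow directly from the pixel count and the chip size. A quick sketch (assuming, as above, a square image on a square chip):

```python
import math

# The chip's PPI is just pixels-per-side divided by inches-per-side;
# the image itself is 3.5 million pixels in both cases.
def chip_ppi(total_pixels, chip_side_inches):
    pixels_per_side = math.sqrt(total_pixels)  # ~1871 for 3.5 Mpixels
    return pixels_per_side / chip_side_inches

print(round(chip_ppi(3_500_000, 10)))  # ~187  -> "on the order of 180"
print(round(chip_ppi(3_500_000, 1)))   # ~1871 -> "closer to 1800"
```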

Again as an aside, I have seen an interesting item where a camera was made from an old plate-camera body and a flat-bed scanner. The resulting images (of static subjects only, due to the long exposure time) are incredible - every bit comparable to film of the same size (8" x 10").

PPI in the camera is roughly equivalent to grain size in film. It is the smallest point at which a CCD can register any useful image information. The problem is that with the typical still-camera CCD the vendors count all the dots, when in fact they should count the groups of 3 dots, since a group is what is actually needed to equal a single spot on a piece of film (or a dot on the Foveon chip).

Note that with many of today's cameras, the size of the imaging chip is so small that, even though it may have millions of pixels, the lens in front of the camera may not be capable of resolving the image well enough to take advantage of this resolution. The whole camera system needs to be of consistent quality. The problem here is that the technology necessary to create an imaging chip with a size similar to a 35mm film frame is very expensive. It's not that it can't be done, but that the number of flawless chips of that size in a batch is lower than the yield of smaller sizes.

If we ignore for the moment the use of compression and take the image directly to the output medium, we deal only with the direct mapping of the pixels to the output.

If we are going to view the image on a computer screen and want it at full resolution, we'll need a screen with as many pixels on it as the original image has: at 3.5 Mpixels it would be about 1870x1870 for a square image and at 90 ppi (the resolution of an excellent monitor) it would be about 20" x 20" - a pretty large screen even by today's standards.

If we print it at full resolution on a 600 DPI printer, the image would be about 3" x 3".

If we print it using a 100-line screen (100 lpi) on newsprint, we would get about 18" x 18".
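All three cases are the same division: pixels on a side over the medium's dots per inch. A sketch using the figures assumed above:

```python
import math

side_px = math.sqrt(3_500_000)  # ~1871 pixels per side for a square image

for medium, dpi in [("90 PPI monitor", 90),
                    ("600 DPI printer", 600),
                    ("100 lpi newsprint screen", 100)]:
    inches = side_px / dpi
    print(f'{medium}: about {inches:.1f}" per side')
# -> 20.8", 3.1" and 18.7" - the ~20", 3" and 18" estimates above
```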

The point is that we'd probably hold the print up close, the newspaper far away, and sit a couple of feet from the computer screen - adapting our viewing so that the important part of the image we are looking at will fill the center part of our field of vision - between 10 and 20 degrees on either side of directly in front of us.

In both printing systems (the computer printer and the newspaper) the colours are built up one at a time, in registration, whereas the computer screen uses side-by-side colour dots, just as the CCD uses image elements - in groups of 3 that average the actual colour. Note also that the screen uses additive colour where printing uses subtractive.
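The simplest mapping from one to the other is complementation. A naive sketch (real print workflows use ICC colour profiles and a separate black plate, which this ignores):

```python
# Additive primaries (RGB) describe emitted light; subtractive primaries
# (CMY) describe inks that absorb it. Each is the complement of the other.
def rgb_to_cmy(r, g, b):
    # channel values as fractions from 0.0 (off) to 1.0 (full)
    return 1.0 - r, 1.0 - g, 1.0 - b

print(rgb_to_cmy(1.0, 1.0, 1.0))  # white light -> (0, 0, 0): no ink at all
print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red    -> full magenta + yellow ink
```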

Now we come to why DPI, and PPI (and printing screen size) have any need to exist at all. It is the same reason why TV news anchors are told not to wear plaid suits or striped shirts - moiré patterns.

If you don't match the image to the viewing/printing technology, you may get visual artifacts that are displeasing to the eye.

Some of the best information on the overall subject can be found on the web site of the company Foveon, which released its new imaging system (a competitor to the CCD found in most digital cameras today) last year.

Note that with a CCD imaging system, the placement of the picture elements (pixels) is such that it only averages the colour over a fairly large area (that of 3 actual sensors), instead of getting the colour through the filtering of a single sensor spot (as in the Foveon chip) or the tight grain area of 3 layers of emulsion in real film. If you were to focus a light beam onto a spot 1 micron by 1 micron on each of the 3 types of image capture (film, CCD, Foveon), the CCD might register it only on a single element of one colour (green, for example), whereas both film and Foveon would register it on all 3 colour layers/sensors and record its actual colour - which might be white (all 3 primary colours).

So the final answer to Gary's question:
"When digital images directly downloaded from any digital camera show up in PC software such as Photoshop... the "image size" box shows this as 72 ppi ...   which, of course, is the optimum for a PC monitor.....   and it is easy to "save as" the image to 300 ppi or more...    but my question is whether the basic image we are starting with is 72 ppi.... or is that just a reading of how it shows on the monitor?"

... is that the 72 PPI setting matches the "grain" of the image to the "grain" of the monitor so you don't get image artifacts caused by mismatched dots. You should save an image at the resolution of the output medium you are going to use, or at some even fraction of that resolution so that the dots line up (if the printer can do 600 DPI then you might save at 600, 300, 200, 150 or 100, but not at 72 or at 720). And yes, this means that even saving a poor image at a higher resolution may make a difference - but probably not enough to warrant it. Better to save it at an even fraction of the output resolution instead.
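The "even fraction" rule is easy to check mechanically. A sketch using the 600 DPI printer from the example (the divisor limit is arbitrary; the point is that the image resolution must divide evenly into the device's):

```python
# Image resolutions whose dots line up with a device's dots: the device
# resolution divided by a whole number, with no remainder.
def even_fractions(device_dpi, max_divisor=8):
    return [device_dpi // n for n in range(1, max_divisor + 1)
            if device_dpi % n == 0]

print(even_fractions(600))   # [600, 300, 200, 150, 120, 100, 75]
print(600 % 72, 720 % 600)   # 24 and 120: neither 72 nor 720 lines up
```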

Note that all through this I've ignored compression for file-size savings and re-sizing of the image for final output. These, too, can introduce visual artifacts.

There are two distinct classes of compression: lossy and lossless.

Lossless compression generally looks at the actual bits of the data file without regard to the fact that it contains an image. It looks for patterns that can be encoded using fewer bits, and does this in various ways depending on whose algorithm is used. The most typical is called LZW, for the initials of its inventors: Lempel, Ziv and Welch. See http://dictionary.reference.com/search?q=lempel-ziv%20welch%20compression for a description.
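LZW itself is more involved, but run-length encoding, sketched below, shows the principle: repeated patterns are rewritten in fewer symbols, and decoding recovers the original exactly, bit for bit.

```python
# Run-length encoding: a trivial lossless scheme. Runs of identical
# symbols (common in flat areas of an image) collapse to (count, value).
def rle_encode(data):
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                     # extend the current run
        out.append((j - i, data[i]))   # record (run length, symbol)
        i = j
    return out

def rle_decode(pairs):
    return "".join(value * count for count, value in pairs)

sample = "wwwwwwbbbww"
packed = rle_encode(sample)
print(packed)                        # [(6, 'w'), (3, 'b'), (2, 'w')]
print(rle_decode(packed) == sample)  # True: nothing was lost
```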

Lossy compression works by discarding image information the viewer is least likely to miss. The most typical scheme used today is JPEG, which transforms small blocks of pixels into the frequency domain and quantizes away the fine detail; the higher the compression ratio, the more detail is discarded, until the loss becomes obvious to the viewer. Heavy quantization can also produce bands of colour through areas of constant gradation - not as noticeable on things with lots of random detail. See http://www.google.ca/search?q=define:JPEG+compression for more information.
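A minimal sketch of that lossy step on a single 8x8 block, assuming numpy and scipy are installed (the quantization step size here is an arbitrary illustrative value, not a real JPEG table):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block

coeffs = dctn(block, norm="ortho")          # to the frequency domain
step = 40.0                                 # coarser step = more compression
quantized = np.round(coeffs / step) * step  # the lossy step: detail discarded
restored = idctn(quantized, norm="ortho")   # back to pixels

print("max pixel error:", np.abs(block - restored).max())
# Raising `step` raises the error; what is discarded here is gone for good.
```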

The only compression the professional photographer should use - except when creating a specific output image for an end use where file size is a concern - is a lossless one. In everyday software a JPEG save is always lossy (a lossless JPEG mode exists on paper but is rarely implemented), so it is better to stick to TIFF and use compressed TIFF (LZW), which is lossless; this avoids inadvertently losing detail with each successive save.
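As a sketch of that workflow using the Pillow imaging library (an assumption - any tool that writes LZW-compressed TIFF will do, and the file names are hypothetical):

```python
# Keep working copies as LZW-compressed TIFF: smaller files, nothing lost.
# Requires the Pillow library (pip install Pillow).
from PIL import Image

img = Image.open("original.tif")                      # hypothetical input
img.save("working_copy.tif", compression="tiff_lzw")  # lossless LZW TIFF
# A JPEG save, by contrast, re-quantizes the image on every save:
img.save("web_only.jpg", quality=90)                  # lossy; final output only
```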

All in all, the image creator should know what the final output needs to be - and should understand the technologies involved along the way, so that the final product doesn't suffer from the manipulation it undergoes in the process. There are all sorts of tools that can help, but most degrade the original in some fashion if used improperly. You should always start from the highest resolution possible and enlarge as little as possible. More than anything else, this means doing your cropping in the camera as much as possible. Today's high-quality zoom lenses on good digital cameras are a far cry from some of the original "variable focal length" lenses of 30-40 years ago. If you have one, use it to best advantage and get as much of your intended subject as possible in the frame.

If you are using a digital conversion and are unsure of your lenses, use a smaller aperture (a higher f-stop number) to maximize sharpness; remember, the image area is smaller than you are used to, which tests the resolving power of the lenses.

Other pointers:

  • Know your tools and processes and examine the output of tests to get it right, then document what you've done and stick to the formulas that work.
  • Purchase big flash cards, big hard drives, lots of CDs or a DVD burner, and keep original images at full resolution.
  • Shoot at maximum "native" resolution at all times unless you know that your intended use allows lower quality; then ignore the urge and shoot at maximum resolution anyway, especially if the subject matter is of general interest and you might be able to resell the images for other uses.
  • Don't purchase a camera that forces you to store in JPEG with resolution loss.
  • Make sure the camera you are using has a lens equal to the task. A digital conversion of an otherwise normal 35mm camera typically uses less image area than the 35mm frame, so your lenses should be able to resolve detail better than average in order to take advantage of a chip with very high pixel count.
  • And finally: The resolution of the original image is not the defining factor in how a final image looks, it is the degree of enlargement that makes the difference. If you need to fill the side of a building, starting with a microscopic image will result in disappointment. This is why medium and large format cameras were created.

References and Links

  • http://www.bsu.edu/classes/milesii/portfolio/resume/basic_graphics.pdf - Basics of Computer Graphics
  • http://kcbx.net/~mhd/2photo/digital/pixel.htm - A bit about pixels
    • http://home.kcbx.net/~mhd/ - Making Black & White Photographic Images: a journey through the Opto-Chemical Era into the Digital Age, by D. Krehbiel
  • http://www.sentex.net/~mwandel/tech/scanner.html - Making a camera from a flat-bed scanner
  • http://www.cs.ubc.ca/~szwang/Research/ScanCam/scancam.html - Scan camera (122 Mpixel)


Copyright © 2010 Bannerline