JPEG, TIFF, GIF, PNG. Hi-res. Lo-res. PPI. DPI. LPI. Who cares? All you want is to have pictures in your book.
Okay, I’m overdue for a blog post, and this is a topic I’ve wanted to tackle. But it’s a bit daunting, and it will require a clear head. So go have a cup of coffee and come back. It’s okay. I’ll wait.
“I found it on the Web” is insufficient justification for believing you have the right to use an image, nor is “I scanned it from a book” or “I paid the photographer for it.” There are exceptions, such as images that are voluntarily placed into the public domain by their creators (see Wikimedia Commons, for example) or that have aged into the public domain under copyright law. Otherwise, someone owns the rights to the image, and if that someone isn’t you, then you need to secure the right to use the image in the way you intend to use it. This may entail paying a license fee, purchasing the exclusive rights, or other arrangements. I’m not a lawyer, and I’m not going to advise you on the details. But if you send me images to put in your book or on your website, I want an explanation from you of why you have the right to use those images. “I created the picture myself” is the easiest explanation.
A further explanation of why “I paid the photographer for it” is insufficient: Studio photographers own the images they take. They sell you prints of those images, but you do not have the right to make copies of those prints unless you have a written agreement that allows you to.
You can hire a professional photographer on a work-for-hire basis (in most states). Or you can pay a one-time fee (to Olan Mills, for example) to obtain reproduction rights. So sometimes paying the photographer is sufficient; it depends on exactly what you paid for.
The world as we experience it does not consist of pixels. When the artist brushes paint onto a canvas, the paint does not arrange itself into a rectangular array of dots. If you were to take a photograph on film, the image would not consist of a rectangular array of discrete dots. However, using modern technologies, we need dots to reproduce an image either on a monitor or on paper.
Whether you take a photo with a digital camera or scan a photographic print with a scanner, you end up with pixels. A pixel is a rectangle of a single color, the mathematical average of the colors found within that rectangular region of the original scene or photograph. Obviously, the size of the rectangle matters, because each rectangle is a solid color. An array of four pixels by four pixels might give you enough information to know that it represented a person’s face, but it would not give you enough information to identify the person with any degree of certainty. From the point of view of representing the real world, therefore, more pixels are better than fewer.
However, a pixel is also a physical region on the imaging sensor of a digital camera and on your computer monitor, and there are physical and engineering limits to how small that region can be. So there is a practical limit to the number of pixels in an image. A large scanner can generate a file containing far more pixels than your hand-held camera has room for, or than you can display at one time on a monitor.
In any case, image resolution is simply the dimensions of an image in pixels. If the image is 1200 pixels high by 1600 pixels wide, then its resolution is 1200 × 1600. Notice that no physical units (inches, say) are attached to this expression; by itself, it tells you nothing about how large the image will appear.
3. Pixels per inch
Suppose your computer monitor has 100 pixels per inch (ppi) in each direction. A 1200 × 1600 image, viewed at its natural size (100%), fills an area 12 inches high by 16 inches wide (which may be larger than your monitor, of course). You can view the same image at a different scale, but when you do so, the software you are using resamples the image: reducing the scale discards pixel information, and increasing it requires interpolation, the process of calculating new pixels between existing ones, with each new pixel given the average color of its neighbors. Interpolation results in loss of sharpness, of course.
To print an image in a book, a good practice is to provide 300 pixels per inch. So a 1200 × 1600 image, printed at its natural size, will cover four inches by five and one-third inches on the printed page. It can be reduced to cover less area without damage, but enlarging it to cover more area requires interpolating additional pixels (increasing the image resolution) and results in loss of sharpness.
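To make the arithmetic concrete, here is a small Python sketch of the pixels-to-inches calculation described above (the function name is mine, not anything standard):

```python
def print_size(width_px, height_px, ppi):
    """Return (width_in, height_in): pixel dimensions divided by pixels per inch."""
    return width_px / ppi, height_px / ppi

# The 1600 x 1200 image on a 100 ppi monitor at 100% scale:
print(print_size(1600, 1200, 100))   # (16.0, 12.0) -> 16 x 12 inches
# The same image printed at 300 ppi:
print(print_size(1600, 1200, 300))   # about 5.33 x 4 inches
```

The same function answers both the monitor and the printed-page questions, because in both cases the physical size is just pixel count divided by pixels per inch.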
By the way, 300 pixels per inch works well for photographs. If you were to scan a line drawing, though, 1200 pixels per inch would be needed to ensure smooth lines with clean edges. Line art works better if it is created from scratch in a vector drawing program, which eliminates any concerns about resolution.
If you want a larger image on the printed page, you have to start with more pixels, either by using a higher-resolution camera or scanning a larger print.
4. What color is your pixel?
On a monitor, a single pixel is composed of the three transmissive primary colors (red, green, and blue), each of which can be set to any of (typically) 256 levels, giving you 16,777,216 possible colors. But printing doesn’t work that way. In printing, what varies is the size of the colored (or black) dot; and instead of the additive (transmissive) primaries, the colors are composed of the subtractive (reflective) primaries: cyan, magenta, and yellow. To represent a small area of some color on the printed page, we use variously sized overlapping ovals of those three colors and black (abbreviated K to avoid confusion with blue). So we talk about RGB images for the computer monitor and CMYK images for the printed page.
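That 16,777,216 figure is simply 256 raised to the third power, one factor of 256 per channel. A quick check in Python:

```python
# 256 levels per channel, three channels (red, green, blue):
levels = 256
channels = 3
print(levels ** channels)  # 16777216 possible colors
```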
A monitor—even a well-calibrated high-end monitor—can only provide an approximation of the color that will print on paper. The reason is basic physics. Ink on paper gets the image to your eyes by reflection. The monitor gets the image to your eyes by transmission. Reflection is a subtractive process (all the wavelengths of incoming white light are absorbed by the ink pigments except the wavelengths for cyan, magenta, and yellow, which are reflected back to you; the white paper not covered by those three pigments reflects back whatever white—mixed wavelength—light not absorbed by the black ink). Transmission is an additive process (red, green, and blue wavelengths are added to create the impression of the various colors, including white). The full range of colors available in each system, called the gamut, is different for the CMYK subtractive system than for the RGB additive system. So there are colors that can be represented on the monitor that you will not see on paper.
5. Dots per inch
An output device like a desktop laser printer or (more important for this discussion) a filmsetter or a computer-to-plate (CTP) system is unlike a monitor. It can only paint a fixed dot black or white. There are no shades of gray. There are no colors. (That’s why four-color printing requires four printing plates, one black and white plate for each of the CMYK inks. It’s the ink that provides the color, not the plate.)
So how do we get from your pixel, with one of millions of colors, to a dot on an output device that can only be black or white? Well, suppose we have a 2400 dot per inch (dpi) output device (for simplicity—higher numbers are common). Each pixel of your original 300 ppi image is going to be represented by a square array of 64 dots (8 × 8) for each of the four printing inks. For a monochrome (black & white) image, this means there are 64 possible levels of gray for that 8 × 8 region. And the same would be true for each of the four inks for a color image. That gives us enough precision to represent the color of the pixel and to keep the image as sharp-looking as it was before.
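The dot arithmetic in that paragraph can be sketched in a few lines of Python (the variable names are mine):

```python
# A 2400 dpi output device rendering a 300 ppi image:
dpi = 2400   # device dots per inch
ppi = 300    # image pixels per inch
dots_per_side = dpi // ppi       # device dots spanning one pixel
print(dots_per_side)             # 8
print(dots_per_side ** 2)        # 64 dots per pixel, per ink
```

Each of those 64 dots can only be on or off, which is where the available gray levels per pixel come from.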
6. Lines per inch
As I said above, an image is printed on paper using oval dots, in one color or four colors. The spacing of these dots is called the line screen. A typical monochrome image in a newspaper is printed at 85 lines per inch (lpi), and you can easily see the individual dots with the naked eye. Most book printing is done at 133 lpi, with better color printing done at 150 lpi or higher.
The name line screen comes from the way halftone images (those arrays of varying size ovals) were created photographically. A piece of film with ruled lines, placed between the image being photographed and a piece of unexposed lithographic film, created an interference pattern that resulted in the pattern of dots on the film when it was developed. The spacing of the ruled lines determined the spacing of the halftone dots.
It is not typical, at least with conventional printing methods, to put ink on paper at a halftone line screen of 300 lines per inch. So the imagesetter emulates an old halftone screen by averaging some number of your original pixels together. For example, to print at 150 lines per inch, a 16 × 16 dot region (four of your original 300 ppi pixels) would be averaged to create one halftone dot. The size of the dot, drawn as an oval on that 16 × 16 region for each of the four colors, would determine the apparent color of the printed image.
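Here is the halftone-cell arithmetic from this paragraph, sketched in Python (again, the names are mine):

```python
# A 2400 dpi device emulating a 150 lpi halftone screen
# on a 300 ppi image:
dpi, lpi, ppi = 2400, 150, 300
cell_dots = dpi // lpi        # device dots per side of one halftone cell
pixels_per_side = ppi // lpi  # image pixels per side of that cell
print(cell_dots)              # 16 -> a 16 x 16 dot halftone cell
print(pixels_per_side ** 2)   # 4 pixels averaged into one halftone dot
```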
In this example, we’ve taken a fixed-resolution image (1200 × 1600), rendered it at 300 ppi, sent it to a 2400 dpi output device, and converted it to a 150 lpi printing plate.
7. A word on image file formats
The native file format for most photographic images is JPEG. If you take a digital photo with your camera or if you purchase a royalty-free image from a website, you are going to be starting with a JPEG image.
What you need to understand about JPEG is that it is what is called a lossy format. Every time you resize it, crop it, adjust colors, or make any other change and then save it, the compression algorithm runs again, discarding a little more image detail and softening the image. When you save a JPEG, most image processing software lets you specify a quality level, and at the highest level you may not lose any visible sharpness. Still, the safest way to handle a JPEG is to convert it at once to a lossless format.
The lossless format used for printing on paper is TIFF. You may be using some intermediate lossless format, such as Photoshop PSD, while you are working on the image, but when you are done, save it as a TIFF. You can always save a copy of the TIFF as a JPEG if you need a web image. But you cannot go the other direction. That’s what lossy means.
You may also encounter GIF and PNG images. They are typically not used for printing and should be converted to TIFFs as well.
All of the above is second nature to people who work in the graphic arts, but it is generally confounding for authors. I’ve simplified somewhat (intentionally) in trying to lay it out as clearly as I can, but there are sure to be questions. Please feel free to post questions to the comment stream or to email me directly.