Camera System - AD Converter
Sensors consist of pixels with photodiodes which convert the energy of the incoming photons into an electrical charge. That electrical charge is converted to a voltage which is amplified to a level at which it can be processed further by the Analog to Digital Converter (ADC). The ADC classifies ("samples") the analog voltages of the pixels into a number of discrete levels of brightness and assigns each level a binary label consisting of zeros and ones. A "one bit" ADC would classify the pixel values as either black (0) or white (1). A "two bit" ADC would categorize them into four (2^2) groups: black (00), white (11), and two levels in between (01 and 10). Most consumer digital cameras use 8 bit ADCs, allowing up to 256 (2^8) distinct values for the brightness of a single pixel.
This 8 bit Analog to Digital Converter (ADC) "samples" the analog voltages into 256 discrete levels which are assigned a binary label consisting of zeros and ones.
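To make the sampling step concrete, here is a minimal sketch of how an n-bit ADC maps an analog pixel voltage onto discrete levels. The function name, the normalized voltage range, and the rounding scheme are illustrative assumptions, not any specific camera's implementation:

```python
# Minimal sketch of n-bit quantization. "full_scale" is an assumed
# normalized voltage range; real ADCs work in hardware, not Python.

def quantize(voltage, full_scale=1.0, bits=8):
    """Map an analog voltage in [0, full_scale] to one of 2**bits levels."""
    levels = 2 ** bits                                     # 8 bits -> 256 levels
    code = int(voltage / full_scale * (levels - 1) + 0.5)  # round to nearest
    return max(0, min(levels - 1, code))                   # clamp to valid range

print(quantize(0.5, bits=1))   # 1-bit ADC: 0.5 rounds up to "white" (1)
print(quantize(0.5, bits=8))   # 8-bit ADC: level 128 of 255
print(quantize(0.5, bits=12))  # 12-bit ADC: level 2048 of 4095
```

The same mid-gray voltage lands on a coarser or finer level depending only on the bit depth, which is exactly the trade-off the table below illustrates.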
Digital SLR cameras have sensors with a higher dynamic range which are able to capture subtle tonal gradations in the shadow, midtone, and highlight areas of the scene. Because those sensors output pixel voltages with very minute voltage differences they are usually equipped with 10 or 12 bit ADCs which allow for more precise categorization into 1,024 or 4,096 discrete levels respectively. Normally such cameras offer the option to save the 10 or 12 bits of data per pixel in RAW because JPEG only allows 8 bits of data per channel.
Often, marketing material advertises the bit depth of the ADC to suggest the digital camera or scanner is able to output images with a high dynamic range. From the above it is easy to understand that this is only true IF the sensor itself has sufficient dynamic range. As shown in the table below, if the sensor has a low dynamic range to begin with, using an ADC with a higher bit depth will only help to increase sales to uninformed buyers.
Sensor Dynamic Range | ADC Type | Image Tonal Range
Low (e.g. around 256 tonal values) | 8 bit | 8 bit
Low (e.g. around 256 tonal values) | 10 or 12 bit | 8 bit
High (e.g. around 4,096 tonal values) | 8 bit | 8 bit
High (e.g. around 4,096 tonal values) | 10 or 12 bit | 10 or 12 bit if RAW
Camera System - AF Assist Lamp
Some manufacturers fit their cameras with a lamp (normally located beside or above the lens barrel) which illuminates the subject you are focusing on when shooting in low light conditions. This lamp assists the camera's focusing system in conditions where the autofocus of other cameras would likely fail. These lamps usually only work over a relatively short range, up to about 4 meters. Some lamps use infrared light instead of visible light, which is better for "candid" shots where you don't want to startle the subject. Notably, some higher-end external flash systems feature their own focus assist lamps with far greater range.
The focus assist lamp on this Canon PowerShot S50 is located above the lens and beside the flash. It serves a double purpose. Firstly, it fires a beam of patterned white light in low light situations which helps the autofocus system to get a lock. Secondly, when the flash and anti-red-eye are enabled, it remains lit for as long as you half-press the shutter release to reduce the size of the subject's pupils and thus reduce the chance of red eye.
Hologram AF found on some Sony cameras works by projecting a crossed laser pattern onto the subject. This bright laser pattern helps the camera's contrast detect AF system to lock on to the subject. The system works well as long as the subject is large enough to be covered by several laser lines.
Camera System - AF Servo
Autofocus Servo refers to the camera's ability to continuously focus on a moving subject, a feature normally only found on digital SLRs. It is generally used by sports or wildlife photographers to keep a moving subject in focus.
Autofocus Servo is normally engaged by switching focus mode to "AI Servo" (Canon) or "Continuous" (Nikon) followed by half-pressing the shutter release. The camera will continue to focus based on its own focus rules (and your settings) while the shutter release is half-pressed or fully depressed (actually taking shots). It is worth noting that Autofocus Servo normally also puts the camera into "release priority" mode so that the camera will take a shot when the shutter release is depressed, regardless of the current AF status (good lock or still searching).
Camera System - Autofocus
All digital cameras come with autofocus (AF). In autofocus mode the camera automatically focuses on the subject in the focus area in the center of the LCD/viewfinder. Many prosumer and all professional digital cameras allow you to select additional autofocus areas which are indicated on the LCD/viewfinder.
Example of a camera with a multi selector button (extreme right) to select the AF area spot. The selected area spot is indicated on the main LCD by a red bracket.
In "single AF" mode, the camera will focus when the shutter release button is pressed halfway. Some cameras offer "continuous AF" mode whereby the camera focuses continuously until you press the shutter release button halfway. This shortens the lag time, but reduces battery life. Normally a focus confirmation light will stop blinking once the subject in focus. Autofocus is usually based on detecting contrast and therefore works best on contrasty subjects and less well in low light conditions, in which case the use of an AF assist lamp is very useful. Some cameras also feature manual focus.
Camera System - Batteries
Most digital cameras use either rechargeable Lithium-ion batteries or rechargeable/disposable AAs.
Disposable AAs
Given the high power consumption of digital cameras, it is economically and environmentally unjustified to use disposable batteries other than in emergency situations when your rechargeables are depleted. Disposable Lithium AAs are more expensive than Alkalines, but having about three times the power packed in half the weight, they are ideal to carry with you as a backup.
Rechargeable AAs (NiCd and NiMH)
NiMH (Nickel Metal Hydride) rechargeable AA batteries are much better than the older NiCd (Nickel Cadmium) AAs. They have no "memory effect" (explained below) and are more than twice as powerful. Capacities are constantly improving and differ per brand.
Rechargeable Lithium-ion Batteries
Li-ion (Lithium-ion) rechargeable batteries are lighter, more compact, but more expensive than NiMH batteries. They have no memory effect and always come in proprietary formats (there are no rechargeable Li-ion AAs). Some cameras also accept disposable Lithium batteries, such as 2CR5s or CR2s via an adapter, ideal for backup purposes.
Example of Lithium-ion battery and adapter to accommodate three CR2 Lithium batteries.
Charging
Fully charged batteries will gradually lose their charge, even when not used. So if you have not used your camera for a few weeks, make sure you bring a freshly charged battery along on your shoot. Charging NiCd batteries before they are fully discharged will reduce the maximum capacity of subsequent charges. As the effect gets stronger when repeated often, it is called the "memory effect". It is therefore recommended to recharge NiCd batteries only after they are fully depleted. To a lesser extent, this is also useful for NiMH or Lithium-ion batteries, although they have virtually no memory effect. Doing so will also increase the life span of the battery, which is determined by the number of charge-discharge cycles and depends on the type and brand.
Camera System - Buffer
After the sensor is exposed, the image data is processed in the camera and then written to the storage card. A buffer inside a digital camera consists of RAM which temporarily holds the image information before it is written out to the storage card. This speeds up the time between shots and allows burst (continuous) shooting mode. The very first digital cameras didn't have any buffer, so after you took the shot you HAD to wait for the image to be written to the storage card before you could take the next shot. Currently, most digital cameras have relatively large buffers which allow them to operate as quickly as a film camera while writing data to the storage card in the background (without interrupting your ability to shoot).
The location of the buffer within the camera system is normally not specified, but affects the number of images that can be shot in burst mode. The buffer memory is located either before or after the image processing.
After Image Processing Buffer
With this method the images are processed and turned into their final output format before they are placed in the buffer. As a consequence, the number of shots which can be taken in a burst can be increased by reducing image file size (e.g. shoot in JPEG, reduce JPEG quality, reduce resolution).
Before Image Processing Buffer
In this method no image processing is carried out and the RAW data from the CCD is placed immediately in the buffer. In parallel to other camera tasks, the RAW images are processed and written to the storage card. In cameras with this type of buffer, the number of frames which can be taken in burst mode cannot be increased by reducing image file size. But the number of frames per second (fps) is independent of the image processing speed (until the buffer is full).
Smart Buffering
The "smart buffering" mentioned by Phil Askey in his Nikon D70 review, combines elements from the above two buffering methods. Just like in the "Before Image Processing Buffer" the unprocessed image data are stored into the buffer (1) allowing for a higher fps. They are then processed (2) and converted into JPEG, TIFF or RAW. But instead of writing the processed images to the storage card they are stored in the buffer (3). Therefore, the image processing is not bottlenecked by the writing to the storage card, which happens in parallel. Moreover, it constantly frees up buffer space for new images since (3) takes up less space than (2), especially in the case of JPEG. Just like in the "After Image Processing Buffer", the output images are then written from the buffer to the storage card (4). But an important difference is that here the image processing happens in parallel with writing to the storage card. So the image processing of new images can continue while the other images are being written to the storage card. This means that you do not necessarily have to wait for the entire burst of frames to be written to the CF card before there is enough space to take another full burst.
Camera System - Burst (Continuous)
Burst or Continuous Shooting mode is the digital camera's ability to take several shots immediately one after another, similar to a film SLR camera with a motorwind. The speed (number of frames per second or fps) and total number of frames differs greatly between camera types and models. The fps is a function of the shutter release and image processing systems of the camera. The number of frames that can be taken is defined by the size of the buffer where images are stored before they are processed (in case of a before image processing buffer) and written to the storage card.
The number of frames per second (fps) and total number of frames that can be shot in burst mode is continuously improving and is of course higher as you move from consumer and prosumer digital compacts to prosumer and professional digital SLRs. Digital compacts typically allow 1 to 3 fps with bursts of up to about 10 images, while digital SLRs reach 7 fps or more and can shoot dozens of frames in JPEG and RAW. Some even allow an initial burst at a higher fps followed by a slower but continuous fps until the storage card is full.
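A back-of-the-envelope calculation shows how buffer size bounds burst depth for a before-image-processing buffer. All numbers here are illustrative assumptions, not any particular camera's specifications:

```python
# Assumed values for illustration only.
buffer_mb = 32        # assumed buffer size
raw_frame_mb = 7.4    # assumed uncompressed 12-bit RAW frame (5 megapixel)
fps = 3               # assumed burst rate

burst_frames = int(buffer_mb // raw_frame_mb)
print(f"~{burst_frames} frames, about {burst_frames / fps:.1f} s of shooting")
# ~4 frames, about 1.3 s of shooting before the buffer fills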
Camera System - Color Filter Array
Each "pixel" on a digital camera sensor contains a light sensitive photo diode which measures the brightness of light. Because photodiodes are monochrome devices, they are unable to tell the difference between different wavelengths of light. Therefore, a "mosaic" pattern of color filters, a color filter array (CFA), is positioned on top of the sensor to filter out the red, green, and blue components of light falling onto it. The GRGB Bayer Pattern shown in this diagram is the most common CFA used.
Mosaic sensors with a GRGB CFA capture only 25% of the red and blue and just 50% of the green components of light.
Red channel pixels (25% of the pixels), green channel pixels (50% of the pixels), blue channel pixels (25% of the pixels), and the combined image.
As you can see, the combined image isn't quite what we'd expect but is sufficient to distinguish the colors of the individual items in the scene. If you squint your eyes or stand away from your monitor your eyes will combine the individual red, green, and blue intensities to produce a (dim) color image.
Red, Green, and Blue channels after interpolation, and the combined image.
The missing pixels in each color layer are estimated based on the values of the neighboring pixels and other color channels via the demosaicing algorithms in the camera. Combining these complete (but partially estimated) layers will lead to a surprisingly accurate combined image with three color values for each pixel.
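As a concrete illustration of the estimation step, here is a minimal sketch of bilinear demosaicing for an RGGB Bayer mosaic: each missing color value is the average of its nearest known neighbors in the same channel. Real cameras use far more sophisticated, edge-aware algorithms; the function names and the random test mosaic are assumptions for the example:

```python
import numpy as np

def conv3x3(img, kernel):
    """3x3 convolution with zero padding (no external dependencies)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def demosaic_rggb(mosaic):
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r_mask = (ys % 2 == 0) & (xs % 2 == 0)      # R on even rows and columns
    b_mask = (ys % 2 == 1) & (xs % 2 == 1)      # B on odd rows and columns
    g_mask = ~(r_mask | b_mask)                 # G on the remaining sites
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    r = conv3x3(mosaic * r_mask, k_rb)          # fill in missing red values
    g = conv3x3(mosaic * g_mask, k_g)           # fill in missing green values
    b = conv3x3(mosaic * b_mask, k_rb)          # fill in missing blue values
    return np.dstack([r, g, b])

rgb = demosaic_rggb(np.random.randint(0, 256, (8, 8)).astype(float))
print(rgb.shape)  # (8, 8, 3): three (partially estimated) values per pixel
```

The kernels simply average the two or four nearest same-color neighbors at sites where that color was not measured, and leave measured values untouched.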
Many other types of color filter arrays exist, such as CYGM using CYAN, YELLOW, GREEN, and MAGENTA filters in equal numbers, the RGBE found in Sony's DSC-F828, etc.
Camera System - Connectivity
A digital camera's connectivity defines how it can be connected to other devices for the transfer, viewing, or printing of images, and to use the camera for remote capture.
Image Transfer
Early digital cameras used slow RS232 (serial) connections to transfer images to your computer. Most digital cameras now feature USB 1.1 connectivity, with higher end models offering USB 2.0 and FireWire (IEEE 1394) connectivity. Manufacturers generally bundle such cameras with cables and driver software. Note that real transfer rates are always lower than the theoretical transfer rates indicated in the table below. Practical transfer speeds depend on your computer hardware and software configuration, the type of camera or reader, the type and quality of the storage card, whether you are reading or writing (reading is faster than writing), the average file size (a few large files transfer faster than many small ones), etc.
Instead of connecting the camera with a cable to your computer, you can also insert the storage card into the PC Card slot of your notebook or a dedicated card reader.
Theoretical Transfer Speeds | Transfer Rate
USB 2.0 - Low-Speed = USB 1.1 Minimum | 1.5 Mbps
USB 2.0 - Full-Speed = USB 1.1 Maximum | 12 Mbps
USB 2.0 - High-Speed | 480 Mbps
FireWire/IEEE 1394 | 100-400 Mbps

Practical Transfer Speeds | Approx. Transfer Rate
Digital Camera USB 1.1 | ~ 350 KB/s | ~ 3 Mbps
Digital Camera FireWire | ~ 500 KB/s | ~ 4 Mbps
USB 1.1 Card Reader | ~ 900 KB/s | ~ 7 Mbps
PC/PCMCIA Card Slot on notebook | ~ 1,300 KB/s | ~ 10 Mbps
USB 2.0 or FireWire Card Reader | ~ 3,200 KB/s | ~ 25 Mbps
A transfer rate of 1 Megabit per second (Mbps) equals 128 Kilobytes per second (KB/s) and is able to transfer 7.5 Megabytes of information per minute or about four 5 megapixel JPEG images.
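Using that rule of thumb, a rough transfer-time estimate is a one-liner. The card size and the reader speeds below are illustrative assumptions taken from the practical table above:

```python
# Rough transfer-time estimate using the rule of thumb from the text
# (1 Mbps = 128 KB/s). Card size and reader speeds are assumptions.

def transfer_minutes(card_mb, rate_mbps):
    kb_per_s = rate_mbps * 128            # 1 Mbps = 128 KB/s
    return card_mb * 1024 / kb_per_s / 60

for reader, mbps in [("USB 1.1 reader", 7), ("PC Card slot", 10),
                     ("USB 2.0 reader", 25)]:
    print(f"256 MB card via {reader}: {transfer_minutes(256, mbps):.1f} min")
# USB 1.1 reader: ~4.9 min, PC Card slot: ~3.4 min, USB 2.0 reader: ~1.4 min
```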
Remote Capture
On some cameras, the connection to transfer images can also be used for remote capture and time lapse applications.
Video Output
Most digital cameras also provide video (and sometimes audio) output for connection to a TV or VCR. More flexible cameras allow you to switch output between the PAL and NTSC video standards. Cameras with infrared remote controls make it easy to do slideshows for friends and family from the comfort of your armchair.
Print Output
Some digital cameras, e.g. those with PictBridge and USB Direct Print support, allow you to print images directly from the camera to an enabled printer via a USB cable without the need for a computer. Although printing directly from a digital camera is convenient, it eliminates one of the key benefits of digital imaging: the ability to edit and optimize your images.
Camera System - Effective Pixels
Effective Number of Pixels
A distinction should be made between the number of pixels in a digital image and the number of sensor pixel measurements that were used to produce that image. In conventional sensors, each pixel has one photodiode which corresponds with one pixel in the image. For instance, a conventional sensor in a 5 megapixel camera which outputs 2,560 x 1,920 images has an equal number of "effective" pixels, 4.9 million to be precise. Additional pixels surrounding the effective area are used for demosaicing the edge pixels, to determine "what black is", etc. Sometimes not even all sensor pixels are used. A classic example was Sony's DSC-F505V, which effectively used only 2.6 megapixels (1,856 x 1,392) out of the 3.34 megapixels available on the sensor. This was because Sony fitted the then new 3.34 megapixel sensor into the body of the previous model. As the sensor was slightly larger, the lens was not able to cover the whole sensor.
So the total number of pixels on the sensor is larger than the effective number of pixels used to create the output image. Often this higher number is preferred to specify the resolution of the camera for marketing purposes.
Interpolated Number of Sensor Pixels
Normally, each pixel in the image is based on the measurement in one pixel location. For instance, a 5 megapixel image is based on 5 million pixel measurements, give or take the use of some pixels surrounding the effective area. Sometimes a camera with, for instance, a 3 megapixel sensor is able to create 6 megapixel images. Here, the camera calculates, or interpolates, 6 million pixels of information based on the measurement of 3 million effective pixels on the sensor. When shooting in JPEG mode, this in-camera enlargement is of better quality than one performed on your computer because it is done before JPEG compression is applied. Enlarging JPEG images on your computer also makes the undesirable JPEG compression artifacts more visible. However, the quality difference is marginal and you are basically dealing with a slower 3 megapixel camera which fills up your memory cards twice as fast, not a good trade-off. It is similar to what happens when you use a digital zoom. Interpolation cannot create detail you did not capture.
Fujifilm's Super CCD Sensors
Normally sensor pixels are square. Fujifilm's Super CCD sensors have octagonal pixels, as shown in this diagram. Therefore, the distance "d2" between the centers of two octagonal pixels is smaller than the distance "d1" between two conventional square pixels, resulting in larger (better) pixels.
However, the information has to be converted to a digital image with square pixels. From the diagram you can see that, for a 4 x 4 area of 16 square pixels, only 8 octagonal pixel measurements were used: 2 red pixels, 2 blue pixels, and 4 green pixels (1 full, 4 half, and 4 quarter green pixels). In other words, 6 megapixel Super CCD images are based on the measurement by only 3 million effective pixels, similar to the above interpolated example, but with the advantage of larger pixels. In practice the resulting image quality is equivalent to about 4 megapixel. The drawback is that you have to deal with double the file size (leading to more storage and slower processing), while enjoying a quality improvement equivalent to only 33% more pixels.
Camera System - EXIF
Besides information about the pixels of the image, most cameras store additional information such as the date and time the image was taken, aperture, shutter speed, ISO, and most other camera settings. These data, also known as "metadata", are stored in a "header". A common type of header is the EXIF (Exchangeable Image File) header. EXIF is a standard for storing information created by JEIDA (Japan Electronic Industry Development Association) to encourage interoperability between imaging devices. EXIF data are very useful because you do not need to worry about remembering the settings you used when taking the image. Later you can analyze on your computer which camera settings created the best results, so you can learn from your experience.
Most current image editing and viewing programs are able to display, and even edit the EXIF data. Note that EXIF data may be lost when saving a file after editing. It's one of the many reasons you should always preserve your original image and use "Save As" after editing it.
Example of EXIF 2.2 information extracted with ACDSee 6.0.3 which allows the data preceded by the "pencil" icon to be edited.
Camera System - Fill Factor
The fill factor indicates the size of the light sensitive photodiode relative to the surface of the pixel. Because of the extra electronics required around each pixel the "fill factor" tends to be quite small, especially for Active Pixel Sensors which have more per pixel circuitry. To overcome this limitation, often an array of microlenses is placed on top of the sensor.
Camera System - Lag Time
Lag time is the time between you pressing the shutter release button and the camera actually taking the shot. This delay varies quite a bit between camera models, and used to be the biggest drawback of digital photography. The latest digital cameras, especially the prosumer and professional SLRs, have virtually no lag time and react in the same way as conventional film cameras, even in burst mode.
In our reviews we record "Lag Time" and define it as three distinct timings:
* Autofocus Lag (Half-press Lag): Many digital camera users prime the autofocus (AF) and autoexposure (AE) systems on their camera by half-pressing the shutter release. This lag is the amount of time between a half-press of the shutter release and the camera indicating an autofocus and autoexposure lock on the LCD/viewfinder (ready to shoot). This timing is normally the most variable as it is affected by the subject matter, current focus position, still or moving subject, etc.
* Shutter Release Lag (Half to Full-press Lag): The amount of time it takes to take the shot (assuming you have already primed the camera with a half-press) by pressing the shutter release button all the way down.
* Total Lag (Full-press Lag): The amount of time it takes from a full depression of the shutter release button (without performing a half-press first) to the image being taken. This is more representative of the use of the camera in a spur of the moment "point and shoot" situation. The Total Lag is not equal to the sum of the Autofocus and Shutter Release Lags.
Camera System - LCD
LCD as Viewfinder
Digital compact cameras allow you to use the LCD as a viewfinder by providing a live video feed of the scene to be captured. The LCDs normally measure between 1.5" and 2.5" diagonally with typical resolutions between 120,000 and 240,000 pixels. The better LCDs have an anti-reflective coating and/or a reflective sheet behind the LCD to allow for viewing in bright outdoor daylight. Some LCDs can be flipped out of the body or angled up or down to make it easier to take low angle or high angle shots. The main LCD is sometimes supplemented by an electronic viewfinder which uses a smaller 0.5" LCD, simulating the effect of a TTL optical viewfinder. LCDs on digital SLRs normally do not support live previews and are only used to review images and change the camera settings.
Left: digital compact with a twist LCD. Right: fixed LCD on a digital SLR.
LCD to Play Back Images
The LCD screen delivers one of the key benefits of digital photography: the ability to play back your images immediately after shooting. However, since only about 120,000 to 240,000 pixels are used to represent the several million pixels in the original digital image, further magnification is needed to determine whether the image is sufficiently sharp or needs reshooting. Not all cameras offer magnification, and the magnification factor differs per model. Some cameras allow basic editing functions such as rotating, resizing images, trimming video clips, etc. In playback mode you can also select an image from the thumbnail index.
Besides playback, many cameras allow you to "scroll" through the EXIF data, view the histogram, and even show areas with potential for overexposure, as shown in this animation.
LCD Used as Menu
The LCD is also used to change the camera settings via the camera buttons, often allowing you to adjust the brightness and color settings of the LCD itself. The main LCD is frequently supplemented by one or more monochrome LCDs (which use less battery power) on top and/or at the rear of the camera showing the most important camera and exposure settings.
Left: menu system displayed by the LCD. Right: example of a monochrome status LCD providing information such as battery and storage card status, exposure, focus mode, white balance, etc. Often a backlight can be activated via a button.
Camera System - Manual Focus
Manual focus disables the camera's built-in automatic focus system so you can focus the lens by hand (*). Manual focus is useful for low light, macro or special effects photography. It is very important when the autofocus system is unable to get a good focus lock, e.g. in low light situations. Note that some digital cameras allow you to manually focus only to a few preset distances. Higher-end digital cameras allow focusing using the normal focus ring on the attached lens, just like in conventional photography.
(*) In digital cameras, manual focus is often implemented on a fly-by-wire basis, whereby the manual inputs to focus in or out are relayed to the autofocus system which effects the change in focus.
Camera System - Microlenses
To overcome the limitations of a low fill factor, on certain sensors an array of microlenses is placed on top of the color filter array in order to funnel the photons of a larger area into the smaller area of the light sensitive photodiode.
Left: a microlens funnels the light of a larger area into the photodiode (indicated in red) of the pixel. Right: electron microscope image of real microlenses.
Camera System - Pixel Quality
The marketing race for "more megapixels" would like us to believe that "more is better". Unfortunately, it's not that simple. The number of pixels is only one of many factors affecting image quality and more pixels is not always better. The quality of a pixel value can be described in terms of geometrical accuracy, color accuracy, dynamic range, noise, and artifacts. The quality of a pixel value depends on the number of photodetectors that were used to determine it, the quality of the lens and sensor combination, the size of the photodiode(s), the quality of the camera components, the level of sophistication of the in-camera imaging processing software, the image file format used to store it, etc. Different sensor and camera designs make different compromises.
Geometrical Accuracy
Geometrical or spatial accuracy is related to the number of pixel locations on the sensor and the ability of the lens to match the sensor resolution. The resolution topic explains how this is measured at this site. Interpolation will not improve geometrical accuracy as it cannot create what was not captured.
Color Accuracy
Conventional sensors using a color filter array have only one photodiode per pixel location and will display some color inaccuracies around edges because the missing pixels in each color channel are estimated via demosaicing algorithms. Increasing the number of pixel locations on the sensor will reduce the visibility of these artifacts. Foveon sensors have three photodetectors per pixel location and therefore achieve higher color accuracy by eliminating the demosaicing artifacts. Unfortunately their sensitivities are currently lower than those of conventional sensors and the technology is only available in a few cameras.
Dynamic Range
The size of the pixel location and the fill factor determine the size of the photodiode, and this has a big impact on the dynamic range. Higher quality sensors are more accurate and will be able to output a larger dynamic range, which can be preserved by storing the pixel values in a RAW image file. A variant of the Fujifilm Super CCD, the Super CCD SR, uses two photodiodes per pixel location with the objective of increasing the dynamic range: a more sensitive photodiode measures the shadows, while a less sensitive photodiode measures the highlights.
Noise
The pixel value consists of two components:
(1) what you want to see (the actual measurement of the value in the scene)
(2) what you do not want to see (noise).
The higher (1), and the lower (2), the better the quality of the pixel. The quality of the sensor and the size of its pixel locations have a great impact on noise and how it changes with increasing sensitivity.
Artifacts
Besides noise, there are many other types of artifacts that determine pixel quality.
Conclusion
Unfortunately there is no single standard objective quality number to compare image quality across different types of sensors and cameras. For instance, a 3 megapixel Foveon type sensor uses 9 million photodetectors in 3 million pixel locations. The resulting quality is higher than a 3 megapixel but lower than a 9 megapixel conventional image and it also depends on the ISO level you compare it at. Likewise, a 6 megapixel Fujifilm Super CCD image is based on measurements in 3 million pixel locations. The quality is higher than a 3 megapixel image but lower than a 6 megapixel image. A 6 megapixel digital compact image will be of lower quality than a 6 megapixel digital SLR image with larger pixels. To determine an "equivalent" resolution is tricky at best.
At the end of the day, the most important thing is that you are happy with the quality level that comes out of your camera for the purpose you need it for (e.g. website, viewing on computer, printing, enlargements, publishing, etc.). I strongly recommend that you look beyond megapixels when purchasing a digital camera.
Camera System - Pixels
Sensor Pixels
Similar to an array of buckets collecting rain water, digital sensors consist of an array of "pixels" collecting photons, the minute energy packets of which light consists. The number of photons collected in each pixel is converted into an electrical charge by the light sensitive photodiode. This charge is then converted into a voltage, amplified, and converted to a digital value via the analog to digital converter, so that the camera can process the values into the final digital image.
As explained in the sensor sizes topic, sensors of digital compact cameras are substantially smaller than those of digital SLRs with a similar pixel count. As a consequence, the pixel size is substantially smaller. This explains the lower image quality of digital compact cameras, especially in terms of noise and dynamic range.
Left: typical sensor size of 3, 4, and 5 megapixel digital compact cameras. Right: typical sensor size of 6 megapixel digital SLRs.
Typical pixel size of 4 megapixel compacts and 6 megapixel SLRs
Digital Image Pixels
A digital image is similar to a spreadsheet with rows and columns which stores the pixel values generated by the sensor. Pixels in a digital image have no size until they are displayed on a monitor or printed. For instance, on a 4" x 6" print, each pixel in a 5 megapixel image measures only about 0.06mm, while on an 8" x 10" print it measures about 0.10mm.
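The arithmetic behind those figures is straightforward. This worked example assumes the print uses the full 2,560-pixel width of a 5 megapixel image on its long side:

```python
# Pixel pitch on a print: print length divided by the pixels along it.

def pixel_pitch_mm(print_long_side_inches, pixels_long_side=2560):
    ppi = pixels_long_side / print_long_side_inches  # pixels per inch
    return 25.4 / ppi                                # millimeters per pixel

print(f'4" x 6" print:  {pixel_pitch_mm(6):.2f} mm per pixel')   # ~0.06 mm
print(f'8" x 10" print: {pixel_pitch_mm(10):.2f} mm per pixel')  # ~0.10 mm
```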
Camera System - Sensor Size
Left: typical sensor size of 3, 4, and 5 megapixel digital compact cameras. Right: typical sensor size of 6 megapixel digital SLRs.
This diagram shows the typical sensor sizes compared to 35mm film. The sensor sizes of digital SLRs are typically 40% to 100% of the surface of 35mm film. Digital compact cameras have substantially smaller sensors offering a similar number of pixels. As a consequence, the pixels are much smaller, which is a key reason for the image quality difference, especially in terms of noise and dynamic range.
Sensor Type Designation
Sensors are often referred to with a "type" designation using imperial fractions such as 1/1.8" or 2/3" which are larger than the actual sensor diameters. The type designation harks back to a set of standard sizes given to TV camera tubes in the 1950s. These sizes were typically 1/2", 2/3", etc. The size designation does not define the diagonal of the sensor area but rather the outer diameter of the long glass envelope of the tube. Engineers soon discovered that for various reasons the usable area of this imaging plane was approximately two thirds of the designated size. This designation has clearly stuck (although it should have been thrown out long ago). There appears to be no specific mathematical relationship between the diameter of the imaging circle and the sensor size, although it is always roughly two thirds.
Common Image Sensor Sizes
In the table below "Type" refers to the commonly used type designation for sensors, "Aspect Ratio" refers to the ratio of width to height, "Dia." refers to the diameter of the tube size (this is simply the Type converted to millimeters), "Diagonal / Width / Height" are the dimensions of the sensors image producing area.
Type | Aspect Ratio | Dia. (mm) | Diagonal (mm) | Width (mm) | Height (mm)
1/3.6" | 4:3 | 7.06 | 5.00 | 4.00 | 3.00
1/3.2" | 4:3 | 7.94 | 5.68 | 4.54 | 3.42
1/3" | 4:3 | 8.47 | 6.00 | 4.80 | 3.60
1/2.7" | 4:3 | 9.41 | 6.72 | 5.37 | 4.04
1/2.5" | 4:3 | 10.16 | 7.18 | 5.76 | 4.29
1/2" | 4:3 | 12.70 | 8.00 | 6.40 | 4.80
1/1.8" | 4:3 | 14.11 | 8.93 | 7.18 | 5.32
2/3" | 4:3 | 16.93 | 11.00 | 8.80 | 6.60
1" | 4:3 | 25.40 | 16.00 | 12.80 | 9.60
4/3" | 4:3 | 33.87 | 22.50 | 18.00 | 13.50
35 mm film | 3:2 | n/a | 43.27 | 36.00 | 24.00
Implementation Examples
Below is a list of a few digital cameras (as examples) and their sensor size.
Camera | Sensor Type | Pixel count | Sensor size
Konica Minolta DiMAGE Xg | 1/2.7" CCD | 3.3 million | 5.3 x 4.0 mm
Canon PowerShot S500 | 1/1.8" CCD | 5.0 million | 7.2 x 5.3 mm
Nikon Coolpix 8700 | 2/3" CCD | 8.0 million | 8.8 x 6.6 mm
Sony DSC-F828 | 2/3" CCD | 8.0 million | 8.8 x 6.6 mm
Konica Minolta DiMAGE A2 | 2/3" CCD | 8.0 million | 8.8 x 6.6 mm
Nikon D70 | CCD | 6.1 million | 23.7 x 15.6 mm
Canon EOS-1Ds | CMOS | 11.4 million | 36 x 24 mm
Kodak DCS-14n | CMOS | 13.8 million | 36 x 24 mm
Camera System - Sensors
The New Foveon Sensors
The cone-shaped cells inside our eyes are sensitive to red, green, and blue, the "primary colors". We perceive all other colors as combinations of these primary colors. In conventional photography, the red, green, and blue components of light expose the corresponding chemical layers of color film. The new Foveon sensors are based on the same principle, and have three sensor layers that measure the primary colors, as shown in this diagram. Combining these color layers results in a digital image, basically a mosaic of square tiles or "pixels" of uniform color which are so tiny that the image appears continuous and smooth.
As a relatively new technology, Foveon sensors are currently only available in the Sigma SD9 and SD10 digital SLRs and have drawbacks such as relatively low light sensitivity.
The Current Color Filter Array Sensors
All other digital camera sensors only measure the brightness of each pixel. As shown in this diagram, a "color filter array" is positioned on top of the sensor to capture the red, green, and blue components of light falling onto it. As a result, each pixel measures only one primary color, while the other two colors are "estimated" based on the surrounding pixels via software. These approximations reduce image sharpness, which is not the case with Foveon sensors. However, as the number of pixels in current sensors increases, the sharpness reduction becomes less visible. Also, the technology is in a more mature stage and many refinements have been made to increase image quality.
Active Pixel Sensors (CMOS, JFET LBCAST) versus CCD Sensors
Similar to an array of buckets collecting rain water, digital camera sensors consist of an array of "pixels" collecting photons, the minute energy packets of which light consists. The number of photons collected in each pixel is converted into an electrical charge by the photodiode. This charge is then converted into a voltage, amplified, and converted to a digital value via the analog to digital converter, so that the camera can process the values into the final digital image.
In CCD (Charge-Coupled Device) sensors, the pixel measurements are processed sequentially by circuitry surrounding the sensor, while in APS (Active Pixel Sensors) the pixel measurements are processed simultaneously by circuitry within the sensor pixels and on the sensor itself. Capturing images with CCD and APS sensors is similar to image generation on CRT and LCD monitors respectively.
The most common type of APS is the CMOS (Complementary Metal Oxide Semiconductor) sensor. CMOS sensors were initially used in low-end cameras, but recent improvements have made them more and more popular in high-end cameras such as the Canon EOS D60 and 10D. Moreover, CMOS sensors are faster, smaller, and cheaper because they are more integrated (which also makes them more power-efficient), and they are manufactured in existing computer chip plants. The earlier mentioned Foveon sensors are also based on CMOS technology. Nikon's new JFET LBCAST sensor is an APS using JFET (Junction Field Effect Transistor) instead of CMOS transistors.
Camera System - Thumbnail Index
When in playback mode, most digital cameras allow you to access the images and video clips on the storage card via a thumbnail index, an interactive contact sheet. Mostly a 2 x 2 or 3 x 3 grid of images is used, and sometimes this can be specified by the user. Buttons on the camera allow you to navigate through the thumbnails or select them and, depending on the camera, perform basic operations such as hiding or deleting images, organizing them into folders, viewing them as a slideshow, printing directly from the camera, etc. Selecting a thumbnail will show a larger version of the image that fills the whole LCD. Read more in the LCD topic of this glossary.
Left: typical 3 x 3 thumbnail index on a digital camera. Right: menu that allows you to choose what you want to do with the selected image(s).
Camera System - Viewfinder
The viewfinder is the "window" you look through to compose the scene. We will discuss the four types of viewfinder commonly found on digital cameras.
Optical Viewfinder on a Digital Compact Camera
The optical viewfinder on a digital compact camera consists of a simple optical system that zooms at the same time as the main lens and has an optical path that runs parallel to the camera's main lens. These viewfinders are small and their biggest problem is framing inaccuracy. Since the viewfinder is positioned above the actual lens (often there is also a horizontal offset), what you see through the optical viewfinder is different from what the lens projects onto the sensor. This "parallax error" is most obvious at relatively small subject distances. In many instances the optical viewfinder only allows you to see a percentage (80 to 90%) of what the sensor will capture. For more accurate framing, it is recommended to use the LCD instead. For those who wear corrective glasses it's worth checking to see if the viewfinder has any diopter adjustment.
Left: because the optical path of the viewfinder runs parallel to the camera's main lens, what you see is different from what the lens projects onto the sensor. Right: sometimes optical viewfinders have parallax error lines on them to indicate what the sensor will see at relatively small subject distances (e.g. below 1.5 meters or 5 feet).
LCD on a Digital Compact Camera (TTL)
The LCD on a digital compact camera shows in real time what is projected onto the sensor by the lens and therefore avoids the above parallax errors. This is also called "TTL" or "Through-The-Lens" viewing. Using the LCD for framing will shorten battery life and it may be difficult to frame accurately in very bright sunlight conditions, in which case you will have to resort to the optical or electronic viewfinder (see below). The LCDs on virtually all digital SLRs will only show the image after it is taken and give no live previews.
Example of digital compact with a twist LCD
Optical Viewfinder on a Digital SLR Camera (TTL)
The optical viewfinder of a digital SLR shows what the lens will project on the sensor via a mirror and a prism and has therefore no parallax error. When you depress the shutter button, the mirror flips up so the lens can expose the sensor. As a consequence, and due to sensor limitations, the LCD on most digital SLRs will only show the image after it is taken and give no live previews. In some models this is resolved by replacing the mirror by a prism (at the expense of incoming light). The optical viewfinder normally also features an LCD "status bar" along the bottom of the viewfinder relaying exposure and camera setting information.
Left: the optical TTL viewfinder allows you to look "through the lens". Right: optical TTL viewfinder on an SLR with diopter adjustment (slider on the right side).
Electronic Viewfinder (EVF) on a Digital Compact Camera (TTL)
An electronic viewfinder (EVF) functions like the LCD on a digital compact camera and shows in real time what is projected onto the sensor by the lens. It is basically a small LCD (typically measuring 0.5" diagonally and 235,000 pixels) with a lens in front of it, which allows you to frame more accurately, especially in bright sunlight. It simulates in an electronic way the effect of the (superior) optical TTL viewfinders found on digital SLRs and doesn't suffer from parallax errors. Cameras with an EVF have an LCD as well, but no true optical viewfinder.
Example of an electronic viewfinder
Camera System - Storage Card
Storage cards are to digital cameras what film is to conventional cameras. They are removable devices which hold the images taken with the camera. Storage cards are keeping up with the rapidly changing digital camera market, with capacities growing and prices falling steadily.
The only downside of all this good news is a proliferation of storage card formats, making it more difficult to use cards across different cameras, card readers, and other devices (such as PDAs, MP3 players, etc). The image and table below give you an idea of how the sizes of typical formats compare:
Card Type | Dimensions in mm | Volume in mm³
CompactFlash II / Microdrive | 42.8 x 36.4 x 5.0 | 7,790
CompactFlash I | 42.8 x 36.4 x 3.3 | 5,141
Memory Stick | 50.0 x 21.5 x 2.8 | 3,010
Secure Digital | 32.0 x 24.0 x 2.1 | 1,613
SmartMedia | 45.0 x 37.0 x 0.8 | 1,332
MultiMediaCard | 32.0 x 24.0 x 1.4 | 1,075
Memory Stick Duo | 31.0 x 20.0 x 1.6 | 992
xD Picture Card | 25.0 x 20.0 x 1.7 | 850
Reduced Size MultiMediaCard | 18.0 x 24.0 x 1.4 | 605
CompactFlash
CompactFlash is a proven and reliable format compatible with many devices and generally ahead of other formats in terms of storage capacity. Capacities above 2.2 GB require that your camera supports "FAT32". CompactFlash comes in Type I and II which only differ in thickness (3.3mm and 5.0mm) with Type I being the most popular for flash memory, while Type II is used by microdrives.
Microdrives
Pioneered by IBM, microdrives are minute hard disks that come in CompactFlash Type II format and typically offer larger storage capacities at a cheaper cost per megabyte. However, CompactFlash has been catching up with higher capacity cards. Microdrives use more battery power, create more heat (which can result in more noise) and have a higher risk of failure because they contain moving parts.
SmartMedia
SmartMedia cards are larger in surface area than CompactFlash but much thinner, which makes them more fragile, and they are known to be less reliable. This format is gradually being phased out of the market, with virtually no new cameras being announced that support it.
Sony Memory Stick
Yet another standard, set by Sony but now also manufactured by others such as Lexar Media. The main drawback is that fewer cameras use this type of memory, although their number is gradually increasing. So if you buy another brand of camera later on, you may not be able to use your memory sticks. Memory Sticks are also more expensive per megabyte because there is less competition in the market. Although their capacity continues to increase, they tend to lag behind CompactFlash in terms of maximum capacity. Several variants exist, such as Memory Stick with Select Function, Memory Stick Pro, Memory Stick Duo, and MagicGate.
Secure Digital (SD)
Supported by the SD Card Association (SDA), this compact type of memory card allows for fast data transfer and has built-in security functions to facilitate the secure exchange of content, including copyright (music) protection. This makes SD cards more expensive than the similar MultiMediaCards which we will discuss next. SD cards have a small write-protection switch on the side, similar to floppy disks.
MultiMediaCard/SecureMultiMediaCard/Reduced Size MultiMediaCard (MMC/SecureMMC/RS-MMC)
Supported by the MultiMediaCard Association (MMCA), MultiMediaCards have the same surface area as SD cards but are 0.7mm thinner and have two fewer pins. Hardware-wise, MMC cards fit in SD card slots, and many, but not all, SD devices and cameras will accept MMC cards as well. Check the specs before you buy. Two variants are SecureMMC, similar to SD, and Reduced Size MMC.
xD Picture Card
Another format aimed at very small digital cameras, developed by Olympus and Fujifilm.
Other Formats
Older formats include floppy disks and PCMCIA cards. A few models support writing to 3-inch CD-R/RW discs. Some low-end cameras don't have removable storage cards but instead have built-in flash memory.
Digital Imaging - Aliasing
Aliasing refers to the jagged appearance of diagonal lines, edges of circles, etc. due to the square nature of pixels, the building blocks of digital images.
Term | Enlarged View (4X) | Comment
Aliased | (image) | Steps or "jaggies" are visible, especially when magnifying the image.
Anti-aliased | (image) | Anti-aliasing makes the edges look much smoother at normal magnifications.
Anti-aliasing
Anti-aliasing makes the edges appear much smoother by averaging out the pixels around the edge. In this example some blue is added to the yellow edge pixels and some yellow is added to the blue edge pixels, thereby making the transition between the yellow circle and the blue background more gradual and smooth. Most image editing software packages have "anti-aliasing" options for typing fonts, drawing lines and shapes, making selections, etc. Anti-aliasing also occurs naturally in digital camera images and smooths out the "jaggies".
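One simple way to produce that averaging, shown in the sketch below, is supersampling: render the shape at a higher resolution and then average blocks of sub-pixels down to single pixels, so edge pixels get intermediate values instead of a hard step. The grid size and the diagonal-edge test pattern are assumptions for the example:

```python
import numpy as np

n, factor = 8, 4                              # 8x8 output, 4x supersampling
hi = np.zeros((n * factor, n * factor))
ys, xs = np.mgrid[0:n * factor, 0:n * factor]
hi[(xs - ys) > 4] = 1.0                       # a hard diagonal edge

# Average each factor x factor block (box-filter downsample):
aa = hi.reshape(n, factor, n, factor).mean(axis=(1, 3))
print(np.unique(aa))  # values between 0 and 1: the smoothed edge pixels
```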
Digital Imaging - Artifacts
Artifacts refer to a range of undesirable changes to a digital image caused by the sensor, optics, and internal image processing algorithms of the camera. The table below lists some of the common digital imaging artifacts and links to the corresponding glossary items.
Blooming, Maze Artifacts, Chromatic Aberrations, Moiré, Jaggies, Noise, JPEG Compression, and Sharpening Halos.
Digital Imaging - Blooming
A pixel on a digital camera sensor collects photons which are converted into an electrical charge by its photodiode. As explained in the dynamic range topic, once the "bucket" is full, the charge caused by additional photons will overflow and have no effect on the pixel value, resulting in a clipped or overexposed pixel value. Blooming occurs when this charge flows over to surrounding pixels, brightening or overexposing them in the process. In the example below, the charge overflow of the overexposed pixels in the sky causes the dark pixels at the edges of the leaves and branches to be brightened and overexposed as well. As a result detail is lost. Blooming can also increase the visibility of chromatic aberrations.
Some sensors come with "anti-blooming gates" which drain away the overflowing charge so it does not affect the surrounding pixels, except for extreme exposures (very bright edge against a virtually black edge).
Digital Imaging - Color Spaces
The Additive RGB Colors
The cone-shaped cells inside our eyes are sensitive to red, green, and blue. We perceive all other colors as combinations of these three colors. Computer monitors emit a mix of red, green, and blue light to generate various colors. For instance, combining the red and green "additive primaries" will generate yellow. The animation below shows that if adjacent red and green lines (or dots) on a monitor are small enough, their combination will be perceived as yellow. Combining all additive primaries will generate white.
The Additive RGB Color Space
The Subtractive CMYk Colors
A print emits light indirectly by reflecting light that falls upon it. For instance, a page printed in yellow absorbs (subtracts) the blue component of white light and reflects the remaining red and green components, thereby creating a similar effect as a monitor emitting red and green light. Printers mix Cyan, Magenta, and Yellow ink to create all other colors. Combining these subtractive primaries will generate black, but in practice black ink is used, hence the term "CMYk" color space, with k standing for the last character of black.
The Subtractive CMYk Color Space
The LAB and Adobe RGB (1998) Color Spaces
Due to technical limitations, monitors and printers are unable to reproduce all the colors we can see with our eyes, also called the "LAB" color space, symbolized by the horseshoe shape in the diagram below. The group of colors an average computer monitor can replicate is called the (additive) sRGB color space. The group of colors a printer can generate is called the (subtractive) CMYk color space. There are many types of CMYk, depending on the device. From the diagram you can see that certain colors are not visible on an average computer monitor but printable by a printer and vice versa. Higher-end digital cameras allow you to shoot in Adobe RGB (1998), which is larger than sRGB and CMYk. This will allow for prints with a wider range of colors. However, most monitors are only able to display colors within sRGB.
Digital Imaging - Compression
Image files can be compressed in two ways: lossless and lossy.
Lossless Compression
Lossless compression is similar to what WinZip does. For instance, if you compress a document into a ZIP file and later extract and open the document, the content will of course be identical to the original. No information is lost in the process. Only some processing time was required to compress and decompress the document. TIFF is an image format that can be compressed in a lossless way.
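The round-trip guarantee is easy to demonstrate with a general-purpose lossless compressor such as zlib (the same family of algorithms used by ZIP). The sample data is an assumption chosen to compress well:

```python
import zlib

# A lossless round trip returns the original bytes exactly,
# just like extracting a ZIP archive.
original = b"blue sky " * 1000          # repetitive data compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")  # e.g. 9000 -> ~60
assert restored == original             # nothing was lost
```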
Lossy Compression
Lossy compression reduces the image size by discarding information and is similar to summarizing a document. For example, you can summarize a 10 page document into a 9 page or 1 page document that represents the original, but you cannot create the original out of the summary as information was discarded during summarization. JPEG is an image format that is based on lossy compression.
A Numerical Example
The table below shows how, on average, a five megapixel image (2,560 x 1,920 pixels) is compressed using the various image formats which are discussed in this glossary. Please note that in reality, the compressed file sizes will vary significantly with the amount of detail in the image. For example, the table shows 1.3 MB as file size for an 80% Quality JPEG five megapixel image. However, if the image has a lot of uniform surfaces (e.g. blue skies), it could be only 0.8 MB at 80% JPEG quality, and if it has a lot of fine detail, it could be 1.7 MB. The purpose of this table is to give a ballpark estimate.
Image Format | Typical File Size in MB | Comment
Uncompressed TIFF | ~ 14 MB | 3 channels of 8 bits
Uncompressed 12-bit RAW | ~ 7 MB | 1 channel of 12 bits
Compressed TIFF | n/a | Lossless compression
Compressed 12-bit RAW | n/a | Lossless compression
100% Quality JPEG | n/a | Hard to distinguish from uncompressed
80% Quality JPEG | ~ 1.3 MB | Sufficient quality for 4" x 6" prints
60% Quality JPEG | n/a | Sufficient quality for websites *
20% Quality JPEG | n/a | Very low image quality
* For the web you would of course downsample the image to a lower resolution.
Digital Imaging - Digital Zoom
Optical zoom is the magnification factor between the minimum and maximum focal lengths of the lens. Consumer and prosumer cameras often also come with a digital zoom, which we will discuss based on the example of a 5 megapixel prosumer camera.
A. Scene shot with a 31mm lens | B. Scene shot with a 50mm lens
Changing the focal length from 31mm to 50mm (50/31 = 1.6X optical zoom) reduces the field of view. In image B, the sensor captures the red zone indicated in image A. In both cases the camera will store 5 megapixels of information into a 5 megapixel image.
C. 1.6X Digital Zoom (crop) | D. 1.6X Digital Zoom (upsampled)
A 1.6X digital zoom will only use the information of a 1,600 x 1,200 crop and discard the rest (2,560/1.6 = 1,600 and 1,920/1.6 = 1,200). In image C, the camera has captured the same field of view as in image B but only uses 2 megapixels out of the 5 megapixel resolution! If the digital camera has the option to output 1,600 x 1,200 images, the crop will be saved as a 2 megapixel image. In most cases, the 1,600 x 1,200 crop will be upsampled to the full resolution of the camera as indicated in image D. No additional information is created in the process and the quality of image D is clearly lower than that of image B.
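Mechanically, that is all a digital zoom does: crop the center and scale it back up. Here is a minimal sketch using nearest-neighbor index mapping; the function name and the random test image are assumptions, and real cameras use smoother (e.g. bilinear or bicubic) resampling:

```python
import numpy as np

def digital_zoom(image, factor):
    """Crop the central 1/factor region and upsample it back to full size."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)    # crop size, e.g. 1200 x 1600
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = image[y0:y0 + ch, x0:x0 + cw]
    ys = np.arange(h) * ch // h                  # map output rows to crop rows
    xs = np.arange(w) * cw // w                  # map output cols to crop cols
    return crop[np.ix_(ys, xs)]                  # nearest-neighbor upsample

image = np.random.randint(0, 256, (1920, 2560))
zoomed = digital_zoom(image, 1.6)
print(zoomed.shape)  # (1920, 2560), but built from only 1200 x 1600 samples
```

The output has 5 megapixels of pixels but only 2 megapixels of information, which is exactly why image D cannot match image B.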
To Use Or Not to Use Digital Zoom
So what is the best thing to do? If your purpose is to capture the information shown in image B, using a lens with a focal length of 50mm is of course the best option. If you only have a 31mm lens available (or in general, if you have reached the maximum optical zoom and need to zoom in more), there are a few things you can do, among them:
* If for some reason your intention is to upsample and you are shooting in JPEG, one benefit of digital zoom is that the upsampling in the camera is done before JPEG compression. If you shoot A, crop the 1,600 x 1,200 area, and then upsample to 2,560 x 1,920 on your computer, you will magnify the JPEG compression artifacts and the upsampled image will not look as good as image D. Because not all digital zooms are created equal, you may want to verify the quality differences with your particular digital camera before using digital zoom for this purpose.
Digital Imaging - Dynamic Range
If you come out of a dark room and suddenly face bright sunlight, your eyes "hurt" initially. Or if you suddenly enter a dark room, it takes a while before you start to see anything, as your eyes need time to adjust their sensitivity. Similarly, a camera sensor has difficulty capturing bright and dark areas at the same time. Cameras with a large dynamic range are able to capture subtle tonal gradations in the shadow, midtone, and highlight areas of the scene. In technical terms, dynamic range is defined as the ratio of the highest non-white value to the smallest non-black value a sensor can capture at the same time.
Pixel Size and Dynamic Range
We learned earlier that a digital camera sensor has millions of pixels collecting photons during the exposure of the sensor. You could compare this process to millions of tiny buckets collecting rain water. The brighter the captured area, the more photons are collected. After the exposure, the level of each bucket is assigned a discrete value, as explained in the analog to digital conversion topic. Empty and full buckets are assigned values of "0" and "255" respectively, and represent pure black and pure white, as perceived by the sensor. The conceptual sensor below has only 16 pixels. The pixels which capture the bright parts of the scene fill up very quickly.
Once they are full, they overflow (this can also cause blooming). What flows over gets lost, as indicated in red, and the values of these buckets all become 255, while they actually should have been different. In other words, detail is lost. This causes "clipped highlights" as explained in the histogram section. On the other hand, if you reduce the exposure time to prevent further highlight clipping, as we did in the above example, then many of the pixels which correspond to the darker areas of the scene may not have had enough time to capture any photons and might still have value zero (hence the term "clipped shadows", as all the values are zero, while in reality there might be minor differences).
It is easy to understand that one of the reasons digital SLRs have a larger dynamic range is that their pixels are larger. Larger pixels do not "fill up" so quickly, so there is more time to capture the dark pixels before the bright ones start to overflow.
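The bucket metaphor can be simulated in a few lines. The scene values and exposure factors below are made up for illustration; the point is that when the sensor's range is limited, no single exposure serves both extremes:

```python
import numpy as np

scene = np.array([0.05, 0.2, 1.0, 4.0, 9.0])  # relative scene brightness

for exposure in (100, 25):                    # long vs short exposure
    buckets = np.clip(scene * exposure, 0, 255).round()
    print(f"exposure {exposure}:", buckets)
# exposure 100: [  5.  20. 100. 255. 255.]  <- highlights clipped at 255
# exposure  25: [  1.   5.  25. 100. 225.]  <- shadows crushed toward 0
```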
Some Examples
* The dynamic range of the camera was able to capture the dynamic range of the scene. The histogram indicates that both shadow and highlight detail are captured.
* Here the dynamic range of the camera was smaller than the dynamic range of the scene. The histogram indicates that some shadow and highlight detail is lost.
* The limited dynamic range of this camera was used to capture highlight detail at the expense of shadow detail. The short exposure needed to prevent the highlight buckets from overflowing gave some of the shadow buckets insufficient time to capture any photons.
* The limited dynamic range of this camera was used to capture shadow detail at the expense of highlight detail. The long exposure needed by the shadow buckets to collect sufficient photons resulted in the overflowing of some of the highlight buckets.
* Here the dynamic range of the scene is smaller than the dynamic range of the camera, which is typical when shooting images from an airplane. The histogram can be stretched to cover the whole tonal range, resulting in a more contrasty image, but posterization can occur.
Summary
Cameras with a large dynamic range are able to capture shadow detail and highlight detail at the same time.
Digital Imaging - Gamma
Each pixel in a digital image has a certain level of brightness ranging from black (0) to white (1). These pixel values serve as the input for your computer monitor. Due to technical limitations, CRT monitors output these values in a nonlinear way:

Output = Input^Gamma

When unadjusted, most CRT monitors have a "gamma" of 2.5, which means that pixels with an average brightness of 0.5 will be displayed with a brightness of 0.5^2.5 or 0.18, much darker. LCDs tend to have rather irregularly shaped output curves. Calibration via software and/or hardware ensures that the monitor outputs the image based on a predetermined gamma curve.

When gamma=1, the monitor responds in a linear way (Output = Input), but images will appear "flat" and overly bright, the other extreme. In practice, a gamma of around 2.0 creates a more desirable output which is neither too bright nor too dark and is pleasing to our vision (which is nonlinear as well). Windows and Mac computers use gammas of 2.2 and 1.8 respectively.
Linear Gamma 1.0: Input 0.5 -> Output 0.5. Image looks too bright and "flat".
Nonlinear Gamma 2.2: Input 0.5 -> Output 0.22. Image looks contrasty and pleasing to the eye.
Nonlinear Gamma 2.5: Input 0.5 -> Output 0.18. Image looks too dark (exaggerated example).
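The gamma curve is simple enough to verify yourself. This minimal Python sketch reproduces the input/output values above:

```python
# Minimal sketch of the display transfer curve described above:
# Output = Input ** gamma, with brightness normalized to the 0..1 range.
def apply_gamma(value: float, gamma: float) -> float:
    return value ** gamma

for gamma in (1.0, 2.2, 2.5):
    print(f"gamma {gamma}: input 0.5 -> output {apply_gamma(0.5, gamma):.2f}")
# gamma 1.0: input 0.5 -> output 0.50
# gamma 2.2: input 0.5 -> output 0.22
# gamma 2.5: input 0.5 -> output 0.18
```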
Digital Imaging - Histogram
Histograms are the key to understanding digital images. This 10x4 mosaic contains 40 tiles which we could sort by color and then stack up accordingly. The higher the pile, the more tiles of that color in the mosaic. The resulting "histogram" would represent the color distribution of the mosaic.
In the sensor topic we learned that a digital image is basically a mosaic of square tiles or "pixels" of uniform color which are so tiny that the image appears continuous and smooth. Instead of sorting tiles by color, we can sort these pixels into 256 levels of brightness from black (value 0) to white (value 255) with 254 gray levels in between. Just as we did manually for the mosaic, imaging software automatically sorted the pixels of the image below into 256 groups (levels) of brightness and stacked them up accordingly. The height of each "stack" or vertical "bar" tells you how many pixels there are for that particular brightness. "0" and "255" are the darkest and brightest values, corresponding to black and white respectively.
On this histogram each "stack" or "bar" is one pixel wide. Unlike the mosaic histograms, the 256 bars are stacked side by side without any space between them, which is why for educational purposes, the vertical bars are shown in alternating shades of gray, allowing you to distinguish the individual bars. There are no blank spaces between bars to avoid confusion with blank spaces caused by missing tones in the image. Normally all bars will be black as indicated in the second histogram.
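Conceptually, building a histogram is nothing more than counting pixels per brightness level. The Python sketch below does exactly that for a grayscale version of an image; the filename is a placeholder:

```python
import numpy as np
from PIL import Image

# Sort the pixels of a grayscale image into 256 brightness levels and
# count how many pixels fall into each level (the "stack" heights).
pixels = np.asarray(Image.open("image.jpg").convert("L"))
histogram = np.bincount(pixels.ravel(), minlength=256)

print("pixels at value 0 (pure black):", histogram[0])
print("pixels at value 255 (pure white):", histogram[255])
```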
Typical Histogram Examples
Correctly exposed image: This is an example of a correctly exposed image with a "good" histogram. The smooth curve downwards ending in 255 shows that the subtle highlight detail in the clouds and waves is preserved. Likewise, the shadow area starts at 0 and builds up gradually.

Underexposed image: The histogram indicates there are a lot of pixels with value 0 or close to 0, which is an indication of "clipped shadows". Some shadow detail is lost forever as explained in the dynamic range topic. Unless there is a lot of pure black in the image, there should not be that many pure black pixels. There are also very few pixels in the highlight area.

Overexposed image: The histogram indicates there are a lot of pixels with value 255 or close to 255, which is an indication of "clipped highlights". Subtle highlight detail in the clouds and waves is lost. There are also very few pixels in the shadow area.

Image with too much contrast: This image has both clipped shadows and highlights. The dynamic range of the scene is larger than the dynamic range of the camera.

Image with too little contrast: This image only contains midtones and lacks contrast, resulting in a hazy image.

Image with modified contrast: When "stretching" the above histogram via a Levels or Curves adjustment, the contrast of the image improves, but since the tones are redistributed over a wider tonal range, some tones are missing, as indicated in this "combed" histogram. Too much combing can lead to posterization as shown in the example below.

Image with posterization: Too much combing can lead to "posterization" as shown in this exaggerated conceptual example.
Keeping an Eye on the Histograms when Taking Pictures
Example of camera histogram review with overexposure warning
Most prosumer cameras and all professional cameras allow you to view the histogram on the camera's LCD so you can adjust the exposure and take the shot again if necessary. Some cameras come with an overexposure warning, whereby the overexposed areas blink, as indicated in this animation. In certain cameras the blinking areas are not necessarily overexposed, but an indication of potential overexposure.
Keeping an Eye on the Histograms when Editing
When editing images, it is important to keep an eye on the histogram to avoid the above-mentioned shadow and highlight clipping and posterization. Adobe Photoshop CS now comes with a live histogram palette, as stated in my Photoshop CS review.
Summary
It is essential to keep an eye on the histogram when taking pictures and when editing them to ensure proper exposure and avoid losing shadow and highlight detail.
Digital Imaging - Interpolation
Interpolation (sometimes called resampling) is an imaging method used to increase (or decrease) the number of pixels in a digital image. Some digital cameras use interpolation to produce an image larger than the sensor captured, or to create digital zoom. Virtually all image editing software supports one or more methods of interpolation. How smoothly images are enlarged without introducing jaggies depends on the sophistication of the algorithm.
The examples below are all 450% increases in size of this 106 x 40 crop from an image.
Nearest Neighbor Interpolation
Nearest neighbor interpolation is the simplest method and basically makes the pixels bigger. The color of a pixel in the new image is the color of the nearest pixel of the original image. If you enlarge 200%, one pixel will be enlarged to a 2 x 2 area of 4 pixels with the same color as the original pixel. Most image viewing and editing software use this type of interpolation to enlarge a digital image for the purpose of closer examination because it does not change the color information of the image and does not introduce any anti-aliasing. For the same reason, it is not suitable to enlarge photographic images because it increases the visibility of jaggies.
Nearest Neighbor Interpolation
Bilinear Interpolation
Bilinear interpolation determines the value of a new pixel based on a weighted average of the 4 pixels in the nearest 2 x 2 neighborhood of the pixel in the original image. The averaging has an anti-aliasing effect and therefore produces relatively smooth edges with hardly any jaggies.
Bilinear Interpolation
Bicubic interpolation
Bicubic interpolation is more sophisticated and produces smoother edges than bilinear interpolation. Notice for instance the smoother eyelashes in the example below. Here, a new pixel is a bicubic function using 16 pixels in the nearest 4 x 4 neighborhood of the pixel in the original image. This is the method most commonly used by image editing software, printer drivers and many digital cameras for resampling images. As mentioned in my review, Adobe Photoshop CS offers two variants of the bicubic interpolation method: bicubic smoother and bicubic sharper.
Bicubic Interpolation
Bicubic Smoother | Bicubic | Bicubic Sharper
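If you want to compare these methods yourself, the Pillow library in Python implements all three resampling filters. A minimal sketch, with a placeholder filename for the example crop:

```python
from PIL import Image

# Enlarge the same crop with the three methods described above.
# "crop.png" is a placeholder for the 106 x 40 example crop.
crop = Image.open("crop.png")
size = (int(crop.width * 4.5), int(crop.height * 4.5))  # the 450% enlargement

crop.resize(size, Image.NEAREST).save("nearest.png")    # pixels simply made bigger
crop.resize(size, Image.BILINEAR).save("bilinear.png")  # weighted 2 x 2 average
crop.resize(size, Image.BICUBIC).save("bicubic.png")    # weighted 4 x 4 neighborhood
```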
Fractal interpolation
Fractal interpolation is mainly useful for extreme enlargements (for large prints) as it retains the shape of things more accurately, with cleaner, sharper edges and fewer halos and less blurring around the edges than bicubic interpolation would produce. An example is Genuine Fractals Pro from The Altamira Group.
Fractal Interpolation
There are of course many other methods of interpolation but they're seldom seen outside of more sophisticated image manipulation packages.
Digital Imaging - Jaggies
Hardly a technical term, "jaggies" refers to the visible "steps" of diagonal lines or edges in a digital image. Also referred to as "aliasing", these steps are simply a consequence of the regular, square layout of pixels.
Increasing Resolution Reduces the Visibility of Jaggies
Jaggies become less visible as the sensor or image resolution increases. The crops below are from pictures of a flower against a blue sky taken with digital cameras with different resolutions*. The low resolution cameras show very visible jaggies. As we increase the camera resolution from A to D, the steps become almost invisible in crop D. But they are still present when the image is enlarged, as shown in crop E.
Crops A through D, taken at increasing camera resolutions; E: red zone in D enlarged.
* Simulated results, only crop D is from a real camera.
Anti-aliasing Reduces the Visibility of Jaggies
Digital camera images undergo natural anti-aliasing because the pixels that measure the edges receive information from both sides of the edge. In this example the pixels that measure the yellow edge of the flower will also measure some blue sky resulting in values that are somewhere between yellow and blue. This makes the edges softer than in theoretical example F which has no anti-aliasing.
E. Red zone in D, 8X enlarged | F. No anti-aliasing
If the sensor has a color filter array, the interpolation of the missing information (demosaicing) uses information of surrounding pixels and will therefore cause additional anti-aliasing.
Sharpening Increases the Visibility of Jaggies
Sharpening increases edge contrast (reduces anti-aliasing) and makes jaggies more visible, as shown in the sharpening topic. For the same reason, the jaggies in this rooftop against a bright sky are clearly visible: the high contrast of the image makes the edges sharper.
Digital Imaging - JPEG
The most commonly used digital image format is JPEG (Joint Photographic Experts Group). Universally compatible with browsers, viewers, and image editing software, it allows photographic images to be compressed by a factor of 10 to 20 compared to the uncompressed original, with very little visible loss in image quality.
The Theory in a Nutshell
In a nutshell, JPEG rearranges the image information into color and detail information, compressing color more than detail because our eyes are more sensitive to detail than to color, making the compression less visible to the naked eye. Secondly, it sorts the detail information into fine and coarse detail and discards the fine detail first because our eyes are more sensitive to coarse detail than to fine detail. This is achieved by combining several mathematical and compression methods which are beyond the scope of this glossary but explained in detail in my e-book.
A Practical Example
JPEG allows you to make a trade-off between image file size and image quality. JPEG compression divides the image into squares of 8 x 8 pixels which are compressed independently. Initially these squares manifest themselves through "hair" artifacts around the edges. Then, as you increase the compression, the squares themselves become visible, as shown in the examples below, which are magnified by a factor of 2.
100% Quality JPEG is very hard to distinguish from the uncompressed original, which would typically take up 6 times more storage space.

80% Quality JPEG still looks very good, especially bearing in mind that this crop is enlarged 2 times and that the file size is typically 10 times smaller than the uncompressed original. Notice some deterioration along the edges of the yellow crayon. Most digital cameras use a quality level higher than 80% as their highest quality JPEG setting.

10% Quality JPEG shows serious image degradation with very visible 8 x 8 JPEG squares. The only benefit of this low quality level is that it illustrates what JPEG is doing in a more subtle way at higher quality levels. It is unlikely you will ever compress this aggressively. The example also shows that compression is most visible around the edges.
Practical Tips
Cameras usually have different JPEG quality settings, such as FINE. The compression topic shows some numerical examples of file sizes.
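The size/quality trade-off is easy to reproduce with the Pillow library in Python; the source filename is a placeholder:

```python
import os
from PIL import Image

# Save the same image at different JPEG quality settings and compare sizes.
image = Image.open("original.tif").convert("RGB")

for quality in (100, 80, 10):
    name = f"quality_{quality}.jpg"
    image.save(name, "JPEG", quality=quality)
    print(name, os.path.getsize(name), "bytes")
```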
Digital Imaging - Moiré
If the subject has more detail than the resolution of the camera*, a wavy moiré pattern can appear as shown in crop A. There is no moiré in crop B of an image of the same scene taken with a camera with a higher resolution. Anti-alias** filters reduce or eliminate moiré but also reduce image sharpness.
A. Example of moiré waves.
B. No moiré in this crop taken with a higher resolution camera.
Maze Artifacts
Sometimes, moiré can cause the camera's internal image processing to generate "maze" artifacts.
Example of maze artifacts
Technical footnotes for advanced users:
* In technical terms this means that the spatial frequency of the subject is higher than the resolution of the camera, which we defined by the Nyquist frequency. This causes lower harmonics to appear (frequency aliasing) in the form of moiré waves.
** They are named anti-alias filters because they reduce the "frequency aliasing" mentioned in the footnote above. Because anti-alias filters tend to soften images, they incidentally have an indirect "image anti-aliasing" effect, but that is not the reason they are named this way.
Digital Imaging - Noise Reduction
For the past 4 years I have spent hundreds of hours researching methods to reduce noise in digital camera images. The key to noise reduction is to reduce or eliminate the noise without deteriorating other aspects of the image. Many freeware and even paid solutions negatively affect image sharpness, introduce wavy patterns in uniform surfaces, and/or make them look "too uniform" (a bit like a watercolor painting).
The crops below, taken from a prosumer image, illustrate the problems of edge sharpness and wavy patterns typical of many noise reduction methods, and compare the results with the methods described in my e-book. The results are shown both for the color image and for the red channel. The areas indicated by the red squares are enlarged 4 times in the row below. On some monitors the noise may not be very visible in the original; in that case, look at the red channel crops instead.
Original | Bad Noise Reduction | Good Noise Reduction (123di)

Original crops (1X) and enlarged red squares (4X) below. Notice the red color noise in the blue sky of the original, more visible in the red channel (*). Bad noise reduction methods remove noise but blur the edge, as shown in the 4X crop. Good noise reduction methods remove noise but preserve edge sharpness.

(*) This type of noise is also more visible in the ISO 800 example in the sensitivity topic.

Red channel - original crops (1X) and enlarged red squares (4X) below. The noise in the blue sky of the original is very visible in the red channel. Bad noise reduction methods replace the noise with a wavy pattern in uniform surfaces, visible in the 1X crop. Good noise reduction methods do not introduce wavy patterns, while at the same time preserving some natural "grain" and image sharpness.
JPEG Compression and Noise Reduction
JPEG compression squares are normally hard to notice in uniform surfaces at high quality levels. Since noise introduces (unwanted) detail, the JPEG squares will become more visible which further deteriorates the image. Working in RAW overcomes this problem. However, as stated in my Photoshop CS review and on my personal website, the appearance of noise can vary depending on which software you used to open the image.
Long Exposure ("stuck pixel") Noise Reduction
Original image | Dark frame | Manually cleaned image
The effect of long exposure stuck pixels can be reduced to a great extent by taking a "dark frame" (with lens cap on) either before or after the main shot and subtracting this from the original shot, as explained in my e-book. Many newer digital cameras have built-in long exposure noise reduction and take a "dark frame" with the shutter closed for the same amount of time as the main image. This dark frame is then used to identify and subtract the "stuck pixels". But even with noise reduction OFF, newer cameras will show fewer stuck pixels than in the above example which was taken with an older generation digital camera.
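The subtraction itself is straightforward. Below is a minimal Python sketch, assuming two 8 bit frames of equal exposure time; the filenames are placeholders:

```python
import numpy as np
from PIL import Image

# "Stuck pixels" appear in both the shot and the dark frame, so subtracting
# the dark frame removes them from the shot.
shot = np.asarray(Image.open("long_exposure.png"), dtype=np.int16)
dark = np.asarray(Image.open("dark_frame.png"), dtype=np.int16)

cleaned = np.clip(shot - dark, 0, 255).astype(np.uint8)
Image.fromarray(cleaned).save("cleaned.png")
```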
Digital Imaging - Noise
The Cause: Sensor Noise
Each pixel in a camera sensor contains one or more light sensitive photodiodes which convert the incoming light (photons) into an electrical signal which is processed into the color value of the pixel in the final image. If the same pixel were exposed several times to the same amount of light, the resulting color values would not be identical but would show small statistical variations, called "read noise". Even without incoming light, the electrical activity of the sensor itself will generate some signal, the equivalent of the background hiss of audio equipment which is switched on without playing any music. This additional signal is "noisy" because it varies per pixel (and over time) and increases with the temperature. Called "dark current noise", it adds to the overall image noise.
The Effect: Image Noise
Noise in digital images is most visible in uniform surfaces (such as blue skies and shadows) as monochromatic grain, similar to film grain (luminance noise) and/or as colored waves (color noise). As mentioned earlier, noise increases with temperature. It also increases with sensitivity, especially the color noise in digital compact cameras (example D below). Noise also increases as pixel size decreases, which is why digital compact cameras generate much noisier images than digital SLRs. Professional grade cameras with higher quality components and more powerful processors that allow for more advanced noise removal algorithms display virtually no noise, especially at lower sensitivities. Noise is typically more visible in the red and blue channels than in the green channel. This is why the unmagnified red channel crops in the examples below are better at illustrating the differences in noise levels.
Blue Sky Crops (each shown in RGB and in the Red Channel):
A: Professional digital SLR, large pixels
B: Prosumer digital SLR, large pixels
C: Prosumer compact, small pixels
D: Prosumer compact, small pixels
E: Crop C after 123di noise reduction
The standard deviation measured in a uniform area of an image (in the above examples measured in the red channel) is a good way to quantify image noise as it is an indication of how much the pixels in that area differ from the average pixel value in that area. The standard deviation in the noisy examples C and D is much larger than A, B, and E. Crop E shows that noise reduction can go a long way.
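Measuring this yourself takes only a few lines of Python; the crop coordinates below are placeholders for a uniform sky area in your own image:

```python
import numpy as np
from PIL import Image

# Quantify noise as the standard deviation of a uniform area, measured in
# the red channel as in the examples above.
image = np.asarray(Image.open("photo.jpg"))
sky = image[50:150, 200:300, 0]   # red channel of a uniform sky patch

print("mean:", sky.mean())
print("standard deviation (noise):", sky.std())
```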
Long Exposure "Stuck Pixels" Noise
Another type of noise, often referred to as "stuck pixels" or "hot pixels" noise, occurs with long exposures (1-2 seconds or more) and appears as a pattern of colored dots (slightly larger than a single pixel). As explained in the noise reduction topic, long exposure noise is much less visible in the latest digital cameras.
Digital Imaging - RAW
Unlike JPEG and TIFF, RAW is not an abbreviation but literally means "raw" as in "unprocessed". A RAW file contains the original image information as it comes off the sensor before in-camera processing so you can do that processing afterwards on your PC with special software.
The RAW Storage and Information Advantages
In the Color Filter Array topic, we explained that each pixel in a conventional sensor only captures one color. This data is typically 10 or 12 bits per pixel, with 12 bits per pixel currently being most common. This data can be stored as a RAW file. Alternatively, the camera's internal image processing engine can interpolate the RAW data to determine the three color channels to output a 24 bit JPEG or TIFF image.
RAW (10 or 12 bit) -> Red Channel (8 bit) + Green Channel (8 bit) + Blue Channel (8 bit) -> JPEG or TIFF (24 bit)
Even though the TIFF file only retains 8 bits/channel of information, it will take up twice the storage space because it has three 8 bit color channels versus one 12 bit RAW channel. JPEG addresses this issue by compression, at the cost of image quality. So RAW offers the best of both worlds as it preserves the original color bit depth and image quality and saves storage space compared to TIFF. Some cameras even offer lossless compressed RAW.
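The storage arithmetic is easy to verify. A back-of-envelope Python sketch for a hypothetical 5 megapixel camera, ignoring file headers and compression:

```python
# One 12 bit value per pixel for RAW versus three 8 bit channels for TIFF.
pixels = 5_000_000

raw_megabytes = pixels * 12 / 8 / 1e6       # 7.5 MB
tiff_megabytes = pixels * 3 * 8 / 8 / 1e6   # 15.0 MB, twice the RAW size

print(raw_megabytes, tiff_megabytes)
```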
The Flexibility of RAW
In addition, many of the camera settings which were applied to the RAW data can be undone when using the RAW processing software. For instance, sharpening, white balance, levels and color adjustments can be undone and recalculated based on the RAW data. Also, because RAW has 12 bits of available data, you are able to extract shadow and highlight detail which would have been lost in the 8 bits/channel JPEG or TIFF format.
Disadvantages of RAW
The only drawback is that RAW formats differ between camera manufacturers, and even between cameras, so dedicated software provided by the manufacturer has to be used. Furthermore, opening and processing RAW files is much slower than for JPEG or TIFF files. To address this issue, some cameras offer the option to shoot in RAW and JPEG at the same time. As cameras become faster and memory cards cheaper, this option no longer has performance or storage issues. It allows you to organize and edit your images faster with regular software using the JPEGs, while retaining the option to process in RAW those critical images or images with problems (e.g. white balance or lost shadow and highlight detail). Another trend is that third party image editing and viewing software packages are becoming RAW compatible with the most popular camera brands and models. An example is Adobe Photoshop CS. However, as stated in my Photoshop CS review, the way Photoshop processes RAW files can differ from the way the camera manufacturer's software does it, and you may have fewer options.
Digital Imaging - Resolution
Sensor Resolution
The number of effective non-interpolated pixels on a sensor is discussed in the topic about pixels.
Image Resolution
The resolution of a digital image is defined as the number of pixels it contains. A 5 megapixel image is typically 2,560 pixels wide and 1,920 pixels high and has a resolution of 4,915,200 pixels, rounded off to 5 million pixels. It is recommended to shoot at a resolution which corresponds with the camera's effective pixel count. As explained in the pixels topic, shooting at higher (interpolated) resolutions (if available as an option) creates only marginal benefits but takes up more card space. Shooting at lower resolutions only makes sense if you are running out of card space and/or image quality is not important.
Resolution Charts at dpreview.com: Horizontal and Vertical LPH
We measure resolution using the widely accepted PIMA/ISO 12233 camera resolution test chart. This chart is excellent, not only for measuring pure horizontal and vertical resolution, but also to test the performance of the sensor with frequencies at various angles. It also offers a good reference point for comparison of resolution between cameras. The chart is available for every camera which comes through our test labs, both in the camera reviews and our extensive camera database.
Resolution test chart from the Nikon Coolpix 8700 review. The areas indicated in red are shown as crops below.
Crop A. The black and white lines can be distinguished from one another until position "16", so the Horizontal LPH is 1,600, as explained below.
Crop B. The black and white lines can be distinguished from one another until position "15", so the Vertical LPH is 1,500.
Horizontal LPH refers to the number of (vertical) lines measured along the horizontal (x) axis or width of the image. Crop A shows a test pattern consisting of 9 black lines with 8 white lines in between. From the crop you can see that below label "16" the 17 lines start to merge and become hard to distinguish from one another. The crop shows that at label "16" the 17 lines cover a horizontal distance of 26 pixels. Since this sample image is 2,448 pixels high, the horizontal number of lines per picture height is therefore 2,448/26*17 or 1,600 LPH. So in general a value of "16" on the resolution chart equates to 1,600 lines per picture height (LPH).
Likewise, the Vertical LPH refers to the number of (horizontal) lines measured along the vertical (y) axis or height of the image. Crop B shows that in this example the vertical LPH is around 1,500 LPH.
Because the resolution is "normalized" to the picture height, the results of cameras with different aspect ratios can be compared easily.
Since we normalize on picture height, the absolute number of (horizontal) lines the camera is able to resolve along the vertical axis (image height) is equal to the vertical LPH. The absolute number of (vertical) lines the camera is able to resolve along the horizontal axis (image width) is equal to the horizontal LPH multiplied by the aspect ratio. In this example, this would work out to be 1,600 x 1.333 = 2,133 since the camera has an aspect ratio of 4:3.
You will immediately notice that 2,133 x 1,500 or 3,200,000 is much lower than the image resolution of 8,000,000 (3,264 x 2,448). This difference is due to limitations of the optics required to create an incredibly sharp image on such a small sensor area and because the sensor data need to be interpolated because of the use of a color filter array.
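The LPH arithmetic from crop A can be condensed into a few lines of Python:

```python
# Horizontal LPH calculation from crop A above.
image_height = 2448   # pixels (picture height)
lines = 17            # 9 black lines with 8 white lines in between
span = 26             # pixels covered by the 17 lines at label "16"

horizontal_lph = round(image_height / span * lines)
print(horizontal_lph)                  # ~1600 lines per picture height

# Absolute number of vertical lines resolvable across the width (4:3 sensor):
print(round(horizontal_lph * 4 / 3))   # ~2133
```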
Resolution Charts at dpreview.com: 5° Diagonal Lines LPH
Our reviews also state the 5° Diagonal Lines LPH, measured in crop C in this example. Since the chart only goes up to 1,000 LPH for this camera, the review states 1,000+ as LPH.
Crop C. The black and white 5° diagonal lines can be distinguished from one another until position "10", the maximum of the chart, so the 5° Diagonal Lines LPH is 1,000+ for this camera.
Resolution Charts at dpreview.com: Absolute and Extinct LPH
The above explanations refer to "Absolute LPH" which is an LPH with clearly defined detail*. Our reviews also state the "Extinct LPH". This is the LPH at which the lines become solid gray. The detail at that level is beyond the camera's definition. Between the Absolute and Extinct LPHs only some detail can be captured.
Crop D. Around label "18", the black and white lines merge into solid gray, so the Vertical Extinct LPH is 1,800 LPH.
* Below the Nyquist frequency. Nyquist frequency is defined as the highest spatial frequency where the CCD can still faithfully record image detail. Beyond the Nyquist frequency, aliasing occurs.
Digital Imaging - Sensitivity (ISO)
Conventional film comes in different sensitivities (ASA ratings) for different purposes. The lower the sensitivity, the finer the grain, but the more light is needed. This is excellent for outdoor photography, but for low-light conditions or action photography (where fast shutterspeeds are needed), more sensitive or "fast" film is used, which is more "grainy". Likewise, digital cameras have an ISO rating indicating their level of sensitivity to light. ISO 100 is the "normal" setting for most cameras, although some go as low as ISO 50. The sensitivity can be increased to 200, 400, 800, or even 3,200 on high-end digital SLRs. When increasing the sensitivity, the output of the sensor is amplified, so less light is needed. Unfortunately, that also amplifies the undesired noise. This creates grainier pictures, just like in conventional photography, but for different reasons. It is similar to turning up the volume of a radio with poor reception: doing so will not only amplify the (desired) music but also the (undesired) hiss and crackle or "noise". Improvements in sensor technology are steadily reducing the noise levels at higher ISOs, especially on higher-end cameras. And unlike conventional film cameras, which require a change of film roll or the use of multiple bodies, digital cameras allow you to instantly and conveniently change the sensitivity depending on the circumstances.
ISO 100 | ISO 800
ISO 100 - Red Channel | ISO 800 - Red Channel
The above unmagnified crops of prosumer digital camera images show high levels of color noise at higher sensitivities. Noise is usually most visible in the red and blue channels.
Digital Imaging - Sharpening
There are two types of sharpness and it is important not to mix them up. Optical sharpness is determined by the quality of the lens and the sensor. Software sharpening creates an "optical illusion" of sharpness by making the edges more contrasty. Software sharpening is of course unable to create detail beyond the camera's resolution; it only helps to bring out captured detail.
Original and magnified crops (2X): soft edges before sharpening | sharper edges after sharpening | over sharpening results in halos
This simple example shows that normal sharpening creates cleaner edges than the original. Over sharpening makes the circle look artificially sharp. This is achieved by creating a white external halo (making the light gray of the background brighter around the circle's edge) and an internal black halo (making the darker gray of the circle darker around the circle's edge). Because the difference between the white and black halos is larger than between the gray of the circle and the background, the edge contrast has been increased, creating the illusion of enhanced sharpness. But the halos are undesirable in photographic images and are extremely hard to undo, unless you shoot in RAW (see below).
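Software sharpening of this kind is typically implemented as an unsharp mask. A minimal sketch using Pillow's UnsharpMask filter; the settings of the second version are deliberately exaggerated and the filenames are placeholders:

```python
from PIL import Image, ImageFilter

# Moderate settings increase edge contrast; exaggerated settings create
# the white and black halos shown above.
image = Image.open("soft.jpg")

normal = image.filter(ImageFilter.UnsharpMask(radius=2, percent=100, threshold=3))
halos = image.filter(ImageFilter.UnsharpMask(radius=10, percent=400, threshold=0))

normal.save("sharpened.jpg")
halos.save("oversharpened.jpg")
```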
In-camera Sharpening
Digital cameras will, as part of their default image processing, apply some level of sharpening to counteract the effects of the interpolation of colors during the color filter array decoding process (which softens detail slightly). Note however that too much in-camera sharpening will create sharpening halos and increase the visibility of jaggies, noise, and other image artifacts. Prosumer digital cameras and digital SLRs allow users to control the amount of sharpening applied to an image, or even disable it completely.
Sharpening with Software
If the camera allows you to shoot in RAW, the in-camera sharpening can be undone via software afterwards on your computer. You can then decide the level of sharpening you want to apply in order to avoid the above sharpening halos, depending on the purpose. For instance, for web or monitor viewing purposes you may want to apply some sharpening to "pull out" fine details of downsampled images. For printing, sharpening should be applied with caution to avoid the image looking fake and over-processed.

If you shoot in JPEG, it is recommended to apply some in-camera sharpening (e.g. "Low" or "Normal") because with regular software it is not easy to achieve the same quality as in-camera sharpening. One of the reasons is that in-camera sharpening is applied before JPEG compression, while sharpening on your computer is done after JPEG compression, thereby making the edges of the JPEG compression squares more visible. If the in-camera sharpening was insufficient, you can still apply some additional sharpening with software. This is much easier than undoing the effects of over sharpening.
Digital Imaging - TIFF
TIFF (Tagged Image File Format) is a universal image format that is compatible with most image editing and viewing programs. It can be compressed in a lossless way, internally with LZW or Zip compression, or externally with programs like WinZip. While JPEG only supports 8 bits/channel single layer RGB images, TIFF also supports 16 bits/channel multi-layer CMYK images in PC and Macintosh format. TIFF is therefore widely used as a final format in the printing and publishing industry.
Many digital cameras offer TIFF output as an uncompressed alternative to compressed JPEG. Due to space and processing constraints only the 8 bits/channel version is used in digital cameras. Higher-end scanners offer a 16 bits/channel TIFF option. If available, RAW is a much better alternative for digital cameras than TIFF.
Digital Imaging - White Balance
Color Temperature
Most light sources are not 100% pure white but have a certain "color temperature", expressed in Kelvin. Typical light sources, from low to high color temperature: candle flame, incandescent, bright sun and clear sky, cloudy sky and shade, blue sky.
White Balance
Normally our eyes compensate for lighting conditions with different color temperatures. A digital camera needs to find a reference point which represents white. It will then calculate all the other colors based on this white point. For instance, if a halogen light illuminates a white wall, the wall will have a yellow cast, while in fact it should be white. So if the camera knows the wall is supposed to be white, it will then compensate all the other colors in the scene accordingly.
Most digital cameras feature automatic white balance, whereby the camera looks at the overall color of the image and calculates the best-fit white balance. However, these systems are often fooled, especially if the scene is dominated by one color, say green, or if there is no natural white present in the scene, as shown in this example.
The auto white balance was unable to find a white reference, resulting in dull and artificial colors.
The auto white balance got it right this time in a very similar scene because it could use the clouds as its white reference.
Most digital cameras also allow you to choose a white balance manually, typically sunlight, cloudy, fluorescent, incandescent etc. Prosumer and SLR digital cameras allow you to define your own white balance reference. Before making the actual shot, you can focus at an area in the scene which should be white or neutral gray, or at a white or gray target card. The camera will then use this reference when making the actual shot.
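As an illustration of the reference approach (a simplified sketch, not how any particular camera implements it), each channel can be scaled so that a patch known to be neutral comes out gray. The patch coordinates and filenames are placeholders:

```python
import numpy as np
from PIL import Image

# Scale each color channel so that the reference patch becomes neutral.
image = np.asarray(Image.open("photo.jpg"), dtype=np.float64)
patch = image[100:120, 100:120]                  # area that should be white/gray

gains = patch.mean() / patch.mean(axis=(0, 1))   # per-channel correction factors
balanced = np.clip(image * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("balanced.jpg")
```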
Exposure - AE Lock
Automatic Exposure lock is the ability to lock exposure settings (aperture and shutterspeed) calculated by the camera over a series of images. This setting is useful when shooting images which will be stitched together into a panorama because stitching is much easier if each image has the same exposure.
Exposure - Aperture Priority
In "Aperture Priority" mode, the camera allows you to select the aperture over the available range and have the camera calculate the best shutter speed to expose the image correctly. This is important if you want to control depth of field or for special effects. Note that because of their high focal length multiplier, a shallow depth of field is often very hard to achieve with digital compact cameras, even at the largest aperture.
Exposure - Aperture
Aperture refers to the size of the opening in the lens that determines the amount of light falling onto the film or sensor. The size of the opening is controlled by an adjustable diaphragm of overlapping blades similar to the pupils of our eyes. Aperture affects exposure and depth of field.
Just like successive shutter speeds, successive apertures halve the amount of incoming light. To achieve this, the diaphragm reduces the aperture diameter by a factor 1.4 (square root of 2) so that the aperture surface is halved each successive step as shown on this diagram.
Because of basic optical principles, the absolute aperture sizes and diameters depend on the focal length. For instance, a 25mm aperture diameter on a 100mm lens has the same effect as a 50mm aperture diameter on a 200mm lens. If you divide the aperture diameter by the focal length, you will arrive at 1/4 in both cases, independent of the focal length. Expressing apertures as fractions of the focal length is more practical for photographers than using absolute aperture sizes. These "relative apertures" are called f-numbers or f-stops. On the lens barrel, the above 1/4 is written as f/4 or F4 or 1:4.
We just learned that the next aperture will have a diameter which is 1.4 times smaller, so the f-stop after f/4 will be f/4 x 1/1.4 or f/5.6. "Stopping down" the lens from f/4 to f/5.6 will halve the amount of incoming light, regardless of the focal length. You now understand the meaning of the f-numbers found on lenses: f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22.
Because f-numbers are fractions of the focal length, "higher" f-numbers represent smaller apertures.
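The familiar f-stop series follows directly from multiplying by the square root of 2 at each step, as this Python sketch shows:

```python
# Each successive f-stop multiplies the f-number by sqrt(2), halving the
# aperture area and therefore the incoming light. Lens markings round the
# exact values (5.66 is marked f/5.6, 11.31 is marked f/11).
for step in range(9):
    print(f"f/{2 ** (step / 2):.2f}")
# f/1.00  f/1.41  f/2.00  f/2.83  f/4.00  f/5.66  f/8.00  f/11.31  f/16.00
```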
Maximum Aperture or Lens Speed
The "maximum aperture" of a lens is also called its "lens speed". Aperture and shutter speed are interrelated via exposure. A lens with a large maximum aperture (e.g. f/2) is called a "fast" lens because the large aperture allows you to use high (fast) shutter speeds and still receive sufficient exposure. Such lenses are ideal to shoot moving subjects in low light conditions.
Zoom lenses specify the maximum aperture at both the wide angle and tele ends, e.g. 28-100mm f/3.5-5.6. A specification like 28-100mm f/2.8 implies that the maximum aperture is f/2.8 throughout the zoom range. Such zoom lenses are more expensive and heavier.
Exposure - Auto Bracketing
Bracketing is a technique used to take a series of images of the same scene at a variety of different exposures that "bracket" the metered exposure (or manual exposure). "Auto" simply means the camera will automatically take these exposures as a burst of 2, 3 or 5 frames with exposure settings of anything between 0.3 and 2.0 EV difference. This can be useful if you're not sure exactly how the shot will turn out or are worried that the scene has a dynamic range which is wider than the camera can capture. On a digital camera this can also be used to combine under- and overexposed images together to produce an image with more dynamic range than the camera can capture, as shown in the example below.
When setting up for bracketing you can usually select the number of frames to be taken (typically 2, 3 or 5), the exposure step, and the order in which to take the shots (e.g. 0,-,+ or -,0,+ etc.). It is important to note that the values are exposure compensation values.
The extreme example below was taken with auto bracketing of 5 frames in 1.0 EV steps in the -,0,+ order. In this case, without bracketing the camera would simply have shot the frame with an aperture of f/4.0 and a shutterspeed of 1/160s. The +2.0 EV image was not used in the combination image.
f/7.1, 1/306s, -2.0 EV | f/5.6, 1/224s, -1.0 EV | f/4.0, 1/160s, 0 EV
f/3.1, 1/71s, +1.0 EV | f/2.8, 1/39s, +2.0 EV | Combination of -2, -1, 0, +1 EV
Some digital cameras also allow white balance auto bracketing.
Exposure - Exposure Compensation
The camera's metering system will sometimes determine the wrong exposure value needed to correctly expose the image. This can be corrected by the "EV Compensation" feature found in prosumer and professional cameras. Typically the EV compensation ranges from -2.0 EV to +2.0 EV with adjustments in steps of 0.5 or 0.3 EV. Some digital SLRs have wider EV compensation ranges, e.g. from -5.0 EV to +5.0 EV.
It is important to understand that increasing the EV compensation by 1 is equivalent to reducing EV by 1 and will therefore double the amount of light. For instance if the camera's automatic mode determined you should be using an aperture of f/8 and a shutterspeed of 1/125s at ISO 100 (13 EV) and the resulting image appears underexposed (e.g. by looking at the histogram), applying a +1.0 EV exposure compensation will cause the camera to use a shutterspeed of 1/60s or an aperture of f/5.6 to allow for more light (12 EV).
Of course, as you become more familiar with your camera's metering system, you can already apply an EV compensation before the shooting. For instance if your camera tends to clip highlights and you are shooting a scene with bright clouds, you may want to set the EV compensation to -0.3 or -0.7 EV.
Exposure - Exposure
The exposure is the amount of light received by the film or sensor and is determined by how wide you open the lens diaphragm (aperture) and by how long you keep the film or sensor exposed (shutterspeed). The effect an exposure has depends on the sensitivity of the film or sensor.
The exposure generated by an aperture, shutterspeed, and sensitivity combination can be represented by its exposure value "EV". Zero EV is defined by the combination of an aperture of f/1 and a shutterspeed of 1s at ISO 100. Each time you halve the amount of light collected by the sensor (e.g. by doubling shutterspeed or by halving the aperture), the EV will increase by 1. For instance, 6 EV represents half the amount of light as 5 EV. High EVs will be used in bright conditions which require a low amount of light to be collected by the film or sensor to avoid overexposure.
To get a feel for these numbers, pick a few combinations of aperture, shutterspeed, and sensitivity and work out the corresponding exposure value, as summarized below.
Aperture | Shutterspeed | Sensitivity | Exposure Value
Doubling the aperture (area) / doubling the exposure time / doubling the effect of the light: -1 EV
Halving the aperture (area) / halving the exposure time / halving the effect of the light: +1 EV
From the above it is clear that a certain exposure value can be achieved by a variety of combinations of aperture, shutterspeed and sensitivity. For instance if you are shooting at ISO 100 with an aperture of f/8 and a shutterspeed of 1/125s, doubling the shutterspeed to 1/250 (halving the exposure time) and reducing the f-number one stop to f/5.6 (doubling the aperture) will lead to the same exposure of 13 EV. Or if you double the shutterspeed to 1/250s (halve the exposure time) while keeping the aperture unchanged at f/8, you could double the effect of the incoming light by doubling the sensitivity to ISO 200, thereby keeping the EV constant at 13 EV. Note that doing so will increase noise levels in digital cameras and film grain in conventional cameras.
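The same arithmetic can be written as a small Python version of an EV calculator, following the definition above (0 EV = f/1 at 1s and ISO 100):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: int = 100) -> float:
    # 0 EV is f/1 at 1s and ISO 100; halving the light raises the EV by 1.
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

print(round(exposure_value(8.0, 1 / 125)))           # 13 EV
print(round(exposure_value(5.6, 1 / 250)))           # 13 EV, same exposure
print(round(exposure_value(8.0, 1 / 250, iso=200)))  # 13 EV again
```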
In automatic mode, the camera determines the optimal combination of aperture, shutterspeed, and sensitivity based on the exposure value determined by the light metering system. A high EV indicates bright conditions, hence the need for high shutterspeeds, high f-numbers, and/or low sensitivities, to avoid overexposure. When you change the aperture in aperture priority mode, the camera will adjust the shutterspeed to keep the EV constant. In shutter priority mode, the camera will adjust the aperture to keep the EV constant.
Exposure - Flash Output Compensation
Flash output compensation is similar to exposure compensation and allows you to preset an adjustment value for the flash output power. Some digital cameras allow you to set this value using the familiar EV range (+/-2 EV); others simply have "high", "normal", and "low" settings. This feature is useful to compensate when the camera's flash metering is not perfect and causes under- or overexposure.
Exposure - Manual
In "Full Manual" mode, you can set both the aperture and the shutterspeed. This gives you ultimate control over the exposure. This can be useful to ensure that the same exposure is used for a sequence of shots or when shooting in special circumstances, e.g. shooting in direct sunlight. Higher-end prosumer digital cameras and all digital SLRs feature full manual exposure. When in full manual exposure mode, the camera will often display a simulated exposure meter which will indicate how far over- or underexposed the image is compared to the exposure value calculated by the camera's metering system. Prosumer digital cameras with live LCD preview will often simulate the effects of the exposure on the live preview.
Exposure - Metering
The metering system in a digital camera measures the amount of light in the scene and calculates the best-fit exposure value based on the metering mode explained below. Automatic exposure is a standard feature in all digital cameras. All you have to do is select the metering mode, point the camera and press the shutter release. Most of the time, this will result in a correct exposure.
The metering method defines which information of the scene is used to calculate the exposure value and how it is determined. Metering modes depend on the camera and the brand, but are mostly variations of the following three types:
Matrix (evaluative) metering: This is probably the most complex metering mode, offering the best exposure in most circumstances. Essentially, the scene is split up into a matrix of metering zones which are evaluated individually. The overall exposure is based on an algorithm specific to that camera, the details of which are closely guarded by the manufacturer. Often they are based on comparing the measurements to the exposure of typical scenes.
Center-weighted average metering: Probably the most common metering method, implemented in nearly every digital camera and the default for those digital cameras which don't offer metering mode selection. This method averages the exposure of the entire frame but gives extra weight to the center and is ideal for portraits.
Spot metering: Spot metering allows you to meter the subject in the center of the frame (or on some cameras at the selected AF point). Only a small area of the whole frame is metered and the exposure of the rest of the frame is ignored. This type of metering is useful for brightly backlit, macro, and moon shots.
Exposure - Remote Capture
Remote capture software allows a computer to remotely fire a digital camera connected to it. Two key benefits are that images can be stored directly onto the computer's hard disk and that images can be immediately previewed on the computer monitor instead of on the small LCD of the camera.
Exposure - Shutter Priority
In "Shutter Priority" mode, you can select the shutterspeed over the available range and have the camera calculate the best aperture to expose the image correctly. Shutter speed priority is often used to create special effects such as blurred water on a river/waterfall or to freeze action in action scenes as illustrated in the shutterspeed topic of this glossary.
Exposure - Shutterspeed
The shutterspeed determines how long the film or sensor is exposed to light. Normally this is achieved by a mechanical shutter between the lens and the film or sensor which opens and closes for a time period determined by the shutterspeed. For instance, a shutter speed of 1/125s will expose the sensor for 1/125th of a second. Electronic shutters act in a similar way by switching on the light sensitive photodiodes of the sensor for as long as is required by the shutterspeed. Some digital cameras feature both electronic and mechanical shutters.
Shutterspeeds are expressed in fractions of seconds, typically as (approximate) multiples of 1/2, so that each higher shutterspeed halves the exposure by halving the exposure time: 1/2s, 1/4s, 1/8s, 1/15s, 1/30s, 1/60s, 1/125s, 1/250s, 1/500s, 1/1000s, 1/2000s, 1/4000s, 1/8000s, etc. Long exposure shutterspeeds are expressed in seconds, e.g. 8s, 4s, 2s, 1s.
The optimal shutterspeed depends on the situation. A useful rule of thumb is to shoot with a shutterspeed above 1/(focal length) to avoid blurring due to camera shake. Below that speed a tripod or image stabilization is needed. If you want to "freeze" action, e.g. in sports photography, you will typically need shutterspeeds of 1/250s or more. But not all action shots need high shutterspeeds. For instance, keeping a moving car in the center of the viewfinder by panning your camera at the same speed of the car allows for lower shutterspeeds and has the benefit of creating a background with a motion blur.
This image was shot at 1/500s, freezing the splashing of the waves.
Motion blur created by tracking the car with the camera and shooting at 1/125s.
Prosumer and professional cameras provide shutter priority exposure mode, allowing you to vary the shutterspeed while keeping exposure constant.
Exposure - Time Lapse
Cameras with a time lapse feature can be programmed to automatically shoot a number of frames over a period of time or with a certain time interval between each frame. For instance, a camera on a tripod in time lapse mode could be set up to shoot frames of a flower opening or a bird building a nest. Some cameras feature a built-in time lapse mode; others allow you to set up time lapse as part of a Remote Capture application. This requires the camera to be connected to a computer.
Optical - Anti-shake
Another approach to image stabilization is to move the CCD itself so that it compensates for the camera movement, as implemented in the Konica Minolta DiMAGE A2. The sensor is mounted on a platform which moves in the opposite direction to the movement of the camera, which is determined by motion detectors. According to Konica Minolta, this "anti-shake" system gives you an additional 3 stops. For example, if you would normally require a shutterspeed of 1/1000s to shoot a particular scene, you should be able to shoot at 1/125s (8 times slower) with anti-shake enabled. This is very useful when shooting moving subjects in low light conditions by panning and/or when using long focal lengths.
Anti-shake system implemented in the Konica Minolta DiMAGE A2
Optical - Aspect Ratio
The width divided by the height of an image or "aspect ratio" is usually expressed as two integers, e.g. width/height = 1.5 is expressed as width:height = 3:2.
3:2 aspect ratio of 35mm film, 6"x4" prints, and most digital SLRs
4:3 aspect ratio of most computer monitors and digital compact cameras
Optical - Barrel Distortion
Barrel distortion is a lens effect which causes images to be spherised or "inflated". Barrel distortion is associated with wide angle lenses and typically occurs at the wide end of a zoom lens. The use of converters often amplifies the effect. It is most visible in images with perfectly straight lines, especially when they are close to the edge of the image frame. See also the opposite effect, pincushion distortion.
Barrel distortion inflates the square | Example of Barrel Distortion
We measure barrel distortion in our reviews as the amount a reference line is bent as a percentage of picture height. For most consumer digital cameras this figure is normally around 1%.
Barrel distortion in images from virtually any digital camera can easily be corrected using a set of free tools for Photoshop; see my article in the Image Techniques section.
Optical - Chromatic Aberrations
"Purple fringing" is the most common type of chromatic aberration in digital cameras. Edges of contrasty subjects suffer most, especially if the light comes from behind them as shown in the example below. Other types exist as well, such as the cyan/green and red fringing shown in the second example. Chromatic aberrations occur more in consumer and prosumer digital compact cameras, but SLRs can suffer from it too.
Longitudinal or Axial Chromatic Aberration | Lateral or Transverse Chromatic Aberration
From an optical point of view, chromatic aberrations are caused by the camera lens not focusing different wavelengths of light onto the exact same focal plane (the focal length for different wavelengths is different) and/or by the lens magnifying different wavelengths differently. These types of chromatic aberration are referred to as "Longitudinal Chromatic Aberration" and "Lateral Chromatic Aberration" respectively and can occur concurrently. The amount of chromatic aberration depends on the dispersion of the glass.

In digital cameras, microlenses can also be a source of chromatic aberration, and the visibility of chromatic aberrations is sometimes amplified by blooming.
Achromatic / Apochromatic Doublets
Special lens systems (achromatic or apochromatic doublets) using two or more pieces of glass with different refractive indices can reduce or eliminate this problem. However, even these lens systems are not completely perfect and can still produce visible chromatic aberrations, especially at wide angles.
Optical - Converters
Prosumer cameras typically allow the zoom range to be extended via converters. Converters are add-on lens adapters which widen or narrow the picture angle. For instance, fitting a 0.8X wide angle converter on a 35mm lens will result in a 28mm picture angle. A 2.0X telephoto converter on a 100mm lens will give the picture angle of a 200mm lens. Converters often cannot be used across the whole range of a zoom lens, and sometimes only at the end of the zoom range, because they would introduce vignetting. Also, the internal flash may no longer work properly because the converter casts a shadow and/or covers the flash sensor.
Optical - Depth of Field
Depth of field (DOF) is a term which refers to the areas of the photograph both in front and behind the main focus point which remain "sharp" (in focus). Depth of field is affected by the aperture, subject distance, focal length, and film or sensor format.
A larger aperture (smaller f-number, e.g. f/2) has a shallow depth of field. Anything behind or in front of the main focus point will appear blurred. A smaller aperture (larger f-number, e.g. f/11) has a greater depth of field. Objects within a certain range behind or in front of the main focus point will also appear sharp.
As you can see, at a large aperture of f/2.4 only the first card is in focus, while at f/8 the middle card is sharp and the distant card is almost sharp.
Coming closer to the subject (reducing subject distance) will reduce depth of field, while moving away from the subject will increase depth of field.

Lenses with shorter focal lengths produce images with larger DOF. For instance, a 28mm lens at f/5.6 produces images with a greater depth of field than a 70mm lens at the same aperture.
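For those who want actual numbers, the sketch below uses the standard thin-lens DOF approximations (not necessarily the exact formulas behind this site's depth of field calculator). All units are millimeters, and the 0.03mm circle of confusion is the common 35mm film assumption:

```python
def depth_of_field(focal: float, f_number: float, subject: float, coc: float = 0.03):
    # Standard thin-lens approximations; all distances in millimeters.
    hyperfocal = focal ** 2 / (f_number * coc) + focal
    near = subject * (hyperfocal - focal) / (hyperfocal + subject - 2 * focal)
    far = (subject * (hyperfocal - focal) / (hyperfocal - subject)
           if subject < hyperfocal else float("inf"))
    return near, far

# A 28mm lens at f/5.6 has far more DOF than a 70mm lens at the same aperture:
for focal in (28, 70):
    near, far = depth_of_field(focal, 5.6, subject=3000)   # subject at 3 m
    print(f"{focal}mm: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
```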
Optical - Focal Length Multiplier
The FLM of a typical 6 megapixel digital SLR is 43.3/28.1 or 1.54X
As a consequence, a sensor smaller than a 35mm film frame captures only the middle portion of the information projected by the lens into the 35mm film frame area, resulting in a "cropped field of view". A 35mm film camera would require a lens with a longer focal length to achieve the same field of view. Hence the term Focal Length Multiplier (FLM). The FLM is equal to the diagonal of 35mm film (43.3mm) divided by the diagonal of the sensor. Let's now discuss two cases.
Case 1 - Digital SLR and 35mm film camera use a lens with the SAME focal length.
Information projected into the 35mm film frame area by a 200mm lens (1).
The sensor with FLM of 1.5 captures only part of the information projected by the 200mm lens into the 35mm film area. This results in a "cropped field of view", equivalent to the field of view of a 200 x 1.5 = 300mm lens on a 35mm film camera. The absolute size of the moon is unchanged as the focal length is still 200mm.
Another way of looking at the same thing is that for the digital SLR to have the same field of view as the 35mm lens, it would need to be fitted with a lens with shorter focal length as explained in Case 2 below.
Case 2 - Digital SLR uses lens with SHORTER focal length than a 35mm film camera.
Information projected onto the 35mm film frame area by a 200mm lens (1).
The smaller sensor with FLM of 1.5 captures the same field of view as the 200mm lens on a 35mm camera by using a lens with a shorter focal length of 133mm (200mm/1.5). The absolute size of the moon is now smaller as a lens with shorter focal length is used (different magnification).
Practically speaking this means that a 19mm lens fitted onto a digital SLR with FLM of 1.5X will give the field of view of a 28mm lens fitted on a 35mm film camera. This disadvantage at the wide angle end becomes a benefit at the tele end. For instance, a 200mm lens on a digital SLR with FLM of 1.5X will have the field of view of a 300mm lens on a 35mm film camera, which would be heavier and more expensive. Also, because the 35mm equivalent fields of view are achieved with shorter focal lengths, the depth of field is larger (2).
Most digital SLRs are able to use conventional 35mm lenses. However, such lenses are designed to create an image circle that covers a 35mm film frame and are therefore larger and heavier than necessary for sensors which are smaller than a 35mm film frame. "Digital" lenses (e.g. Canon Short Back Focus lenses, Nikon DX lenses) are instead designed to project an image circle that only covers the smaller sensor, which allows them to be smaller and lighter.
Digital compact cameras are fitted with lenses with short focal lengths to create 35mm equivalent fields of view on their small sensor surfaces. Typically the sensor diagonal is 4 times smaller than the diagonal of 35mm film. A 7mm lens fitted on such a camera will have the same field of view as a 7mm x 4 = 28mm lens on a 35mm film camera. Just like the digital lenses for digital SLRs, these lenses are designed to generate image circles which cover only the smaller sensor, allowing them to be much smaller and cheaper to manufacture. Because of the very small focal lengths used, the depth of field is much larger (2) than that of digital SLRs or 35mm film cameras with the same field of view.
Technical footnotes (only relevant to advanced users):
(1) For the purpose of this conceptual example we used a 200mm lens; in reality a lens with a much longer focal length would be needed for the moon to be this large on the sensor.
(2) Assuming the aperture and subject distance remain constant, the increase in depth of field (DOF) due to the reduction in focal length is partially offset by the reduction in the maximum permissible Circle of Confusion (CoC). For a smaller format the maximum permissible CoC is smaller, which by itself reduces DOF. However, this reduction is smaller than the increase in DOF caused by the reduction in focal length, so overall DOF will increase, and more so with larger FLMs. You can verify this by using the depth of field calculator on this site.
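The claim in footnote (2) can also be checked numerically. The sketch below compares the same framing on a 35mm film camera (50mm lens) and on a sensor with an FLM of 1.5 (50/1.5 ≈ 33mm lens, with the permissible CoC scaled down by the same factor). The CoC values are common conventions we assume here, not figures from this article.

```python
def dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Total depth of field in mm, via the hyperfocal approximation."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_mm / (h + (subject_mm - focal_mm))
    far = h * subject_mm / (h - (subject_mm - focal_mm))
    return far - near

# Same field of view, aperture f/8, subject at 3m:
print(round(dof_mm(50.0, 8, 3000, 0.030)))            # ~1837mm on 35mm film
print(round(dof_mm(50 / 1.5, 8, 3000, 0.030 / 1.5)))  # ~3114mm on the FLM 1.5 sensor
```

Despite the smaller permissible CoC, the shorter focal length wins out and the smaller format ends up with noticeably more depth of field, as the footnote states.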
Optical - Focal Length
The focal length of a lens is defined as the distance in mm from the optical center of the lens to the focal point, which lies on the sensor or film if the image is "in focus". The camera lens projects part of the scene onto the film or sensor. The field of view (FOV) is determined by the angle of view from the lens out to the scene and can be measured horizontally or vertically. Larger sensors or film frames have wider FOVs and can capture more of the scene. The FOV associated with a focal length is usually quoted relative to 35mm film photography, given the popularity of that format over other formats.
In 35mm photography, lenses with a focal length of 50mm are called "normal" because they render the scene, without apparent reduction or magnification, the way we see it with the naked eye (a picture angle of about 46°).
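The diagonal picture angle follows directly from the focal length and the frame diagonal. Below is a small sketch using the standard rectilinear-lens relation; the exact figure for a 50mm lens comes out slightly above the 46° quoted above, depending on the diagonal assumed.

```python
import math

def picture_angle_deg(focal_mm: float, frame_diagonal_mm: float = 43.3) -> float:
    """Diagonal angle of view of a rectilinear lens, in degrees."""
    return 2 * math.degrees(math.atan(frame_diagonal_mm / (2 * focal_mm)))

print(round(picture_angle_deg(50), 1))   # ~46.8 degrees -- the "normal" lens
print(round(picture_angle_deg(28), 1))   # ~75.4 degrees -- wide angle
print(round(picture_angle_deg(200), 1))  # ~12.4 degrees -- tele
```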
Wide angle lenses (short focal length) capture more because they have a wider picture angle, while tele lenses (long focal length) have a narrower picture angle. Below are some typical focal lengths:
Typical focal lengths and their 35mm format designations
| Focal length | Designation |
| < 20mm | Super Wide Angle |
| 24mm - 35mm | Wide Angle |
| 50mm | Normal Lens |
| 80mm - 300mm | Tele |
| > 300mm | Super Tele |
A change in focal length allows you to come closer to the subject or to move away from it, and therefore has an indirect effect on perspective. Some digital cameras suffer from barrel distortion at the wide angle end and from pincushion distortion at the tele end of their zoom ranges.
Focal lengths of digital cameras with a sensor smaller than the surface of a 35mm film can be converted to their 35mm equivalent using the focal length multiplier.
Optical zoom = maximum focal length / minimum focal length
For instance, the optical zoom of a 28-280mm zoom lens is 280mm/28mm or 10X. This means that the size of a subject projected on the film or sensor surface will be ten times larger at maximum tele (280mm) than at maximum wide angle (28mm). Optical zoom should not be confused with digital zoom.
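As a one-line check of the formula above (the helper name is ours):

```python
def optical_zoom(max_focal_mm: float, min_focal_mm: float) -> float:
    """Optical zoom factor of a zoom lens."""
    return max_focal_mm / min_focal_mm

print(optical_zoom(280, 28))  # 10.0 -> marketed as a 10X zoom
```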
Optical - Image Stabilization
Higher-end binoculars and zoom or telephoto lenses for SLR cameras often come with image stabilization. It is also available in digital video cameras with large zooms, and digital cameras with large zoom lenses likewise offer image stabilization or variants such as anti-shake.
Image stabilization steadies the image projected into the camera by means of a "floating" optical element, typically driven on the basis of gyroscopic motion sensors, which compensates for high frequency vibration (hand shake, for example) at long focal lengths. Canon EF SLR lenses with image stabilization carry an IS suffix after their name; Nikon uses the VR ("Vibration Reduction") suffix on its image-stabilized Nikkor lenses.
Typically, image stabilization lets you take handheld shots almost two stops slower than with image stabilization off. For example, if you would require a shutter speed of 1/500s to shoot a particular scene, you should be able to shoot it at 1/125s (4 times slower) with image stabilization. This is very useful when shooting moving subjects in low light conditions by panning and/or when using long focal lengths.
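The "stops" arithmetic is simply a factor of two per stop. A minimal sketch, with a function name of our own choosing:

```python
def slowest_handheld_speed(base_speed_s: float, stops: float) -> float:
    """Slowest usable shutter speed given a stabilization gain in stops."""
    return base_speed_s * (2 ** stops)

# Two stops of stabilization: 1/500s becomes 1/125s (4 times slower).
print(1 / slowest_handheld_speed(1 / 500, 2))  # 125.0, i.e. 1/125s
```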
Important footnote: the above "optical" image stabilization is different from the "digital" image stabilization found in some digital video cameras. "Digital" image stabilization only makes sense for digital video, as it pixel-shifts the image frames to create a more stable video image.
Optical - Lenses
Most digital compact cameras have non-interchangeable zoom lenses which have been designed to work with a specific sensor size. Some prosumer models allow the zoom range to be extended via converters. Because of the small sensor sizes, the lenses used in digital compact cameras have to be of much higher optical quality than glass which would be "acceptable" on a 35mm camera. This is less of an issue with digital SLRs because their sensors are much larger.
Typical sensor size of 3, 4, and 5 megapixel digital compact cameras
Typical sensor size of 6 megapixel digital SLRs
Optical - Macro
In strict photographic terms, "macro" means the optical ability to produce a 1:1 or higher magnification of an object on the film or sensor. For instance, if you photograph a flower with an actual diagonal of 21.6mm so that it fills the 35mm film frame (43.3mm diagonal), the flower is magnified with a ratio of 43.3 to 21.6 or 2:1, a magnification of 2X. Macro photography typically deals with magnifications between 1:1 and 50:1 (1X to 50X), while close-up photography covers the range below that, from roughly 1:10 up to 1:1.
From the above it is easy to understand that digital cameras with sensors smaller than 35mm film have better macro capabilities. Indeed, a digital compact camera with a focal length multiplier of 4X can capture the above flower of 21.6mm diagonal with a magnification of only 1:2 (close-up) instead of the 2:1 (macro) required with the 35mm camera. In other words, macro results are achieved with (easier) close-up photography.
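The reproduction ratio is just the image size on the film or sensor divided by the real subject size. A minimal sketch reproducing the flower example (helper name is ours):

```python
def magnification(image_size_mm: float, subject_size_mm: float) -> float:
    """Reproduction ratio: image size on film/sensor vs. real subject size."""
    return image_size_mm / subject_size_mm

flower_mm = 21.6
print(round(magnification(43.3, flower_mm), 2))      # 2.0 -> 2:1, true macro on 35mm film
print(round(magnification(43.3 / 4, flower_mm), 2))  # 0.5 -> 1:2, mere close-up on a 4X FLM sensor
```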
On digital cameras there is often a Macro Focus mode which switches the auto focus system to attempt to focus on subjects much closer to the lens.
We measure the macro ability of cameras with non-interchangeable lenses in our reviews as the ability of the lens to achieve the best possible frame coverage. So a camera which can fill the frame with a subject that is 20mm wide has better macro capabilities than one which can only capture a 40mm wide subject.
Generation after generation, Nikon Coolpix digital cameras delivered the 'best in class' macro performance without add-on lenses.
Optical - Perspective
If you photograph a subject with a tele lens and want it to have the same size on the film or sensor when photographing it with a wide angle lens, you would have to move closer to the subject. Because this would cause the perspective to change, lenses with different focal lengths are said to "have" a different perspective. Note however that changing the focal length without changing the subject distance will not change perspective, as shown in the example below.
A.
B.
C.
D. The perspective is clearly different and the distance between the subjects appears larger than in image C.
Images B and C show that changing the focal length while keeping the subject distance constant has, just like cropping, no effect on perspective.
Image D shows that changing the subject distance while holding the focal length constant will change perspective.
Images C and D show that a tele compresses perspective (makes subjects look closer to one another), while a wide angle exaggerates perspective (makes subjects look more separated) compared to the "normal" way we see things with the naked eye. As mentioned earlier, this change in perspective is a direct consequence of the change in subject distance and thus only an indirect consequence of the change in focal length. Indeed, a wide angle lens allows you to capture subjects from nearby, while a tele lens allows you to capture distant subjects.
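This can be made concrete with the thin-lens approximation (image size ≈ focal length x subject size / subject distance, valid for distant subjects): changing focal length scales both subjects' images equally, so their size ratio, and hence the perspective, is unchanged, while moving the camera changes the ratio. The sketch below is an illustration under that assumption; the scenario numbers are our own.

```python
def image_size_mm(focal_mm, subject_size_mm, distance_mm):
    """Approximate image size on the sensor (thin lens, distant subject)."""
    return focal_mm * subject_size_mm / distance_mm

# Two equally tall (1.8m) people, standing 3m and 6m from the camera:
near, far = 3000, 6000
for f in (30, 100):  # change focal length only, camera stays put
    ratio = image_size_mm(f, 1800, near) / image_size_mm(f, 1800, far)
    print(f, round(ratio, 2))  # ratio stays 2.0 -> identical perspective

# Now step 1.5m closer with the 30mm lens: the ratio, and the perspective, change
ratio = image_size_mm(30, 1800, near - 1500) / image_size_mm(30, 1800, far - 1500)
print(round(ratio, 2))  # 3.0 -> the nearer subject looms larger
```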
Optical - Picture Angle
The picture angle is measured diagonally, from corner to corner of the frame.
The example below shows the difference between two focal lengths, 30mm and 100mm.
30mm wide angle
100mm tele has a narrower field of view (indicated in red in the wide angle image)
Optical - Pincushion Distortion
Pincushion distortion is a lens effect which causes images to be pinched at their center. Pincushion distortion is associated with tele lenses and typically occurs at the tele end of a zoom lens. The use of converters often amplifies the effect. It is most visible in images with perfectly straight lines, especially when they are close to the edge of the image frame. See also the opposite effect, barrel distortion.
Pincushion distortion deflates the square
Example of pincushion distortion
We measure pincushion distortion in our reviews as the amount a reference line is bent as a percentage of picture height. For most consumer digital cameras, pincushion distortion is lower than barrel distortion with 0.6% being a typical value.
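The measurement described above reduces to a simple percentage. Below is a minimal sketch; the pixel figures are hypothetical example values, not measurements from our reviews.

```python
def distortion_percent(line_bend_px: float, picture_height_px: float) -> float:
    """Distortion as the bend of a reference line relative to picture height."""
    return 100 * line_bend_px / picture_height_px

# A straight edge bent by 9 pixels in a 1536 pixel tall image:
print(round(distortion_percent(9, 1536), 2))  # ~0.59% -- close to the typical value
```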
Pincushion distorted images from virtually any digital camera can be corrected easily using a set of free tools for Photoshop.
Optical - Subject Distance
Subject distance is the distance between the camera (lens) and the main subject. Varying the subject distance will change perspective. Also, varying the subject distance with the same aperture will produce a different depth of field.
Optical - Vignetting
Zoom lenses, especially the lower end ones, can sometimes suffer from vignetting. The barrel or sides of the lens become visible, resulting in dark corners in the image as shown in this example. The use of converters can also result in vignetting.
Example of vignetting
Color Mode