Photography 101 – Part II
The Digital System
The Digital System – Digital photography introduced a system that never existed in the world of film photography, with many advantages such as a practically unlimited number of photos, immediate feedback on each shot taken, and lately even higher quality than film-based cameras. In this part we will walk through the digital system and understand its parts.
The Sensor – That's actually our "film". Its job is to capture light and convert it from photons (light particles) into electrons using photoelectric cells. Each cell in the matrix outputs electrons in proportion to the amount of light that enters it. A good analogy is buckets filling up with rainwater (the buckets being the sensor cells and the rain being photons): each cell has a maximum "water" capacity, and if it fills up, the pixel turns white and can also spill over into the cells around it, an effect called "blooming". Modern sensors include a "drain" system that helps prevent this phenomenon.
The Size of the Sensor – size does matter. The bigger the sensor at a given resolution, the larger the surface area of each photoelectric cell (pixel), which means each cell can gather more photons with less interference from neighboring cells. In practice this means less "noise" in the image. A smaller sensor also requires a higher-resolution lens, since the pixels are smaller and denser. Another thing affected by sensor size is depth of field: a smaller sensor gives greater DOF, while a larger sensor gives shallower DOF. A smaller sensor also has a "crop factor" because its surface is smaller than a 35mm film frame: it captures only a small portion of the projected image (crops the image) and presents it as a full frame, which in practice multiplies the effective focal length coverage. For example, Nikon DSLRs have a 1.5× crop factor and Canon DSLRs a 1.6× crop factor, so a 50mm lens on a Nikon DSLR will produce similar frame coverage to a 75mm lens on a 35mm film camera.
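The crop-factor arithmetic above can be sketched in a few lines of Python (the function name and sample values here are just for illustration, not from any camera API):

```python
def equivalent_focal_length(focal_length_mm: float, crop_factor: float) -> float:
    """Return the 35mm-equivalent focal length for a cropped sensor."""
    return focal_length_mm * crop_factor

# A 50mm lens on a 1.5x-crop Nikon DSLR frames like a 75mm lens on 35mm film:
print(equivalent_focal_length(50, 1.5))  # 75.0
# The same lens on a 1.6x-crop Canon DSLR frames like an 80mm lens:
print(equivalent_focal_length(50, 1.6))
```

The multiplier changes only the framing, not the actual optics of the lens.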
CMOS – Complementary Metal Oxide Semiconductor – now that's a long name… In short, CMOS sensors were originally used in computer chips, so they are much cheaper to produce. The main difference between CMOS and CCD is that in a CMOS sensor each photodiode (pixel) holds several transistors that read and amplify the signal. Because light also hits these transistors and not only the photoelectric cells, these sensors are traditionally less sensitive to light.
CCD – Charge-Coupled Device – one of the two common types of sensors. In CCD sensors, the photoelectric cells don't have a dedicated transistor; instead they deliver the electron flow to the edge of the sensor, where circuitry surrounding the sensor processes the output and sends it to an A/D converter. CCD sensors produce high image quality, yet are more expensive to manufacture and consume more energy than CMOS. CCDs are more common than CMOS sensors.
ISO – International Organization for Standardization, a.k.a. ASA (American Standards Association). The sensor's sensitivity is actually the amount of amplification applied to the signal coming off the sensor. Every sensor has a base sensitivity at which it provides the 'cleanest' photo, and each increase beyond that sensitivity shortens the exposure but produces noise as a side effect. In low-light conditions it's recommended to use a higher ISO, but in general a lower ISO is recommended to avoid this noise.
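The trade-off between ISO and exposure time can be made concrete: at the same aperture and scene brightness, exposure time scales inversely with ISO. A minimal sketch (function name and sample values are illustrative only):

```python
def shutter_time_at_iso(base_time_s: float, base_iso: int, new_iso: int) -> float:
    """Exposure time scales inversely with ISO (same aperture, same scene)."""
    return base_time_s * (base_iso / new_iso)

# 1/30 s at ISO 100 becomes 1/240 s at ISO 800 (three stops faster),
# at the cost of a noisier image:
print(shutter_time_at_iso(1 / 30, 100, 800))
```

Each doubling of ISO halves the required exposure time, which is exactly one "stop".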
Electronic Shutter – we've already talked about regular shutters; now we'll talk about the electronic one. Some cameras possess only a mechanical shutter, some only an electronic shutter, and some have both. An electronic shutter defines the length of the exposure by stopping the readout from the photoelectric cells after the specified period of time.
Noise – just as there is interference when we listen to the radio, there is interference in digital photography. At low ISO, the sensor's signal is only slightly amplified and the image comes out clean of noise. But as we raise the ISO, more aggressive amplification occurs, and static interference between the photoelectric cells is amplified as well, producing random pixels in different colors. When shooting at high ISO it's recommended to expose brightly in order to reduce the noise's visibility. Another kind of noise appears in long-exposure shots and is caused by sensor heating. You can avoid it by taking several shorter exposures and stacking them together in software.
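The stacking idea above amounts to averaging several aligned short exposures pixel by pixel: the real signal stays put while random noise averages out (roughly as the square root of the frame count). A minimal pure-Python sketch, using nested lists as tiny grayscale "frames" with made-up values:

```python
def average_stack(frames):
    """Average several same-sized, aligned exposures pixel by pixel."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]

# Three noisy 1x3 "frames" of the same scene; the average is much smoother:
frames = [[[100, 102, 98]], [[101, 99, 100]], [[99, 102, 99]]]
print(average_stack(frames))  # [[100.0, 101.0, 99.0]]
```

Real stacking software also aligns the frames before averaging; this sketch assumes they are already registered.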
Noise Reduction – every camera applies some sort of noise reduction. On the computer you can use software such as Noise Ninja or Neat Image to clean this noise. For long-exposure shots, you may apply the camera's noise-reduction method, called "dark frame subtraction", in which the camera takes another frame right after the original one, this time with the shutter closed (resulting in a dark frame), and then subtracts the "hot pixels" found in the dark frame from the original.
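Dark-frame subtraction itself is simple arithmetic: whatever signal shows up with the shutter closed is sensor heat, not light, so it can be subtracted from the real exposure. A minimal sketch on tiny grayscale frames represented as nested lists (all values are made up):

```python
def subtract_dark_frame(light, dark):
    """Subtract a dark frame from the exposed frame, clamping at zero."""
    return [[max(lv - dv, 0) for lv, dv in zip(light_row, dark_row)]
            for light_row, dark_row in zip(light, dark)]

light = [[120, 45, 200], [30, 255, 90]]
dark = [[5, 0, 180], [2, 3, 0]]  # the 180 marks a "hot pixel"
print(subtract_dark_frame(light, dark))  # [[115, 45, 20], [28, 252, 90]]
```

Note this doubles the total shot time, since the dark frame is exposed for as long as the original.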
A/D Converter – we mentioned that the sensor receives photons and emits electrons in response, but the camera's processor doesn't know how to read this information. That's why we have the A/D converter, which reads the voltage output of each pixel from the sensor, assigns it a value between 0 and 255 (in the case of 8-bit color depth), and encodes it as a binary sequence the CPU can process. In 8-bit depth, 0 is black and 255 is totally white. In a 12-bit system each pixel is given a value between 0 and 4095 (2^12 levels), resulting in a greater dynamic range and more detail in the image.
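The quantization step can be sketched in a few lines, assuming (for illustration only) a pixel voltage normalized to a known full-scale value:

```python
def quantize(voltage: float, full_scale: float, bits: int) -> int:
    """Map a pixel voltage in [0, full_scale] to a digital value."""
    levels = 2 ** bits  # 8 bits -> 256 levels (0..255), 12 bits -> 4096 (0..4095)
    code = int(voltage / full_scale * (levels - 1))
    return max(0, min(code, levels - 1))  # clamp out-of-range readings

print(quantize(0.0, 1.0, 8))   # 0   (black)
print(quantize(1.0, 1.0, 8))   # 255 (white)
print(quantize(0.5, 1.0, 12))  # mid-gray lands near 2047 of 4095
```

The extra bits don't extend the range of brightness; they slice the same range into finer steps, which is where the extra shadow and highlight detail comes from.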
Color – as we said, each cell can distinguish 256 levels of brightness, so how does color get into the picture? By using a Color Filter Array in an RGBG (Red-Green-Blue-Green) pattern, a quarter of the pixels are red, a quarter blue, and half of them green. The reason is that the human eye has a different sensitivity to each color. After the readout, the processor performs Bayer interpolation, which cleverly calculates the full color of each pixel from its surrounding pixels. This creates a 24-bit color-depth image (2^8 × 2^8 × 2^8 = 2^24). A different kind of color rendering is the Foveon approach, which uses three different color layers for each pixel on the sensor, thus creating the most faithful color.
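The quarter-red, quarter-blue, half-green layout falls straight out of the mosaic geometry. A minimal sketch of one common Bayer layout (rows alternating R-G and G-B; the function is illustrative, not any camera's actual firmware):

```python
from collections import Counter

def bayer_color(y: int, x: int) -> str:
    """Color of the filter at pixel (y, x) in an RGGB-style Bayer mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# Count the filter colors over a 4x4 tile: half green, a quarter red, a quarter blue.
counts = Counter(bayer_color(y, x) for y in range(4) for x in range(4))
print(counts["R"], counts["G"], counts["B"])  # 4 8 4
```

Each pixel records only one of the three colors; the other two are later filled in by interpolating from the neighbors.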
Buffer – after the image is created, it is stored in a temporary memory area called the "buffer" until processing finishes and the image is written to the memory card. The size and efficiency of the buffer affect the shooting speed and how many photos can be taken before we need to wait for the camera before taking the following shot.
The Processor – one of the most important parts of the digital camera. Manufacturers invest many resources in developing the main image processor and the software that creates the image and determines the final image quality. These processors are very powerful and can process tens of millions of pixels per second.
Memory Cards – at the end of the photography process, the image is saved to a flash memory card. Such cards are very durable, very small, and can hold thousands of photos (on an 8 GB card, for example). The speed of the card determines how fast the buffer empties, shortening the delay between shots.
I hope you enjoyed part II of Photography 101.
Join me in the next episode
– The Lens System
Previous Parts – The Camera, The Digital
For more photography articles – Composition, Depth of Field, Filters, Flash Photography, Infrared Photography, Sunset Photography,
© All rights reserved www.galitz.co.il. Please do not use without explicit and written permission.