How a digital camera sensor works

You don’t have to be a particle physicist to take a photo. Still, I recommend reading this to get a basic idea of how the sensor of a digital camera works, its limitations, and the parameters a photographer adjusts to get the most out of the camera.

Sensor operation

The sensor of a digital camera is an actual work of engineering. It is made up of millions of photosensitive cells, each of them microscopic:

When we talk about megapixels, we are referring to the millions of pixels (cells) that make up our camera’s sensor.

For example, a camera with a 20 Mpx APS-C sensor (approx. 23 x 15 mm) has 20 million photosensitive cells, each about 4 micrometers across (1 micrometer is one-thousandth of a millimeter).
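
As a quick check of that figure, here is a minimal calculation (in Python) using the approximate sensor dimensions mentioned above:

```python
# Rough estimate of the pixel (cell) size for a 20 Mpx APS-C sensor.
# Sensor dimensions and resolution are the approximate figures from the text.

import math

sensor_w_mm, sensor_h_mm = 23.0, 15.0   # approx. APS-C dimensions
megapixels = 20e6                        # 20 million cells

area_per_cell_mm2 = (sensor_w_mm * sensor_h_mm) / megapixels
pitch_um = math.sqrt(area_per_cell_mm2) * 1000  # mm -> micrometers

print(f"Approximate cell size: {pitch_um:.1f} x {pitch_um:.1f} micrometers")
# -> Approximate cell size: 4.2 x 4.2 micrometers
```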

When we take a photo, each cell of the sensor analyzes the light that reaches it: a tiny part of the image of the scene we want to photograph.

Each cell includes a photodiode that converts light into electricity.

It also includes the electronics necessary for each element to work independently and to be able to read the information of each pixel each time we take a photo.

In most current sensors, each cell also includes a small individual lens to focus light on the sensitive surface. Can you imagine the size of those micro-lenses?

Each photodiode (the photosensitive element of the cell) works like a solar panel: it receives photons that, when interacting with the atoms of the material, generate electrons (they convert light into electricity, as mentioned above).

Currently, most sensors are based on CMOS ( Complementary Metal-Oxide-Semiconductor ) technology.

The circuitry is added to the photosensitive material, made up of insulating zones (oxides) and metal.

The traditional manufacturing method consists of making the circuitry ‘grow’ upwards on the silicon substrate. On top of that go the RGB filter (a Bayer filter, for example; I talk about color below) and the micro-lenses.

The electronic part occupies a minimal surface, but part of the light is reflected or absorbed in these layers and never reaches the photosensitive material.

BSI sensors

The BSI ( Back-Side Illuminated sensor ) sensors are based on a different manufacturing method: all the CMOS circuitry and structure are placed in the lower part of the photosensitive material.

In effect, the sensor is turned around and illuminated from the back, although the name is perhaps unfortunate and often misleading.

In any case, this structure achieves an appreciable performance improvement since no photons are lost in the upper layers. This manufacturing method is more expensive and only applied to small sensors, but as costs get cheaper, larger sensors are becoming available.

Colour

Photosensitive cells can only detect light intensity (number of photons over a specific time), not color.

The sensors include optical filters (RGB filters) that break light down into three components: red, green, and blue. In most sensors, a Bayer filter or mosaic is used so that some cells receive only the light corresponding to the red component, others only the blue part, and others only the green component. An RGB filter variant is Fuji’s X-Trans. In Foveon sensors, the layout is different, but the principle of operation is the same.

As you can see, with Bayer / X-Trans filters the sensor collects only partial color information. Each point’s real color must be reconstructed afterwards with chromatic interpolation algorithms (demosaicing) and white balance.
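
To make the idea concrete, here is a minimal, hypothetical sketch in Python of an RGGB Bayer mosaic and a naive bilinear demosaic (real cameras use far more sophisticated algorithms; the helper names are made up for the example):

```python
# Minimal sketch of a Bayer mosaic and a naive bilinear demosaic,
# assuming an RGGB layout; real cameras use far more elaborate algorithms.

import numpy as np
from scipy.signal import convolve2d

def bayer_mosaic(rgb):
    """Keep only one color component per pixel (RGGB pattern)."""
    h, w, _ = rgb.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True            # R on even rows / even cols
    masks[0::2, 1::2, 1] = True            # G
    masks[1::2, 0::2, 1] = True            # G
    masks[1::2, 1::2, 2] = True            # B on odd rows / odd cols
    return rgb * masks, masks

def demosaic_bilinear(mosaic, masks):
    """Fill the missing samples of each channel by averaging known neighbors."""
    kernel = np.ones((3, 3))
    out = np.empty_like(mosaic, dtype=float)
    for c in range(3):
        acc = convolve2d(mosaic[..., c], kernel, mode="same")
        cnt = convolve2d(masks[..., c].astype(float), kernel, mode="same")
        out[..., c] = acc / cnt            # normalized average of known samples
    return out

# Usage with a small synthetic image (values in 0..1):
rgb = np.random.rand(8, 8, 3)
mosaic, masks = bayer_mosaic(rgb)
reconstructed = demosaic_bilinear(mosaic, masks)
```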

We have commented that each photodiode receives one of the components (red, green, or blue), but from the photodiode’s point of view, it is merely light (photons), so for the working examples that follow, it does not matter if it is white light or filtered. Imagine that we remove the Bayer filter from the top and are left with a black and white image of the scene, where each sensor cell corresponds to a pixel in the picture.

How is the image captured on the sensor?

The sensor cell works in the following way: when we press the shutter button to take a photo, the shutter opens and lets photons pass through to the photodiode.

The photodiode converts them into electrons, which accumulate in a small deposit (capacitor).

When the camera shutter closes (light stops passing through), each sensor cell will have a certain level of electrons, depending on the number of photons that that bit of the image has received.

If a cell has no electrons, it has not received any photons (a dark area of the image). If a cell has its deposit full of electrons, it corresponds to a white area of the image.

The electronics of the camera are in charge of reading all the sensor cells one by one.

And each of these levels is assigned a numerical value. For example, in an 8-bit sensor, a value between 0 (black) and 255 (white) will be assigned. In a 12-bit sensor, there would be 4,096 different levels for each pixel of the sensor.
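
As an illustration of that read-out step, here is a minimal sketch (the 60,000-electron full-well capacity is just a hypothetical figure, the same one used in the ISO example further below):

```python
# Minimal sketch of the read-out step: the electron count stored in each
# cell is mapped to a digital value. The full-well figure is hypothetical.

def quantize(electrons, full_well=60_000, bits=12):
    """Map an electron count (0..full_well) to a digital level (0..2^bits - 1)."""
    levels = 2 ** bits - 1
    value = round(electrons / full_well * levels)
    return max(0, min(levels, value))     # clamp to the valid range

print(quantize(0))          # 0     -> black
print(quantize(30_000))     # 2048  -> mid tone
print(quantize(60_000))     # 4095  -> white
```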

Finally, the camera’s processor uses all this information to generate the image file and saves it on the memory card.

Notice that in the example above, some ‘extra’ electrons have not been generated from photons, but by other effects, for instance thermal agitation.

For now, assume that these uninvited electrons (noise) are very few compared to the electrons generated from the light of the scene. We will take them into account later.

Now let’s imagine that the cell represents the average light of the scene we want to photograph.

The cameras have two parameters that allow controlling the amount of light reaching the sensor:

  1. the aperture of the diaphragm
  2. the exposure time.

The diaphragm is like a window; it can be adjusted to let in a lot of light or little light.

Imagine that there is a lot of light in the scene, there are many photons (examples a and b), and we want to obtain a certain level of brightness in our photo.

We can close the diaphragm more and leave the shutter open for a specific time, or we can open the diaphragm and open the shutter only for an instant.

In the end, the important thing is the number of photons that reach the photodiode.

Now, if the scene has less light and we want the same level of brightness, we will have to leave the shutter open longer so that the same number of photons arrives as before.

If we leave the shutter open long enough (even if there is very little light), there will come a time when all the cells will be full. It would correspond to a burned, totally white photo.

You can play with the aperture and the exposure time in different combinations to give the same result when it comes to the photo’s brightness levels.

When the scene is extremely bright or extremely dim, there may be no suitable combination of aperture and exposure time. Both the diaphragm and the shutter are physical elements with their own limits. In those situations, we would get burned (overexposed) or very dark (underexposed) photos.
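
A small sketch of this reciprocity, using the usual exposure-value formula EV = log2(N²/t), where N is the f-number and t the exposure time in seconds:

```python
# Sketch of exposure reciprocity: different aperture / shutter-speed pairs
# that let the same total amount of light through to the sensor.

import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Three combinations that give (practically) the same exposure:
for f, t in [(2.8, 1/500), (4.0, 1/250), (5.6, 1/125)]:
    print(f"f/{f}  {t:.4f} s  ->  EV {exposure_value(f, t):.1f}")
# All three come out at about EV 12; the tiny differences are only because
# nominal f-numbers (2.8, 5.6) are rounded values.
```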

Now imagine that we want to take a photo in a scene with very little light.

If it is a static scene, we can increase the exposure time (minutes, even hours) so that the scarce available photons enter little by little.

But if we want to photograph a moving scene with very little light, we have a problem:

  • If we leave the shutter open for a long time, the photo will be blurred (because the scene changes throughout that time)
  • If we set a short exposure time, very few photons will enter: we will have a very dark photo (underexposed)

To handle these situations, sensor manufacturers give us the option of forcing the nominal sensitivity of the cells.

This is what is known in cameras as the ISO parameter or ISO value.

As we raise the ISO value in our camera, we increase an internal multiplication (gain) factor.

Although we have drawn a single cell in the example, the multiplier effect applies to all the cells at once.

In other words, the light of the entire scene is ‘amplified’, not by optical means but by electronic processing: the values of each cell are scaled.

Do you remember the uninvited electrons we mentioned above? Yes, you probably already suspected they would have a role in this story, and it is the role of the ‘bad guys.’

Several sources of noise appear in the process of capturing an image: photon (shot) noise, thermal noise, quantization noise from the analog-to-digital converters, etc.

The idea to keep in mind is that electrons will be generated that have nothing to do with the information in the scene.

When many photons reach the sensor, the ratio between information and noise is very high (the amount of noise is negligible compared to the amount of signal or data).

When few photons arrive from the scene, the amount of noise is proportionally larger.

What happens when we force the sensitivity (we set a high ISO)?

We multiply the information that comes to us from the scene, but we also multiply the noise.

If the noise is not negligible compared to the signal (information), then by amplifying everything (signal + noise) we make that noise more visible. As we raise the ISO, the resulting images will appear increasingly grainy, with colored dots that do not correspond to the scene.

Keep in mind that raising the ISO does not generate noise; the noise was already there. What happens is that, by increasing the sensitivity ‘artificially’, the noise becomes more evident relative to the scene’s useful information.

Sizes of the sensors most used in SLR and mirrorless (EVIL) cameras

Professional range cameras usually use Full Frame sensors, similar in size to analog film (35mm).

Entry-level and mid-range cameras typically include APS-C sensors, which have roughly 40% of the surface area of a Full Frame sensor.

Cameras based on the Micro Four Thirds standard use 17.3 x 13 mm sensors, with about 25% of a full-frame sensor’s capture area.

Here you can also see a size comparison between these sensors and the typical sensors of mobile cameras.
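
For reference, a small calculation of the approximate capture areas of common formats (the phone-sensor size is just a typical example; formats vary a lot):

```python
# Approximate capture areas of common sensor formats, relative to Full Frame.
# The phone-sensor dimensions are only a typical example.

formats = {
    "Full Frame":        (36.0, 24.0),
    "APS-C":             (23.6, 15.6),
    "Micro Four Thirds": (17.3, 13.0),
    "1-inch":            (13.2,  8.8),
    "Phone (approx.)":   ( 6.2,  4.6),
}

full_frame_area = 36.0 * 24.0
for name, (w, h) in formats.items():
    area = w * h
    print(f"{name:18s} {area:6.0f} mm^2  ({100 * area / full_frame_area:3.0f}% of FF)")
```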

Noise, ISO, and sensor characteristics

We call image quality an indicator of the degree of fidelity of the image with respect to the real scene we are photographing (the same applies to video).

Parameters that intervene in the quality of an image:

  • Resolution and sharpness: being able to appreciate more details
  • Color: faithfully reproduce the colors of the scene
  • Absence of artifacts: The image does not contain elements that are not part of the real scene ( digital noise, aliasing / Moiré, etc.)

If we talk about digital noise:

  • Noise is part of any image since the light itself, the photons that reach the sensor (or the photographic film), do not follow a continuous and stable pattern.
  • Assuming completely stable and homogeneous light sources, the number of photons reaching each point on the sensor has statistical fluctuations that follow a Poisson distribution.
  • These fluctuations are known as photonic noise (shot noise)

In addition, when the sensor converts photons into electrons (an analog signal), other sources of electronic noise appear (thermal noise, etc.).

Finally, when the electronic signal is converted to digital, some noise is also added due to rounding: continuous values (with decimals) are converted to discrete values (whole numbers).

Signal vs. Noise

We will never have a perfectly clean image; it will always include some noise.

The parameter that objectively tells us how clean an image is is the signal-to-noise ratio (SNR).

In this case, the signal is the scene’s information, and the noise is all those small random variations that are mixed with it: photonic noise, thermal noise, etc.

An image with a high SNR is a clean, high-quality image.

An image with a low SNR is a poor-quality image, with noticeable noise in the form of grain and colored dots that are not part of the actual scene.

Number of photons vs. SNR

Photon noise follows a Poisson distribution.

The variability (the fluctuation around the average number of photons arriving) is not proportional to the number of photons but to the square root of the number of photons.

This means that when we have few photons, the relative variability is very high (low SNR), but when we have many photons, the variability compared to the total is very low (high SNR).
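
A minimal numerical sketch of this behavior, considering shot noise only (the decibel figures use the usual 20·log10 convention for SNR):

```python
# Shot (photon) noise sketch: for a Poisson process the standard deviation is
# sqrt(N), so the signal-to-noise ratio grows as sqrt(N) as more photons arrive.

import math

for photons in (10, 100, 1_000, 10_000, 100_000):
    snr = math.sqrt(photons)                 # SNR considering shot noise only
    snr_db = 20 * math.log10(snr)            # same ratio expressed in decibels
    print(f"{photons:7d} photons -> SNR {snr:6.1f}  ({snr_db:4.1f} dB)")

# 10 photons     -> SNR ~3   (10 dB): very noisy
# 10,000 photons -> SNR 100  (40 dB): noise is practically invisible
```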

On the other hand, thermal noise and any other noise source that appears when processing the electronic part of the signal are independent of the number of photons.

What does this mean?

The more photons we have to generate the image, the better its signal-to-noise ratio, its quality.

Cell size, noise, and SNR

Each sensor cell can be seen initially as an independent element, as a small sensor itself.

At the same exposure times, a larger cell (with more capture surface) will collect more photons than a smaller cell.

Larger cells have better individual noise performance: they generate a cleaner, more accurate pixel, more faithful to the corresponding point/area of the image.

Sensor size, noise, and SNR

The thing is, we can’t have a single-cell sensor: resolution also determines the quality of the image.

The sensor is made up of many, many cells.

Given a sensor size, the size of each cell is determined by the resolution:

  • More resolution: smaller cells
  • Less resolution: larger cells

We have seen that from the point of view of the individual cell, it is vital that it be as large as possible to maximize SNR.

However, if we consider the sensor as a whole, we could evaluate the overall quality of the image by applying the same criteria:

To get the highest signal-to-noise ratio in the whole image, I need to capture as many photons as possible.

As with cells, if we have two sensors with different sizes, for a particular exposure time:

  • The larger sensor will collect more photons in total (better SNR)
  • The smaller sensor will collect fewer photons (worse SNR), as the sketch below illustrates
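
As a rough numerical sketch of this idea (shot noise only, assuming equal sensor technology and the same exposure; the function name is made up for the example):

```python
# Sketch: with the same scene, f-number and exposure, the photon flux per unit
# area is similar, so total photons scale with sensor area and the whole-image
# SNR advantage scales roughly with the square root of the area ratio.

import math

def snr_advantage(area_a_mm2, area_b_mm2):
    """Approximate whole-image SNR ratio between two sensor areas (shot noise only)."""
    return math.sqrt(area_a_mm2 / area_b_mm2)

full_frame = 36.0 * 24.0      # 864 mm^2
four_thirds = 17.3 * 13.0     # ~225 mm^2

ratio = snr_advantage(full_frame, four_thirds)
print(f"Full Frame vs Micro 4/3: x{ratio:.1f} SNR (~{20 * math.log10(ratio):.0f} dB)")
# -> roughly x2 SNR, about 6 dB, i.e. somewhat less than 2 stops of light
```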

What is more critical, the cell size or sensor size?

In general, the size of the sensor is more important.

An image generated by a large sensor will have higher quality (higher SNR) than the same image generated by a small sensor (lower SNR).

Bear in mind that we are talking in statistical terms.

There will be specific scenes in which a large high-resolution sensor (small cells) cannot take advantage of this; for example, images dominated by dark tones and few textures.

A smaller sensor with larger cells (lower resolution) can achieve a more uniform and cleaner image in those scenes.

Another common mistake is to analyze or compare images at the pixel level (pixel peeping), especially when comparing images with different resolutions.

At that level of detail, the image with lower resolution (larger cells) will look more homogeneous locally: less tonal variability between nearby points.

In the higher-resolution image (smaller cells), nearby points will show greater tonal variability. On the other hand, more points give more information about the scene.

To compare correctly, you always have to normalize: resize the images to the same resolution, or print them at the same size on photographic paper.
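
A minimal sketch of why normalizing matters: averaging blocks of pixels when downscaling also averages out part of the per-pixel noise:

```python
# Sketch of why normalizing before comparing matters: averaging blocks of
# pixels (downscaling) also averages out part of the per-pixel noise.

import numpy as np

rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 10, size=(1000, 1000))   # flat gray patch + noise

# Downscale 2x by averaging 2x2 blocks
small = noisy.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"noise (std) at full resolution: {noisy.std():.1f}")   # ~10
print(f"noise (std) after 2x downscale: {small.std():.1f}")   # ~5
```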

Noise vs. ISO

What is known as sensitivity, or simply the ISO value, is a ‘trick’ that gives us an extra degree of flexibility in digital sensors.

Raising the ISO in a camera corresponds to amplifying the electrical signal collected by each cell.

You can also see it as a change of scale of the container that stores the electrons generated by the cell (electrons generated from photons).

Let’s assume that we have the camera at its base ISO, for example ISO 100.

Imagine that each cell’s reservoir can store 60,000 electrons.

We are going to simplify, and we are going to work with 8 bits. The filled cell would correspond to a pure white: value 255. The empty cell is value 0.

We go up to ISO 200 (one stop of light, we multiply by 2).

Now, when we reach a light level that corresponds to half the deposit (30,000 electrons), that point will appear as pure white in the image (255).

Another way of looking at it is to think that we have rescaled the deposit; we have replaced it with a deposit of 30,000 electrons.

We go up to ISO 400 (one more stop).

Now it is as if we have a deposit of 15,000 electrons capacity.

And so on.
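
A minimal sketch of this ‘rescaled deposit’ idea, using the hypothetical 60,000-electron figure from the example (real cameras combine analog and digital gain, so this is only a simplified model):

```python
# Sketch of the 'rescaled deposit' idea above: raising the ISO one stop halves
# the number of electrons needed to reach pure white (digital clipping).

def effective_full_well(base_full_well=60_000, base_iso=100, iso=100):
    """Electrons that correspond to pure white at a given ISO (simplified model)."""
    return base_full_well * base_iso / iso

for iso in (100, 200, 400, 800):
    print(f"ISO {iso:4d}: white at ~{effective_full_well(iso=iso):,.0f} electrons")
# ISO 100: 60,000   ISO 200: 30,000   ISO 400: 15,000   ISO 800: 7,500
```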

The noise is the same in all cases; it does not depend on raising or lowering the ISO value.

But when scaling or amplifying, we scale both the signal and the noise that was already there. We make the noise more visible.

If you want to see it another way: we are using fewer photons to generate the image. Therefore the SNR will be lower and the image will have lower quality.

Sensor resolution, optical resolution, and sharpness

Sharpness is a subjective characteristic of the image. An image seems sharp to us when we can appreciate small details, textures, and perfectly delimited edges of objects.

The sharpness we perceive depends on acutance (an equivalent but quantifiable, more scientific term) and on the image’s resolution.

As we discussed earlier, we have to assess the image’s sharpness in its final support: monitor screen, printed photograph, etc.

But let’s start with the image as it comes out of the camera.

Let’s assume that we have a perfect focus on the scene, that the lighting is sufficient to neglect the effect of noise, with the camera on a perfectly stable tripod, etc.

The sharpness of the image will be determined by the optical quality of the lens and the sensor’s resolution. The quality of the optics can be simplified in terms of equivalent resolution.

Imagine a camera with a 24Mpx sensor in which we mount a lens with an equivalent resolution of 12Mpx.

The final image obtained will have 24Mpx of resolution but a sharpness that corresponds to those 12Mpx of the optical part.
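
A deliberately crude sketch of this simplification (real systems combine lens and sensor response gradually, via MTF curves, rather than with a hard cap):

```python
# Very rough sketch of the idea above: the file can have many pixels, but the
# resolved detail is capped by the weaker link. Real systems combine lens and
# sensor response gradually (MTF curves); this min() is only a first approximation.

def effective_detail_mpx(sensor_mpx, lens_equivalent_mpx):
    """Approximate resolved detail of the sensor + lens combination."""
    return min(sensor_mpx, lens_equivalent_mpx)

print(effective_detail_mpx(24, 12))   # -> 12: a 24 Mpx file with ~12 Mpx of detail
```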

It has happened to many of us that we buy a new camera, take a series of photos, and when we go to edit them and view them at 100% enlargement, we realize that, at that level of detail, the pixels look ‘blurred.’

To take full advantage of the sensor’s resolution, we will need lenses with an equivalent resolution equal to or greater than that of the camera’s sensor.

Be careful; this does not mean that we have to buy high-end equipment to take good photos: with the 12 Mpx that the hypothetical lens of the example provides, we would have enough sharpness to print our images.

Always try to evaluate or analyze the image as a whole, at the final resolution with which it will be used: for example, printed on paper or published on the web, in a social network, etc.

Also, keep in mind that many other factors can influence the blurring of an image:

  • Small camera shake
  • Image slightly blurred due to movement in the scene
  • A slight focus error (the focus system taking as a reference an area of the scene in front of or behind the plane we want to focus on)
  • Very narrow depth of field

The higher the sensor’s resolution, the more visible these small effects become at the pixel level, just as with noise.

Quick summary regarding sensor performance

  • In good light, all sensors will generate high-quality images: Above a certain signal-to-noise ratio, the noise is negligible and we will not see differences between the images (once rescaled to compare, etc.). Other factors are much more relevant: optical quality, internal image processing.
  • The differences between sensors can be seen when there is not as much light in the scene, or when it is a scene with high contrast between light and dark areas (dynamic range). For example, indoors, at dusk, at night, scenes with high dynamic range (tone mapping / lifting shadows in editing), situations where we need very high shutter speeds (and we have to raise the ISO), etc.
  • The technological evolution of sensors is a significant factor.
  • In general, the larger the sensor, the higher the signal-to-noise ratio: Larger sensors generally have better noise performance than smaller sensors (assuming equal technology, exposure, etc.).
  • The cell size influences the performance, but its effect is not so crucial except in very specialized sensors or for particular types of photography.
  • Raising the ISO value does not increase noise: Noise is related to the amount of light that the cell receives, the total amount of light that the sensor gets, and the sensor electronics (thermal noise, etc.). Raising the ISO makes the noise more visible because we scale it together with the scene information. We use fewer photons to generate the image: lower SNR.
  • The maximum ISO value of a camera is an irrelevant characteristic: The critical parameter is how far I can raise the ISO and still obtain images of acceptable quality, which is subjective. SNR > 20 dB is usually taken as a reference for adequate quality and SNR > 30 dB for outstanding quality.
  • Comparing the performance of cameras with different sensors is not easy: Many parameters are involved. You can use sites like DXOMARK or photonstophotos.net that use scientific methods to compare cameras.

Also, keep in mind that these data only give you a general idea. In a photographer’s day-to-day work, the theoretical maximum performance of each camera is rarely pushed to the limit.

Finally, the most important conclusion is that the key lies, above all, in the amount of light that reaches the sensor.

If you can control or maximize the light that the sensor receives, you will be optimizing its performance and the quality of the photos or video: lighting, choosing the right moments and places.

In this sense, lenses also play a crucial role. The lens aperture determines the maximum amount of light (per unit area) that reaches the sensor for a given scene.