What is a RAW file (RAW image)

Sometimes an analogy is made with the world of analog photography, and the RAW file is colloquially called a 'digital negative', in reference to the film negative.

I’m not a big fan of this analogy, but in a way, RAW and negative share some similarities that we will see later.

How does a digital image sensor work?

In summary:

  • The sensor consists of millions of light-sensitive cells.
  • Each cell transforms the photons of light coming from the scene into an electrical signal (it converts photons into electrons).
  • That signal is measured, amplified if necessary (ISO), and converted to an integer in the analog-to-digital converter.
  • That integer associated with each cell is what is known as the RAW value.

This RAW value associated with each cell is on a scale that depends on the number of bits with which the analog signal is encoded.

For example, if we have an 8-bit sensor, each cell can take a value between 0 and 255.
0 corresponds to a completely black point in the scene.

And 255 corresponds to a completely white point in the scene.

In a 12-bit sensor, each cell can take a value between 0 (black) and 4095 (pure white). In a 14-bit sensor, the scale goes from 0 to 16383.

A very intuitive way to see the RAW value is by imagining that each of these units is generated from a specific number of photons.

For example, imagine that, at base ISO, the signal corresponding to 10 photons is encoded as a RAW value of 1. If the cell receives 1,000 photons, we would have a RAW value of 100 for that cell. If the cell receives 10,000 photons, we would have a value of 1,000.

When we raise the ISO, we shrink that scale. Raising the ISO one stop in the previous example would mean that a RAW value of 1 is encoded for every 5 photons: with 1,000 photons we would now get a RAW value of 200… and so on.
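To make this scaling concrete, here is a tiny Python sketch using the hypothetical numbers from the example above (10 photons per RAW unit at base ISO, 5 after raising the ISO one stop); real sensors are, of course, not this simple or this perfectly linear:

```python
def photons_to_raw(photons, photons_per_unit, bit_depth=12):
    """Toy model: convert a photon count into a RAW integer value.

    photons_per_unit is the (hypothetical) number of photons needed to
    produce one RAW unit; raising the ISO lowers this number.
    The result is clipped to the maximum allowed by the bit depth.
    """
    max_value = 2 ** bit_depth - 1        # 255 (8-bit), 4095 (12-bit), 16383 (14-bit)
    raw = photons // photons_per_unit     # scale photons to RAW units
    return min(raw, max_value)            # anything beyond the maximum clips to pure white

# Base ISO: 10 photons per RAW unit (the example's assumption)
print(photons_to_raw(1_000, 10))    # -> 100
print(photons_to_raw(10_000, 10))   # -> 1000

# One ISO stop higher: 5 photons per RAW unit
print(photons_to_raw(1_000, 5))     # -> 200
```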

Sensors see in black and white.

A typical CMOS sensor in any camera is sensitive to light in a range from approximately 350 nm to 1,000 nm (the human eye, roughly 400 to 700 nm).

The sensor’s sensitivity is not constant in that range, but it will convert photons from that entire spectrum of light.

Once the photon generates an electron, that electron does not retain any information about the original photon.

The ‘color’ information is lost.

If we had a monochrome sensor, like the ones used in astronomy, and we wanted a color image, we would have to take at least three photos of the same scene: one with a red optical filter in front of the sensor, another with a green filter, and another with a blue filter.

And in editing, we could merge those three ‘channels’ of color to generate the final image.

To achieve the three versions simultaneously, we could have cameras with three sensors with a different color optical filter. But this would be very expensive and impractical due to size and complexity.

In the cameras we are used to, a single sensor is used, but each cell is covered with a small color filter.

These filters form a mosaic that covers the entire sensor.

This mosaic is known as an RGB filter or RGB pattern.

There may be different distributions or patterns. The most commonly used are the Bayer pattern, used by most sensors, and the X-Trans pattern, used by some Fujifilm sensors.

These patterns give more weight to green, which is usually the part of the spectrum where the sensor's efficiency (its sensitivity) is highest; the green information also better represents the brightness of the scene.

When we take a photo with a sensor of this type, it is as if we had taken three black-and-white pictures, each from the light passed by one type of color filter.

For example, if the sensor is 24Mpx and uses a Bayer mosaic, we will have:

  • A 6 Mpx black-and-white image corresponding to the blue channel
  • A 6 Mpx black-and-white image corresponding to the red channel
  • A 12 Mpx (2 x 6 Mpx) black-and-white image corresponding to the green channel (see the sketch below)

The three 'images' are interleaved across the sensor surface, but their cells do not overlap.
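As a rough illustration of these three interleaved 'images', here is a minimal NumPy sketch that separates the cells of a tiny Bayer mosaic into its red, green, and blue groups (it assumes an RGGB layout; the exact arrangement varies between cameras):

```python
import numpy as np

# A fake 4x4 sensor readout (RAW values), assuming an RGGB Bayer layout:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
raw = np.arange(16).reshape(4, 4)

red   = raw[0::2, 0::2]                            # one red cell per 2x2 block
blue  = raw[1::2, 1::2]                            # one blue cell per 2x2 block
green = np.concatenate((raw[0::2, 1::2].ravel(),   # two green cells
                        raw[1::2, 0::2].ravel()))  # per 2x2 block

print(red.size, green.size, blue.size)             # 4 8 4 -> green has twice as many cells
```

On a 24 Mpx Bayer sensor, those three groups would contain 6, 12, and 6 million values, respectively.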

Each of these 3 ‘images’ is often called a channel: red channel, green channel, blue channel.

But keep the context in mind: these channels are not the same as the color channels of a final image (JPEG, TIFF).


What is the RAW file?

The RAW file contains an exact copy of the RAW values that the sensor has generated.

Remember: the cell converts photons to electrons, the accumulated electrons generate a measurable voltage (analog signal), and that voltage level is converted to an integer: RAW value.

The word RAW is not an acronym; it is simply the English word 'raw', meaning unprocessed.

And that is what the RAW file contains: an exact, unprocessed copy of the values that the sensor has generated at the output of the analog-to-digital converter.

Is the RAW file an image?

For cameras that have an RGB filter sensor: the RAW file contains the information of an image, but it is not an image as such.

Most devices and programs that work with digital images are based on a representation in which each point of the image contains three colors: red, green, and blue.

The representation of an image is a matrix (x, y) of vectors (r, g, b)

For example, a point whose coordinates are (12, 20) has color components (20, 45, 250)

The RAW file is a matrix (x, y) of scalars (an integer value, a number)

For example, a sensor cell whose coordinates are (12, 20) has a value of (250)
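In terms of data structures, the difference is simply one number per cell versus three numbers per pixel. A minimal NumPy sketch with made-up values:

```python
import numpy as np

height, width = 4000, 6000                  # ~24 Mpx, just as an example

# RAW data: a matrix of scalars, one integer per sensor cell
raw = np.zeros((height, width), dtype=np.uint16)
raw[12, 20] = 250
print(raw[12, 20])                          # 250 -> a single RAW value

# Demosaiced image: a matrix of (r, g, b) vectors, one per pixel
rgb = np.zeros((height, width, 3), dtype=np.uint16)
rgb[12, 20] = (20, 45, 250)
print(rgb[12, 20])                          # [ 20  45 250] -> three color components
```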

To convert the RAW information into an image, a process known as color interpolation (chromatic interpolation), or demosaicing, must be carried out.

Other corrections and transformations must be made to obtain the final image in a standard format understood by devices (screens, monitors) and editing programs.

RAW file, RAW image, or RAW format?

RAW is not a format as such.

Each manufacturer packages RAW information in its own formats; there is no single standard or design.

The closest thing to a universal RAW format is Adobe's DNG.

We have already seen that the RAW file from most of our cameras does not contain an image as such, but rather the information needed to build one afterwards (in the development process).

In any case, all these terms are usually used interchangeably: RAW file, RAW image, and RAW format refer to the same thing.

What additional information do RAW files contain?

As we have mentioned, each manufacturer has its own RAW formats, and the same manufacturer can have several different formats or versions of them.

But generally speaking, what does the RAW file usually contain?

  • The matrix with the RAW data from the sensor
  • Metadata with information about the camera, lens, and its configuration parameters
  • Sensor-related metadata
  • An embedded image in JPEG format.
    This image is used to preview the file on the camera, in editing programs, etc.


What is developing a RAW?

The development process consists of converting the data matrix with sensor information into an image with RGB information.

A basic development would consist of:

  • Chromatic interpolation, using the most appropriate algorithm for the type of scene or the results we want to achieve
  • Initial white balance, either from the settings configured in the camera when taking the photo or from a neutral white balance associated with that sensor (each sensor has a different response, so the neutral white balance is specific to it).
  • Apply a gamma correction (usually associated with a color space) to see the image correctly in terms of brightness on a monitor or screen.
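As an illustration of those three steps, here is a heavily simplified development sketch in Python/NumPy. The 'demosaicing' shown (collapsing each 2x2 block into one pixel) and the white-balance gains are placeholders; real converters use far more sophisticated interpolation and per-camera color data:

```python
import numpy as np

def demosaic_naive(raw):
    """Crudest possible 'demosaic' for an RGGB mosaic: each 2x2 block
    becomes one RGB pixel (real algorithms keep the full resolution)."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average the two green cells
    b = raw[1::2, 1::2]
    return np.dstack((r, g, b))

def develop(raw, wb_gains=(2.0, 1.0, 1.5), bit_depth=12, gamma=2.2):
    """Toy development: demosaic -> white balance -> gamma -> 8-bit image."""
    rgb = demosaic_naive(raw.astype(np.float64))
    rgb /= 2 ** bit_depth - 1                # normalize RAW values to 0..1
    rgb *= np.array(wb_gains)                # white balance (made-up channel gains)
    rgb = np.clip(rgb, 0.0, 1.0)
    rgb = rgb ** (1.0 / gamma)               # simple gamma correction for display
    return (rgb * 255).astype(np.uint8)      # 8 bits per channel

raw = np.random.randint(0, 4096, size=(8, 8), dtype=np.uint16)
print(develop(raw).shape)                    # (4, 4, 3)
```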

With that, we would already have a ‘standard’ image in which each point of the image has its three color components.

From here, we could edit the image or save it in some universal format, for example, JPEG or TIFF.

That initial image will generally be very flat: low contrast and low color saturation.

Most development programs apply a base curve to add contrast and make the image look more like what we saw on the camera.

Keep in mind that color interpolation and white balance involve decision-making: they are transformations in which, in a certain way, we lose some of the original information.

If we save the resulting image in TIFF format, for example a 16-bits-per-channel TIFF, the loss of information is very small and negligible at a practical level.

If we save the image in JPEG (8 bits per channel), the information loss is much more critical.
JPEG applies lossy compression. It is a final format for consumption.

Advanced RAW development

In many programs, the necessary development is transparent to the user: we open the RAW file with a click and we are already viewing the developed image.

From that moment on, what we do is process the image to make it more attractive or to better match our memory of the real scene.

In any case, all this processing that we do within a development program (Lightroom, darktable…) is often known as 'development.'

Most development programs use a non-destructive workflow. The changes we make are saved as a sequence of commands, a recipe to ‘cook’ the RAW data, and generate the final image.

Shooting in RAW vs. shooting in JPEG

Before going into more detail, let’s see how a camera generates the image in JPEG:

  • The whole process is identical until generating the matrix with the RAW values.
  • The camera’s processor performs the color interpolation (demosaicing) with the algorithm that it has programmed internally.
  • The processor performs the white balance adjustment based on the information provided by the user (if a specific white balance is configured) or using an algorithm to estimate it (when the automatic white balance option is configured)
  • The processor applies the contrast and saturation curves from the image profile configured by the user (neutral, vivid, custom…)
  • Gamma correction is applied, and the image is assigned a color space, for example sRGB or Adobe RGB.
  • Optionally, a noise reduction algorithm can be applied in some parts of the process.
  • Optionally, some geometric correction algorithms can be applied.
  • The JPEG file is generated with the corresponding compression. You can choose the quality of the JPEG (low, high, fine, superfine)
  • The JPEG file is saved on the memory card.
  • If we have configured the camera to shoot only in JPEG, the RAW information is discarded.
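The last two steps are essentially what the quality setting controls. With the Pillow library, the equivalent choice looks roughly like this (the file names and quality values are only illustrative; how a given camera maps 'fine' or 'basic' to a numeric quality factor is its own business):

```python
import numpy as np
from PIL import Image

# Stand-in for the camera's fully processed 8-bit RGB image
rgb = (np.random.rand(400, 600, 3) * 255).astype(np.uint8)
img = Image.fromarray(rgb)

img.save("photo_fine.jpg", quality=95)    # mild compression, larger file
img.save("photo_basic.jpg", quality=60)   # stronger compression, smaller file
```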

As you can see, based on the information captured by the sensor, the camera makes many decisions over which the user only has partial control.

Information contained in JPEG vs. RAW

Let’s talk a bit about bits and information.

Today most sensors are 12 or 14 bit.

For example, on a 12-bit sensor, each RAW value lies on a scale with about 4,000 possible levels.

When we perform the chromatic interpolation of a RAW 'image', we get as a result an RGB image in which each color (each channel) is encoded with the same number of bits as the initial RAW data.

That is, at each point of the image, we will have the color information encoded with 12 or 14 bits per channel:

In a 12-bit sensor, each point in the RGB image would be encoded with 12 x 3 = 36 bits.

In a 14-bit sensor, each point would be encoded with 14 x 3 = 42 bits.

This does not mean that all this volume of information corresponds to information from the real scene.

The interpolation process 'invents' a significant part of the color of each point; we cannot create information where there is none. But once the interpolation is finished, we can say that we have an RGB matrix with high tonal resolution.


JPEG

The JPEG format, for its part, works with 8 bits per channel (8 x 3 = 24 bits per pixel).

Going from the RGB matrix to the JPEG matrix is equivalent to a re-quantization: we lose tonal resolution at each point.

This is noticeable in images that contain very smooth gradients.

The most typical example is a clear blue sky. The human eye can see the sky as a continuous and subtle gradient of shades of blue.

In a digital image, there is no continuous gradient. Still, if the tonal resolution (and spatial resolution) is adequate, the human eye cannot distinguish these ‘jumps’ between very similar tones.

When the tonal resolution (number of bits) is not high enough, some jumps within the gradient stop looking continuous: an effect known as banding (posterization) appears.
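A quick numeric way to see this: quantize a smooth gradient to fewer and fewer bits and look at how big the jump between neighboring tones becomes (a minimal sketch):

```python
import numpy as np

# A smooth horizontal gradient, like a clear sky, in the 0..1 range
gradient = np.linspace(0.0, 1.0, 4096)

for bits in (12, 8, 4):
    levels = 2 ** bits - 1
    quantized = np.round(gradient * levels) / levels   # reduce the tonal resolution
    largest_step = np.diff(quantized).max()            # biggest jump between neighbors
    print(f"{bits:>2} bits: largest step between tones = {largest_step:.4f}")
```

At 12 bits the steps are imperceptible; at very low bit depths the jumps become large enough to show up as visible bands.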

Furthermore, to generate the final JPEG file, a series of lossy compression algorithms are applied.

IMPORTANT: JPEG compression algorithms consider how human eyes work and how the brain processes that information. Unless the image is compressed a lot, we will perceive it as remarkably faithful to the real scene that our own eyes would see.

Although we might occasionally see some banding/posterization in direct-from-camera JPEG images, this is not the norm.

TIFF

TIFF is a widely used format for saving images while preserving all their information, and for exchanging images between image-processing applications.

We are merely going to comment on it as an example of what it would be like to store the image in an RGB format without compression or loss of information.

To store an image from the RGB matrix of 12 or 14 bits per channel, we would need to work with a TIFF of 16 bits per channel (16 x 3 = 48 bits per pixel)

Note that 4 or 2 bits per channel are left unused. Those extra bits of the TIFF take up space but carry no image information; they are filler.

The problem with the TIFF format is that it generates enormous files.

Doing a quick napkin calculation: 48 bits per pixel is 6 bytes per pixel. For a 20 Mpx sensor, a file of at least 120 MB (megabytes) would be generated. A JPEG file with the same image and minimal compression might take up about 10 MB.
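The napkin calculation, spelled out in code (uncompressed pixel data only, ignoring file headers and any TIFF compression):

```python
def uncompressed_size_mb(megapixels, bits_per_channel, channels=3):
    """Size of the raw pixel data, in megabytes, for an uncompressed RGB image."""
    bytes_per_pixel = bits_per_channel * channels / 8
    return megapixels * 1_000_000 * bytes_per_pixel / 1_000_000

print(uncompressed_size_mb(20, 16))   # 16-bit TIFF: 120.0 MB
print(uncompressed_size_mb(20, 8))    # 8-bit RGB data before JPEG compression: 60.0 MB
```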

Going back to JPEG vs. RAW

How is the RAW file generated on the camera?

  • The matrix with the RAW values is taken and packed into the file as the RAW part.
  • In most RAW formats, a JPEG file is also generated, applying the processes that we have seen in the previous section. And that JPEG image is embedded within the RAW file.
  • All metadata about the camera, lens, and sensor settings are also saved.
  • The RAW file is sent to the memory card.

In some cameras, the RAW data matrix may receive some initial processing.

For example, some cameras can make corrections to compensate for geometric aberrations of a particular lens.

It is said that a slightly ‘cooked’ RAW is generated.

 

What are the advantages of using JPEG files directly from the camera?

  • It saves us a lot of time.
  • Working with RAW means you have to do at least a basic development before you have a usable image
  • In many situations, the JPEG image already has excellent final processing that would take us work and time to replicate in the development and editing process.
  • If we use the image profiles correctly and can customize them at will, we will have final images to our liking.
  • The internal noise reduction algorithms of cameras are usually very efficient.
  • They are algorithms specifically designed (or adapted) to work with a particular sensor.
  • Most monitors and devices are 8 bits per channel.

Therefore, on your 8-bit monitor, you will not be able to directly see that tonal resolution that your camera sensor can generate. You always see a simplified 8-bit version.

Disadvantages of JPEGs

  • We have to make sure, at the time of shooting, that the exposure is as accurate as possible.
  • We have to set the correct color balance for each scene.
  • We have to choose the final style of the image a priori: contrast, saturation, image profiles…
  • If we are going to edit the image later, the JPEG will limit us more: it contains much less information than the original RAW.
  • There are many situations in which we can significantly improve the image from RAW, for example, with tonal mapping (raising shadows, recovering highlights) compared to what we can do from a JPEG.
  • Even if we nail the exposure, with JPEG we have much less room to make these kinds of selective adjustments in editing.
  • Editing on JPEG is always destructive.
  • Modifying a JPEG image and saving it again means that all lossy compression processes are reapplied to it.


What are the advantages of using RAW?

They are related to the disadvantages of JPEGs.

The main advantages from my point of view:

  • We can choose the color interpolation algorithm that interests us the most (not all development programs offer this option)
  • We can choose the white balance at the time of development.
  • We work with all the information that the sensor provided us.
    This gives us much more margin if we have to adjust the exposure, do tonal mapping (raise shadows, recover highlights), or change the color.
  • We can adjust the contrast and saturation at will from the initial information (not from information that has already been altered when generating the JPEG)
  • We can use different noise reduction algorithms if necessary.
  • The development process is non-destructive.
    From the same RAW we can get many different versions of the final developed and processed image. We always keep the original RAW, which is never touched or modified internally.

The most obvious disadvantage is that to get the final image, we have to do the development, which takes time, and we need to have some basic knowledge.

Sometimes, especially if we are not very experienced, it is not easy to improve or even match the JPEG version that the camera itself has generated.

It can be a bit frustrating for a beginner to see that, for example, the JPEG from the camera, or the one generated by a mobile phone, looks visually more attractive than the images they manage to get from RAW.

In other words, working with RAW requires extra effort and knowledge.

But once you get the hang of it a little, the results can be spectacular, and with a bit of experience, the workflow can be speedy.

How about RAW + JPEG?

If you are always going to edit your photos (advanced development + image editing), then always choose RAW.

If you hardly ever edit, or only do very light editing, and have plenty of experience with your camera settings, direct JPEGs from your camera (SOOC, straight out of camera) may be the most convenient option.

Most cameras allow the option to save RAW + JPEG.

I usually use this configuration.

I can use many of the photos directly, with minimal further editing.

And in the most complicated situations, or for photos where I want deeper, more personalized editing, I use the RAW file to get its full potential.

RAW files also serve as backup copies.

The development programs use a non-destructive workflow. The RAW information is always there, preserved in the file.


Final summary: choose JPEG or RAW

When we shoot in JPEG, we are delegating a good part of the decisions to the camera. We lose control over the process.

The JPEG image contains considerably less information about the scene than the RAW file: fewer bits per color channel and also the loss generated by the compression process.

On the other hand, cameras have advanced algorithms and a ‘science of color’ that have evolved and improved a lot over the last few years.

Direct camera JPEG images are generally of excellent quality.
If you plan to process the images: adjust contrast, saturation, color, etc. -> Use RAW.

If you photograph scenes with a high dynamic range (a large difference in brightness between the brightest and darkest areas), RAW is the better choice, since you will have more information to recover highlights and raise shadows (tonal mapping).

If you notice that the final photo generated by the camera (JPEG) does not describe exactly what your eyes saw in the scene: for example, duller colors, less contrast between colors, a lack of a sense of depth…

Then use RAW and try to produce a development that captures what you feel is missing from the JPEG.

From RAW, you can get the most out of your camera’s sensor.

And the development process will also help you understand how your camera works, its limitations, and how to improve your technique when taking photos.

In any case, I recommend configuring the camera to record JPEG + RAW.

The RAW, even if you don’t use it, serves as a backup that contains all the possible information.

If you are not going to process the image, or you are only going to do very light editing, you can use the direct-from-camera JPEG.

Try the different predefined image profiles of your camera: vivid, neutral, standard, landscape … These profiles are the ones the camera uses to ‘cook’ the RAW and generate the JPEG: contrast, saturation, etc.

Many cameras allow you to customize these profiles or create profiles from scratch with the parameters you want.

In many situations, direct camera JPEG is more than sufficient.