An analogy is sometimes drawn with the world of analog photography, and the RAW file is colloquially called a digital negative, in reference to film negatives.
I’m not a big fan of this analogy, but RAW files and negatives do share some similarities, as we will see later.
How does a digital image sensor work?
- The sensor consists of millions of light sensitive cells
- Each cell transforms the photons of light that come from the scene into an electrical signal (transforms photons into electrons)
- That signal is measured, amplified if necessary (ISO) and converted to an integer (in the analog-to-digital converter)
- That integer associated with each cell is what is known as the RAW value.
This RAW value associated with each cell is on a scale that depends on the number of bits with which the analog signal is encoded.
For example, if we have an 8-bit sensor, each cell can take a value between 0 and 255.
0 corresponds to a completely black point in the scene.
255 corresponds to a completely white point in the scene.
In a 12-bit sensor, each cell can take a value between 0 (black) and 4095 (pure white). In a 14-bit sensor the scale goes from 0 to 16383.
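The relationship between bit depth and the RAW scale is simple powers of two; a purely illustrative sketch:

```python
# Illustrative only: the top of the RAW scale is determined by the ADC bit depth.
for bits in (8, 12, 14):
    print(f"{bits}-bit sensor: RAW values from 0 to {2**bits - 1}")
# 8-bit sensor: RAW values from 0 to 255
# 12-bit sensor: RAW values from 0 to 4095
# 14-bit sensor: RAW values from 0 to 16383
```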
A very intuitive way to see the RAW value is by imagining that each of these units is generated from a specific number of photons.
For example, imagine that the signal corresponding to 10 photons is encoded with a value 1 at ISO base. If the cell receives 1,000 photons we would have a RAW value of 100 for that cell. If the cell receives 10,000 photons we would have a value of 1,000 …
When we raise the ISO we change the scale: fewer photons are needed per RAW unit. In the previous example, raising the ISO one stop means that a RAW value of 1 is encoded for every 5 photons. With 1,000 photons we would have a RAW value of 200… and so on.
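This toy model can be sketched in a few lines of Python. The photon counts and the photons-per-unit figure are the illustrative numbers from the example above, not real sensor constants:

```python
def raw_value(photons, photons_per_unit=10):
    """Toy model: at base ISO, every 10 photons encode one RAW unit.
    Raising the ISO one stop halves photons_per_unit (10 -> 5)."""
    return photons // photons_per_unit

print(raw_value(1_000))      # base ISO    -> RAW value 100
print(raw_value(10_000))     # base ISO    -> RAW value 1000
print(raw_value(1_000, 5))   # one stop up -> RAW value 200
```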
Sensors see in black and white
A typical CMOS sensor in any camera is sensitive to light in a range from approximately 350nm to 1000nm (the human eye: roughly 400 to 700nm).
The sensitivity of the sensor is not constant in that range, but it will convert photons from that entire spectrum of light.
Once a photon generates an electron, that electron retains no information about the original photon.
The ‘color’ information is lost.
If we had a monochrome sensor, like the ones used in astronomy, and we wanted a color image, we would have to take at least 3 photos of the same scene: one putting a red optical filter in front of it, another putting a green filter in front of it, and another with a blue filter.
And in editing we could merge those three ‘channels’ of color to generate the final image.
To achieve the three versions at the same time we could have cameras with 3 sensors, each of them with a different color optical filter. But this would be very expensive and impractical due to size and complexity.
In the cameras we are used to, a single sensor is used, but each cell is covered with a small color filter.
These filters form a mosaic that covers the entire sensor:
This mosaic is known as an RGB filter or RGB pattern.
There may be different distributions or patterns. The most commonly used are the Bayer pattern, used by most sensors, and the X-Trans pattern, used by some Fujifilm sensors.
In these patterns more weight is given to green, which is usually the range of wavelengths where the sensor’s efficiency (its sensitivity) is highest; the green information best represents the brightness of the scene.
When we take a photo with a sensor of this type, it is as if we had taken 3 photos in black and white, from the light filtered by the color filters.
For example, if the sensor is 24 Mpx and uses a Bayer mosaic we will have:
- A 6 Mpx black and white image corresponding to the blue channel
- A 6 Mpx black and white image corresponding to the red channel
- A 12 Mpx (2 x 6 Mpx) black and white image corresponding to the greens
The 3 ‘images’ are interleaved across the sensor surface, but their cells do not overlap each other.
Each of these 3 ‘images’ is often called a channel: red channel, green channel, blue channel.
But you have to take the context into account. These channels are not exactly the same as the color channels of a final image (JPEG, TIFF …)
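As a sketch, here is how the three ‘channels’ can be pulled out of a tiny Bayer mosaic. The RGGB layout below is an assumption for illustration; real sensors may use GRBG, BGGR and other variants:

```python
# 4x4 Bayer mosaic (RGGB): even rows are R G R G, odd rows are G B G B.
# The values are arbitrary toy RAW readings.
mosaic = [
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [11, 21, 13, 23],
    [31, 41, 33, 43],
]

red   = [v for y, row in enumerate(mosaic) for x, v in enumerate(row)
         if y % 2 == 0 and x % 2 == 0]
blue  = [v for y, row in enumerate(mosaic) for x, v in enumerate(row)
         if y % 2 == 1 and x % 2 == 1]
green = [v for y, row in enumerate(mosaic) for x, v in enumerate(row)
         if (x + y) % 2 == 1]

# Green has twice as many samples as red or blue, as in the 24/12/6 Mpx example.
print(len(red), len(green), len(blue))  # 4 8 4
```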
What is the RAW file?
The RAW file contains an exact copy of the RAW values that have been generated by the sensor.
Remember: the cell converts photons to electrons, the accumulated electrons generate a measurable voltage (analog signal), and that voltage level is converted to an integer: RAW value.
The word RAW is not an acronym; it is simply the English word ‘raw’: unprocessed.
And that is exactly what the RAW file contains: an exact, unprocessed copy of the values that the sensor has generated at the output of the analog-to-digital converter.
Is the RAW file an image?
For cameras that have an RGB filter sensor: the RAW file contains the information of an image, but it is not an image as such.
Most devices and programs that work with digital images are based on a representation in which each point of the image contains the information of the 3 colors: red, green and blue.
The representation of an image is a matrix (x, y) of vectors (r, g, b)
For example, a point whose coordinates are (12, 20) has color components (20, 45, 250)
The RAW file is a matrix (x, y) of scalars (an integer value, a number)
For example, a sensor cell whose coordinates are (12, 20) has a value of (250)
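The difference can be shown with two toy structures, using the illustrative coordinates and values from the text above:

```python
# RAW data: one scalar per sensor cell.
raw = {(12, 20): 250}

# RGB image: one (r, g, b) triple per pixel.
rgb = {(12, 20): (20, 45, 250)}

print(raw[(12, 20)])   # 250
print(rgb[(12, 20)])   # (20, 45, 250)
```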
To convert the RAW information into an image, a process known as color interpolation (chromatic interpolation) must be carried out, better known by its English name: demosaicing.
In addition, other corrections and/or transformations must be made to obtain the final image in a standard format understood by devices (screens, monitors…) and editing programs.
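A minimal sketch of the idea behind demosaicing: estimate a missing color at a photosite from its neighbors, here by simple bilinear averaging. Real algorithms are far more sophisticated; the values below are toy numbers:

```python
def green_at(mosaic, y, x):
    """Estimate the missing green value at a non-green photosite
    by averaging its four green neighbors (bilinear interpolation)."""
    neighbors = (mosaic[y - 1][x], mosaic[y + 1][x],
                 mosaic[y][x - 1], mosaic[y][x + 1])
    return sum(neighbors) / len(neighbors)

mosaic = [
    [0, 40,  0],
    [44, 0, 48],   # the center cell is a red photosite
    [0, 52,  0],
]
print(green_at(mosaic, 1, 1))  # 46.0
```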
RAW file, RAW image or RAW format?
RAW is not a format as such.
Each manufacturer packages RAW information in their own formats, there is no single standard or format.
The most universal RAW format would be Adobe’s DNG (Digital Negative).
We have already seen that the RAW file of most of our cameras does not contain an image as such, but the information necessary to build it afterwards (in the development process).
In any case, all these terms are usually used to refer to RAW: RAW file, RAW image or RAW format would be equivalent forms.
What additional information do RAW files contain?
As we have mentioned, each manufacturer has its own RAW formats. The same manufacturer can have several different formats or versions of the same format.
But generally speaking, what does the RAW file usually contain ?
- The matrix with the RAW data from the sensor
- Metadata with information about the camera, lens and its configuration parameters
- Sensor related metadata
- An embedded image in JPEG format.
This image is used for previewing the file on the camera, in editing programs, etc.
What is developing a RAW?
The development process consists of converting the data matrix with sensor information into an image with RGB information.
A basic development would consist of:
- Chromatic interpolation , using the most appropriate algorithm for the type of scene or the results we want to achieve
- Initial white balance , either from the information that was configured in the camera at the time of taking the photo or from a neutral white balance associated with that sensor (each sensor has a different response, so its neutral white balance is different).
- Apply a gamma correction (usually associated with a color space) so that we can see the image correctly in terms of brightness on a monitor or screen.
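The white-balance and gamma steps above can be sketched for a single, already demosaiced pixel. The gains, the gamma value and the 12-bit scale below are illustrative assumptions, not values from any real camera:

```python
def develop_pixel(r, g, b, wb=(2.0, 1.0, 1.5), gamma=2.2, max_raw=4095):
    # 1. White balance: per-channel gains, with green as the reference.
    r, g, b = r * wb[0], g * wb[1], b * wb[2]
    # 2. Normalize to 0..1 and apply gamma correction (simplified; real
    #    color spaces like sRGB use a slightly more complex curve).
    corrected = [min(c / max_raw, 1.0) ** (1 / gamma) for c in (r, g, b)]
    # 3. Scale to an 8-bit output value per channel.
    return tuple(round(c * 255) for c in corrected)

print(develop_pixel(500, 1000, 400))
```

After the white-balance gains, the red and green inputs here end up equal, so the output pixel has matching red and green components.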
With that we would already have a ‘standard’ image in which each point of the image has its 3 color components.
From here we could edit the image or save it in some universal format, for example JPEG or TIFF.
That initial image will normally be very flat: low contrast and low color saturation.
Most development programs apply a base curve to give some contrast and make the image look more like what we saw on the camera.
It should be borne in mind that color interpolation and white balance involve decisions and transformations in which, in a certain sense, we lose some of the original information.
If we save the resulting image in TIFF format, for example 16 bits per channel TIFF, the loss of information is very small, practically negligible at a practical level.
If we save the image in JPEG (8 bits per channel) the loss of information is much more important.
JPEG applies lossy compression. It is a final format, for consumption.
Advanced RAW development
In many programs the basic development is transparent to the user. We open the RAW file and, with a click, we are already viewing the developed image.
From that moment on, what we do is process the image to make it more attractive or to better match our memory of the real scene.
In any case, all this processing that we do within a development program (Lightroom, darktable…) is often known as ‘development’.
Most development programs use a non-destructive workflow . The changes we make are saved as a sequence of commands, a kind of recipe to ‘cook’ the RAW data and generate the final image.
Shooting in RAW vs shooting in JPEG
Before going into more detail, let’s see how a camera generates the image in JPEG:
- The whole process is identical until generating the matrix with the RAW values
- The camera’s processor performs the color interpolation (demosaicing) with the algorithm that it has programmed internally
- The processor makes the white adjustment based on the information provided by the user (if a specific white balance is configured) or using some algorithm to estimate the balance (when the automatic balance option is configured)
- The processor applies the contrast and saturation curves from the image profile configured by the user (neutral, vivid … custom)
- Gamma correction is applied and the image is associated with a color space, for example sRGB or Adobe RGB
- Optionally, a noise reduction algorithm can be applied in some part of the process
- Optionally, some geometric correction algorithm can be applied
- The JPEG file is generated with the corresponding compression, in many cameras you can choose the quality of the JPEG (low, high, fine, super fine …)
- The JPEG file is saved on the memory card
- If we have configured the camera to shoot only in JPEG: RAW information is removed
As you can see, based on the information captured by the sensor, the camera makes many decisions over which the user only has partial control.
Information contained in JPEG vs RAW
Let’s talk a bit about bits and information.
Today most sensors are 12 or 14 bit.
For example, on a 12-bit sensor each RAW value sits on a scale with a tonal resolution of 4096 possible values.
When we do the chromatic interpolation of a RAW ‘image’ we will have as a result an RGB image in which each color (each channel) is encoded with the same number of bits as the initial RAW.
That is, at each point of the image we will have the color information encoded with 12 or 14 bits per channel:
In a 12-bit sensor each point in the RGB image would be encoded with 12 x 3 = 36 bits.
In a 14-bit sensor each point would be encoded with 14 x 3 = 42 bits.
This does not mean that all this volume of information corresponds to information from the real scene.
The interpolation process ‘invents’ an important part of the color of each point. We cannot generate information from where there is none. But once the interpolation process is finished, we could say that we have an RGB matrix with high tonal resolution.
The JPEG format, for its part, works with 8 bits per channel (8 x 3 = 24 bits per pixel).
Going from the RGB matrix to the JPEG matrix is equivalent to resampling: we lose tonal resolution at each point.
This is noticeable in images that contain very smooth gradients.
The most typical example is a clear blue sky. The human eye is able to see the sky as a continuous and subtle gradient of shades of blue.
In a digital image there is no continuous gradient, but if the tonal resolution (and spatial resolution) is adequate, the human eye is not able to distinguish these ‘jumps’ between very similar tones.
When the tonal resolution (number of bits) is not high enough, there comes a time when some jumps within the gradient are no longer seen as continuous: an effect known as banding (posterization) occurs.
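The loss of tonal resolution can be made concrete with a small sketch: going from 12 to 8 bits, 16 neighboring RAW levels collapse into a single JPEG level, which is what makes banding possible in smooth gradients.

```python
def to_8bit(value_12bit):
    # Drop the 4 least significant bits: 4096 levels -> 256 levels.
    return value_12bit >> 4

# Sixteen consecutive 12-bit tones all map to the same 8-bit tone.
levels = {to_8bit(v) for v in range(2000, 2016)}
print(levels)  # {125}
```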
Furthermore, to generate the final JPEG file, a series of lossy compression algorithms are applied.
IMPORTANT : JPEG compression algorithms take into account how human eyes work and how the brain processes that information. Unless the image is compressed a lot, we will perceive it as very faithful to the real scene that our own eyes would see.
Although we might see some banding / posterization in direct camera JPEG images, this is not the norm.
TIFF, on the other hand, is a widely used format for saving images while preserving all their information, or as a format for information exchange between image processing applications.
We are simply going to mention it as an example of what it would be like to store the image in an RGB format without compression or loss of information.
To store an image from the RGB matrix of 12 or 14 bits per channel we would need to work with a TIFF of 16 bits per channel (16 x 3 = 48 bits per pixel)
Note that 4 or 2 bits per channel are left over. Those extra bits of the TIFF take up space but don’t carry any image information: they are padding.
The problem with the TIFF format is that it generates huge files.
Doing a quick napkin calculation: 48 bits per pixel is 6 bytes per pixel. For a 20Mpx sensor, a file of at least 120MB (megabytes) would be generated. A JPEG file with the same image and very little compression could be stored at about 10MB.
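The napkin calculation can be written out explicitly. This is pure arithmetic using the article’s 20 Mpx example; real TIFF files add some header and metadata overhead:

```python
def uncompressed_size_mb(megapixels, bits_per_channel=16, channels=3):
    bytes_per_pixel = bits_per_channel * channels / 8   # 48 bits -> 6 bytes
    return megapixels * bytes_per_pixel                 # Mpx * bytes/px = MB

print(uncompressed_size_mb(20))  # 120.0 (MB for a 20 Mpx, 16-bit-per-channel TIFF)
```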
Going back to JPEG vs RAW …
How is the RAW file generated on the camera?
- For the RAW part (the matrix with the RAW values) this information is simply taken and packed in the file.
- In most RAW formats, a JPEG file is also generated, applying the processes that we have seen in the previous section. And that JPEG image is embedded within the RAW file.
- All metadata about the camera, lens and sensor settings are also saved
- The RAW file is sent to the memory card
In some cameras the RAW file (the data matrix) may undergo some initial processing.
For example, some cameras can make corrections to compensate for geometric aberrations of a certain lens.
It is said that a slightly ‘cooked’ RAW is generated.
What are the advantages of using JPEG files directly from the camera?
- It saves us a lot of time
Working with RAW means that you have to do at least a basic development before you have a usable image
- In many situations the camera’s JPEG already has very good final processing that would take us work and time to replicate in the development and editing process.
If we use the image profiles correctly and customize them to taste, we will get final images to our liking.
- The internal noise reduction algorithms of cameras are usually very efficient .
They are algorithms specifically designed (or adapted) to work with a certain sensor.
- Most monitors and devices are 8 bits per channel
Therefore, on your 8-bit monitor you cannot directly see the full tonal resolution your camera sensor can generate. You are always seeing a simplified 8-bit version.
Disadvantages of JPEGs
- We have to get the exposure as accurate as possible at the time of shooting
- We have to set the correct color balance suitable for each scene
- We have to choose a priori the final style of the image: contrast, saturation, special profiles …
- If we are going to edit the photo later, the JPEG will limit us more: it contains much less information than the original RAW
- There are many situations in which we can greatly improve the image from RAW, for example with tonal mapping (raising shadows, recovering highlights…), compared to what we can do from a JPEG.
Even if we nail the exposure, with JPEG we have much less room to make these kinds of selective adjustments in editing.
- Editing on JPEG is always destructive.
Modifying a JPEG image and saving it again means that all lossy compression processes are reapplied to it.
What are the advantages of using RAW?
They mirror the disadvantages of JPEG.
The main advantages from my point of view:
- We can choose the color interpolation algorithm that interests us the most (not all development programs offer this option)
- We can choose the white balance at the time of developing
- We work with all the information that the sensor provided us.
This gives us much more margin if we have to adjust the exposure, do tonal mapping (raise shadows, recover highlights…) or adjust the color.
- We can adjust the contrast and saturation at will from the initial information (not from information that has already been altered when generating the JPEG)
- We can use different noise reduction algorithms if necessary
- The development process is non-destructive
From the same RAW we can obtain many different versions of the final image, developed and processed in different ways. The original RAW is always there, untouched and unmodified.
The most obvious disadvantage is that to get the final image we have to do the development, which takes time and requires some basic knowledge.
Sometimes, especially if we are not very experienced, it is not easy to improve or even match the JPEG version that the camera itself has generated.
For a beginner it can be a bit frustrating to see that, for example, the JPEG from the camera or the one generated by a mobile phone looks visually more attractive than the images they manage to get from RAW.
In other words, working with RAW requires extra effort and knowledge.
But once you get the hang of it a little the results can be spectacular, and with a little experience the workflow can be very fast.
How about RAW + JPEG?
If you are always going to edit your photos: advanced development + image editing… Then always choose RAW .
If you hardly ever edit, or only do very light editing, and you have a lot of experience with your camera settings: direct JPEGs from your camera (SOOC: straight out of camera) may be the most convenient option for you.
Most cameras allow the option to save RAW + JPEG .
I usually use this configuration.
Many of the photos I can use directly, with very little further editing.
And for the most complicated situations, or photos I want to give a deeper, more personalized edit, I use the RAW file to try to extract its full potential.
RAW files also serve as backup copies.
The development programs use a non-destructive workflow. The RAW information is always there, preserved in the file.
Final summary: choose JPEG or RAW
When we shoot in JPEG we are delegating a good part of the decisions to the camera. We lose control over the process.
The JPEG image contains considerably less information about the scene than the RAW file: fewer bits per color channel and also the loss generated by the compression process.
On the other hand, cameras have very advanced algorithms and a ‘science of color’ that has evolved and improved a lot over the last few years.
Direct camera JPEG images are generally of very good quality.
If you plan to process the images: adjust contrast, saturation, color, etc. -> Use RAW
If you are photographing scenes with high dynamic range (a lot of difference in brightness between the brightest areas and the darkest areas), then you are interested in using RAW , since you will have more information to recover highlights and raise shadows (tonal mapping).
If you notice that the final photo generated by the camera (JPEG) does not describe exactly what your eyes saw in the scene (for example duller colors, less contrast between colors, a lack of depth…), then use RAW and try to produce a development that captures what the JPEG is missing.
From RAW you can get the most out of your camera’s sensor.
And the development process will also help you understand how your camera works, its limitations and how to improve your technique when taking photos.
In any case, I recommend configuring the camera to record RAW + JPEG.
The RAW, even if you don’t use it, serves as a backup that contains all the possible information.
If you are not going to process the image or you are going to do a very simple editing you can directly use the direct camera JPEG .
Try the different predefined image profiles of your camera: vivid, neutral, standard, landscape … These profiles are the ones the camera uses to ‘cook’ the RAW and generate the JPEG: contrast, saturation, etc.
Many cameras allow you to customize these profiles or create profiles from scratch with the parameters you want.
In many situations the direct-from-camera JPEG is more than sufficient.