We store digital images as a series of 0s and 1s, but how we manage that information is another story altogether. An image file is a long string of bytes: it starts with a header (which is where metadata such as EXIF data lives), followed by the pixel data, sometimes with padding bytes between rows of pixels, depending on the format. Each pixel can be represented by 1 byte, or 8 bits, of data, which is called a bit depth of 8.
Grayscale images are usually 8 bits deep, while RGB images are 24 bits deep (3 × 8 bits for the red, green, and blue channels); adding an 8-bit alpha channel on top of that gives a 32-bit image.
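To make the bit-depth idea concrete, here is a minimal sketch of how four 8-bit channels (red, green, blue, alpha) fit into one 32-bit pixel value. The channel ordering (RGBA, high byte first) is an assumption for illustration; real formats vary.

```python
def pack_rgba(r, g, b, a):
    """Combine four 8-bit channel values (0-255 each) into one 32-bit pixel."""
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel):
    """Split a 32-bit pixel back into its four 8-bit channels."""
    return ((pixel >> 24) & 0xFF,  # red
            (pixel >> 16) & 0xFF,  # green
            (pixel >> 8) & 0xFF,   # blue
            pixel & 0xFF)          # alpha

# An 8-bit grayscale pixel, by contrast, is just one byte: 0 (black) to 255 (white).
pixel = pack_rgba(255, 128, 0, 255)  # an opaque orange pixel
print(hex(pixel))                    # 0xff8000ff
print(unpack_rgba(pixel))            # (255, 128, 0, 255)
```

Packing and unpacking like this is exactly why the totals add up the way they do: 8 bits per channel, four channels, 32 bits per pixel.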
Now, how do we capture those images and convert them into digital information? See below:
Linus explains what image sensors are and how they work. To capture an image in digital form, light first passes through a mosaic of red, green, and blue filters and then hits the sensor.
The sensor then converts the light it receives into an electric charge, which is later converted into digital code. There are two types of sensors: CCD and CMOS.
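The last step of that pipeline, turning an analog signal into a digital code, is done by an analog-to-digital converter (ADC). Here is a hedged toy sketch of 8-bit quantization; the 0-1 V input range is an assumption for illustration, not a real sensor's spec.

```python
def adc_8bit(voltage, v_max=1.0):
    """Map an analog voltage in [0, v_max] to an 8-bit digital value in [0, 255]."""
    voltage = max(0.0, min(voltage, v_max))  # clamp to the converter's input range
    return round(voltage / v_max * 255)

print(adc_8bit(0.0))  # 0   -> no light, black
print(adc_8bit(1.0))  # 255 -> full brightness, white
```

With 8 bits per pixel, the converter can only distinguish 256 brightness levels, which is exactly where the "bit depth of 8" from earlier comes from.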
CCD is short for Charge-Coupled Device, and CMOS is short for Complementary Metal-Oxide-Semiconductor. A CCD shifts its charges row by row, from bottom to top, toward a single output, where they are converted into a voltage and then into digital information. It consumes more power, but offers less noise. It is also more expensive.
A CMOS sensor, on the other hand, handles each pixel individually with its own on-pixel circuitry, unlike a CCD, which reads charges out row by row. It uses less power than a CCD and is less expensive.
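The readout difference can be sketched as a toy simulation (not real hardware): in the CCD model, every charge funnels through one shared converter as rows shift out, while in the CMOS model each pixel is converted in place. Both paths produce the same digital values; only the route differs.

```python
def convert(charge):
    """Toy stand-in for an analog-to-digital conversion step."""
    return min(int(charge), 255)

def ccd_readout(sensor):
    """CCD model: rows shift toward ONE shared output converter, one at a time."""
    digital = []
    rows = list(sensor)
    while rows:
        row = rows.pop()                          # bottom row shifts into readout
        digital.insert(0, [convert(c) for c in row])  # converted by the single ADC
    return digital

def cmos_readout(sensor):
    """CMOS model: every pixel has its own circuitry, so it converts in place."""
    return [[convert(c) for c in row] for row in sensor]

charges = [[10, 20, 30],
           [40, 50, 60]]
print(ccd_readout(charges))   # [[10, 20, 30], [40, 50, 60]]
print(cmos_readout(charges))  # [[10, 20, 30], [40, 50, 60]]
```

The shared converter is what makes a CCD's output uniform (less noise) but slower and more power-hungry; per-pixel circuitry is what makes CMOS cheaper and more efficient.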