Q: My Altair camera has 8-bit and higher bit-depth options. Which should I choose, and when?

A: First we need to explain what bit depth means in imaging. Your computer uses ones and zeros (bits) to represent information. An image with a bit depth of one uses a single binary digit per pixel, a 1 or a 0, which means the image can only be black and white, like this:

1-bit image example

But if we use more bits, say two, we have four possible combinations: 00, 01, 10, 11. That means we can display four levels of grey: black, dark grey, light grey and white. Here's the same image in 2-bit mode, with four levels of grey:

2-bit image example

The more levels of grey you add, the more tonal range you get. Here's an 8-bit image with 256 levels of grey:

8-bit image with 256 levels of grey
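The pattern above follows a simple rule: n bits give 2 to the power of n grey levels. A quick sketch (the bit depths shown are just the examples used in this article):

```python
# Number of distinguishable grey levels for a given bit depth: 2 ** bits.
levels = {bits: 2 ** bits for bits in (1, 2, 8, 12)}

for bits, n in levels.items():
    print(f"{bits:>2}-bit: {n} grey levels")
# 1-bit: 2, 2-bit: 4, 8-bit: 256, 12-bit: 4096
```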

So let's assume you're doing video stacking or "lucky" imaging with your Altair camera. You might think you should simply choose the highest bit depth available, say 12-bit output. After all, 2 to the 12th power is 4096, which is a lot of grey levels. The catch is that in 12-bit mode you still have to get those much larger files onto your PC hard drive, which means the frame rate of your camera will likely drop and your video files will be huge. So is it worth the hassle for all that extra tonal range?
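To put rough numbers on the file-size cost, here's a back-of-envelope calculation. The resolution and frame rate are made-up example figures, and it assumes a mono sensor whose 12-bit samples are padded into 16-bit containers on disk (a common arrangement, but check your capture software):

```python
width, height, fps = 1920, 1080, 60       # hypothetical capture settings
pixels_per_second = width * height * fps

rate_8bit = pixels_per_second * 1 / 1e6   # 1 byte per pixel -> MB/s
rate_16bit = pixels_per_second * 2 / 1e6  # 12-bit padded to 2 bytes -> MB/s

print(f"8-bit:  {rate_8bit:.0f} MB/s")    # ~124 MB/s
print(f"16-bit: {rate_16bit:.0f} MB/s")   # ~249 MB/s
```

Doubling the bytes per pixel doubles the sustained write rate your drive has to cope with, which is why the frame rate often drops in 12-bit mode.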

If you are stacking many frames, say more than 50, then going from 8-bit to 12-bit mode will not give an improvement if the pixel noise is typically larger than one 8-bit level.

For example, if the value you'd get without noise is 20.5, then with noise of about one level you might see 50% of the frames read 20 for that pixel and 50% read 21, so the average is about 20.5.

Suppose a neighbouring pixel has a true value of 20.9. All it takes is a little noise for most of the frames to read 21 for that pixel, a few to read 20, and maybe the odd one 22 – yet the average will still be about 20.9. So using 12-bit mode has no benefit, because the extra sub-levels it allows between each 8-bit level are much smaller than the noise.
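You can check this averaging effect with a quick simulation. This is a sketch with illustrative numbers (the noise level and frame count are assumptions, not camera measurements): each frame adds roughly one grey level of Gaussian noise to the true pixel value and then quantises to a whole 8-bit level, yet averaging many frames recovers the fractional value anyway.

```python
import numpy as np

rng = np.random.default_rng(42)
frames = 10_000                      # number of stacked frames (illustrative)

results = {}
for true_value in (20.5, 20.9):
    # Each frame: true value + ~1 level of Gaussian noise, rounded to an
    # integer 8-bit level in the range 0..255.
    readouts = np.clip(np.round(true_value + rng.normal(0, 1.0, frames)), 0, 255)
    results[true_value] = readouts.mean()
    print(f"true {true_value}: stacked average = {results[true_value]:.2f}")
```

The stacked averages come out very close to 20.5 and 20.9 even though every individual frame only ever contains whole 8-bit levels – the noise itself does the "dithering" between levels.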

Planetary imaging: Planets are quite dim, especially at high power, so typically in planetary imaging you run at high gain – think of gain as "ISO". You'll want high gain to keep exposures short and freeze the distortion caused by the air. High gain increases noise, though, so the noise will be much more than one 8-bit level, and 8-bit is quite good enough.

Lunar imaging or white light solar imaging might be a bit different – you can probably run at low gain because the object is quite bright, in which case the noise might be less than one 8-bit level, so 12-bit may help you there.

For long exposures of deep-sky objects, noise builds up quite a lot, so you'd typically run at minimum gain to keep it down. In this low-gain situation, 12-bit should improve the image, and it also increases the dynamic range between the dimmest and brightest features, making the object appear more detailed to the eye.

Video astronomy (live stacking in SharpCap, for example) seems to work best at fairly high gains, so 8-bit may be sufficient, but you can start to see more tonal range in 12-bit mode, and since you aren't going for high frame rates or recording vast amounts of data, it can't hurt.

Fluorescence microscopy usually requires a higher bit depth of 12 bits or more to extract faint signal information from the background noise in post-processing. However, normal microscopy documentation images only require 8-bit depth. If you are going to use HDR processing in Photoshop or other apps to show more contrast and detail, then we recommend selecting the RGB48 setting in AltairCapture or other capture software and saving in 16-bit .TIFF or .PNG format.

Some more advanced theory, if you’re interested:

Your choice of bit depth is influenced by the three sources of noise that all camera sensors have, regardless of type, design, or make:

1) Readout noise: A function of the sensor design.
2) Shot noise: A fundamental variation in the number of photons arriving on a pixel. It’s proportional to the square root of the number of photons received at a pixel during the exposure.
3) Thermal noise: A function of sensor design and temperature. Thermal noise usually increases in proportion to the exposure time. For the short exposures used in solar system imaging (fractions of a second) it can largely be ignored. (For a description of thermal noise and FPN, or Fixed Pattern Noise, see our other blog on the dark frame correction features in AltairCapture.)

If you think of a sensor with a full well capacity of 10,000 electrons, a pixel that is nearly full will have a shot noise of about 100 electrons – 1% of its full range, which is larger than 1/256 of the range, so more than one 8-bit level.

A pixel that only collects 10 electrons will have a shot noise of about 3 electrons.

If readout and thermal noise are also small, then that pixel might have a total noise of less than 1/256 of full range, so 12-bit would be an advantage for those nearly dark pixels.
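The argument above can be put into numbers. This is a sketch using the 10,000-electron full well from the text and ignoring readout and thermal noise:

```python
import math

full_well = 10_000                    # full well capacity in electrons

step_8bit = full_well / 2**8          # electrons per 8-bit level  (~39 e-)
step_12bit = full_well / 2**12        # electrons per 12-bit level (~2.4 e-)

for signal in (9_900, 10):            # a nearly full pixel and a nearly dark one
    shot_noise = math.sqrt(signal)    # shot noise = sqrt(collected electrons)
    print(f"{signal:>5} e-: shot noise {shot_noise:5.1f} e-, "
          f"8-bit step {step_8bit:.1f} e-, 12-bit step {step_12bit:.1f} e-")
```

For the bright pixel the shot noise (about 99 electrons) dwarfs the roughly 39-electron 8-bit step, so the extra 12-bit levels would mostly just digitise noise. For the dark pixel the roughly 3-electron noise falls below the 8-bit step but above the 12-bit step, so 12-bit genuinely records more information there.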

The choice of bit depth is ultimately up to you, but this should help you make a more informed decision.