Is a 16-bit pipeline worth the effort?

I finally found the motivation to experiment with the RAW12 mode of my SC1803R camera.

Now, image acquisition, stacking and stitching all work with 16-bit TIFF files.

This transition was a significant effort, especially because I wanted to maintain 8-bit support for other cameras as well.

Why switch to 12-bit

A common problem with chip imaging is that some surfaces are highly reflective (e.g., bond pads) while others are not.

To address this, you can lower the exposure to avoid overexposed or “burned” areas. However, this often leaves other areas so dark that local exposure adjustments during post-processing may degrade image quality.

This tradeoff forces you to balance exposure: keeping critical details (like circuitry) visible while accepting that some areas—such as bond pads—may become completely white.

I suspected that using 12-bit depth could resolve this issue.

Initial result

For my first test, I decided to image the Sitronix ST2016B chip with an exposure low enough to capture all details, even on reflective bond pads.

As a result, some areas appeared very dark.

In the following tests I deliberately increased the brightness, extracted the blue channel (the darkest channel) and compared the results.

Blue channel comparison: 8-bit vs. 12-bit

The 12-bit image is noticeably smoother and retains all details, while the 8-bit version is significantly noisier due to lost information.

This confirmed that 12-bit is useful for what I wanted.

Acquiring more than 12-bit

As explained here, I use a vibration detection algorithm that keeps track of previous frames and compares them to check for vibration.

In 8-bit mode, I average the last two stable frames, which halves the noise variance. However, the new 16-bit pipeline allows me to sum frames directly.

This lets me effectively increase the depth of acquired images:

  • With 2 frames, I achieve 13-bit depth.
  • Using 4 frames, I get 14-bit images, a depth comparable to what can be found in modern Sony sensors.

This approach strikes a balance between capture time and image quality and can also be applied to 8-bit cameras to expand the depth to 10-bit.
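The frame-summing idea can be sketched in a few lines. This is a minimal illustration, not the actual acquisition code, assuming stable frames arrive as NumPy arrays of 12-bit values stored in uint16:

```python
import numpy as np

def sum_frames(frames):
    """Sum stable frames into a wider accumulator.

    Summing N frames of b-bit data produces values needing
    b + log2(N) bits, so 4 x 12-bit frames fit in 14 bits.
    """
    acc = np.zeros(frames[0].shape, dtype=np.uint32)
    for f in frames:
        acc += f  # safe: uint16 widens into the uint32 accumulator
    return acc

# Four synthetic 12-bit frames (values 0..4095).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 4096, size=(4, 4), dtype=np.uint16) for _ in range(4)]
summed = sum_frames(frames)
assert summed.max() <= 16383  # 4 * 4095, i.e. within 14 bits
```

Summing (rather than averaging and rounding back to the original depth) is what preserves the extra fractional precision as real bits.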

Exposure fusion

Exposure fusion combines images captured at different exposures to create a more balanced final image.

My setup outputs 14-bit images, but since TIFF files store them in a 16-bit container, the values occupy only the lower quarter of the range and the images appear very dark.

The original 14-bit image


Despite their darkness, no data is lost, as the TIFF format retains all 16-bit details.

A simple trick to simulate multiple exposures is to multiply the original image by powers of two:

convert fused.tif -evaluate Multiply 4 fused_4.tif

Original image multiplied by 8

Original image multiplied by 16
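The same multiplication can be expressed directly on the pixel data. The sketch below is a hypothetical NumPy equivalent of `convert -evaluate Multiply N` (the real pipeline uses ImageMagick), saturating at the 16-bit ceiling:

```python
import numpy as np

def simulate_exposure(img16, factor):
    """Multiply a 16-bit image by a power of two, clipping at 65535."""
    out = img16.astype(np.uint32) * factor
    return np.minimum(out, 65535).astype(np.uint16)

# A dark 14-bit image (max value 16383) stored in a 16-bit container.
img = np.array([[100, 4000], [12000, 16383]], dtype=np.uint16)
exposures = {f: simulate_exposure(img, f) for f in (4, 8, 16, 32)}
assert exposures[4].max() == 65532   # 16383 * 4 still fits
assert exposures[32][0, 0] == 3200   # dark pixel brightened
assert exposures[32][1, 1] == 65535  # bright pixel saturates
```

Each doubling corresponds to one extra stop of simulated exposure; the higher factors recover shadow detail at the cost of clipping the highlights, which is exactly what enfuse then arbitrates between.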


After generating these adjusted exposures, I use enfuse to merge the ones I want to keep into a single image:

enfuse -l -1 fused_4.tif fused_8.tif fused_16.tif fused_32.tif -o out.tif

The final image preserves detail across all regions:

The final image after exposure fusion


Tradeoffs

Switching from 8-bit to 16-bit images doubles their size, which can be problematic for very high-resolution panoramas that already weigh several gigabytes.

Additionally, using the camera’s 12-bit output halves the frame rate, as it’s sending uncompressed 16-bit data instead of 8-bit.

For these reasons, I’ll use the 12-bit mode only with the 5x and 10x objectives.

Tips for verification

To confirm that an image is encoded as 16-bit, use tiffinfo:

tiffinfo tile.tif
TIFF Directory at offset 0x13cbc68 (20757608)
  Image Width: 1860 Image Length: 1860
  Bits/Sample: 16
  Sample Format: unsigned integer
  Compression Scheme: None
  Photometric Interpretation: RGB color
  Samples/Pixel: 3
  Rows/Strip: 1
  Planar Configuration: single image plane