Black body calibration matrix for ASI1600 + Baader filters


Recommended Posts

This is essentially the product of some of my research, and of my failure to do stellar calibration using stars in the image.

More on this particular topic can be found here:

Because color calibration failed on a sample of stars from the image in the range of 3000-7000 K (actually a bit less), I decided to take an algorithmic approach to determining the correct transform matrix. Here are the results:

First, a little intro. I used the CIE XYZ color matching functions, found online, to calculate the XYZ values of black bodies of different temperatures. The known transform matrix was then applied to convert these to linear sRGB space.
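A minimal sketch of that calculation, assuming the matching functions have been loaded as arrays `cie_x`, `cie_y`, `cie_z` sampled on a uniform wavelength grid `wl` (in nm). Names and helper functions here are illustrative, not from the original post:

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # Planck, light speed, Boltzmann (SI)

def planck(wl_nm, temp_k):
    """Black body spectral power density (overall scale is arbitrary here)."""
    wl = wl_nm * 1e-9
    return (2 * H * C ** 2 / wl ** 5) / np.expm1(H * C / (wl * KB * temp_k))

# Standard XYZ -> linear sRGB (D65) matrix
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def blackbody_srgb_unit(temp_k, wl, cie_x, cie_y, cie_z):
    """Expected linear sRGB unit vector for a black body at temp_k."""
    s = planck(wl, temp_k)
    # Uniform grid assumed: the common d-lambda factor cancels on normalization.
    xyz = np.array([(s * cie_x).sum(), (s * cie_y).sum(), (s * cie_z).sum()])
    rgb = XYZ_TO_SRGB @ xyz
    return rgb / np.linalg.norm(rgb)
```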

For the sensor data I used the QE graph for the ASI1600 published by ZWO:

[Image: ASI1600 QE graph as published by ZWO]

And the same for the Baader filters:

[Image: Baader LRGB-C filter transmission curves]

I used the following web-based utility to read off values from these images:

https://automeris.io/WebPlotDigitizer/

The result of the read-off, plotted in LibreOffice:

[Image: digitized ASI1600 QE curve plotted in LibreOffice]

and for filters:

[Images: digitized transmission curves for the red, green and blue filters]

Not all curves were sampled over the full wavelength range - missing values are assumed to be 0.
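One way to implement that, assuming each digitized curve comes back from WebPlotDigitizer as (wavelength, value) pairs (hypothetical names):

```python
import numpy as np

wl = np.arange(380.0, 701.0, 5.0)  # common wavelength grid in nm

def resample(wl_sampled, values, grid=wl):
    """Interpolate a digitized curve onto the common grid; wavelengths
    outside the sampled range get the value 0, as described above."""
    order = np.argsort(wl_sampled)
    return np.interp(grid,
                     np.asarray(wl_sampled, dtype=float)[order],
                     np.asarray(values, dtype=float)[order],
                     left=0.0, right=0.0)
```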

Now the fun part.

[Image: pre- and post-calibration color error versus black body temperature]

This graph shows the error in linear RGB space before and after calibration. The error is calculated as the distance in 3D Euclidean space between two RGB unit vectors: the expected linear sRGB unit vector for a black body of a given temperature (X-axis), calculated from the CIE XYZ matching functions and transformed into linear sRGB space, and (column K) the raw RGB values that would be obtained with the ASI1600 camera and Baader filters on a perfect telescope (no color-impacting optics). Column L is the same thing, but with the raw RGB values corrected by the derived correction matrix.
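A sketch of that comparison, reusing the helpers from the first snippet (again, the names are mine; the "perfect telescope" is modeled by simply omitting any optics term):

```python
def camera_raw_rgb(temp_k, wl, qe, filt_r, filt_g, filt_b):
    """Simulated raw RGB for a black body: spectrum x QE x filter, integrated.
    (Photon energy is not accounted for here - the post returns to this below.)"""
    s = planck(wl, temp_k)
    return np.array([(s * qe * filt_r).sum(),
                     (s * qe * filt_g).sum(),
                     (s * qe * filt_b).sum()])

def unit(v):
    return v / np.linalg.norm(v)

def color_error(expected_rgb, raw_rgb, matrix=np.eye(3)):
    """3D Euclidean distance between the two RGB unit vectors."""
    return np.linalg.norm(unit(expected_rgb) - unit(matrix @ raw_rgb))
```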

We can see a significant drop in error. However, the error is still quite large in the red/yellow zone. This is because sharp cut-off filters are not well suited to capturing true color - which is why color sensors have smoothly varying R, G and B filters on their pixels.

Here are two more graphs that will be of interest - these show the differences in the actual R, G and B values. First, the raw data:

[Image: raw R, G and B values versus expected values]

Here we can see that green is pretty much as strong as it should be, while red is weaker than it should be and blue is stronger than it should be.

Same graph after correction:

[Image: the same comparison after correction]

I just realized that I did not account for the energy of photons. Stay tuned / watch this space - I need to redo the whole thing :D
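For reference, the missing weighting is this: QE is defined per photon, while Planck's law gives spectral power, so the camera-side integrals need an extra λ factor (power divided by the photon energy hc/λ). A sketch of the fix, following on from the snippets above:

```python
def camera_raw_rgb_photon(temp_k, wl, qe, filt_r, filt_g, filt_b):
    """Like camera_raw_rgb above, but weighted by photon count, not energy.
    The constant 1/(h*c) is dropped since it cancels on normalization."""
    s = planck(wl, temp_k) * wl  # extra lambda factor converts power to photon flux
    return np.array([(s * qe * filt_r).sum(),
                     (s * qe * filt_g).sum(),
                     (s * qe * filt_b).sum()])
```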

 


Most stars aren't exactly black bodies, cf. spectrometry. I feel the pursuit of some sort of 'accurate' colour rendering in astrophotography is a never-ending minefield - perhaps akin to a Mandelbrot Set!! Don't forget to take into account atmospheric extinction, monitor calibration, ambient light, and what your eyes actually see (how are you going to know for sure?). Surely a simple, approximately accurate but satisfying, pretty, coloured picture will do the job? Will anybody really care that much otherwise? Me, I mostly stick to monochrome/luminance these days - so much easier, especially when life is so short!

Louise


9 hours ago, Adreneline said:

Hi @vlaiv.

Don’t know if it’s just me but none of the images after the QE graph show anything - it’s just a uniform grey box.

Adrian

Not sure why that would be. The images display nicely on my computer. Maybe we should ask others if they have the same issue?


8 hours ago, Thalestris24 said:

Most stars aren't exactly black bodies, cf. spectrometry. I feel the pursuit of some sort of 'accurate' colour rendering in astrophotography is a never-ending minefield - perhaps akin to a Mandelbrot Set!! Don't forget to take into account atmospheric extinction, monitor calibration, ambient light, and what your eyes actually see (how are you going to know for sure?). Surely a simple, approximately accurate but satisfying, pretty, coloured picture will do the job? Will anybody really care that much otherwise? Me, I mostly stick to monochrome/luminance these days - so much easier, especially when life is so short!

Louise

For this method it does not really matter that a black body only approximates real stars. I just needed some sort of spectrum that:

a) is well defined, so I can calculate exact color information in the CIE XYZ color space and the values that will be obtained by the sensor/filter combination for a source giving off that spectrum;

b) has a good enough spread in color space (the issue with real stars is that most lie in a very narrow range of temperatures, while extremes like very hot stars are rare, so the chances of finding them in the image are slim);

c) gives colors that we are likely to encounter in astrophotography, or close to them.

I agree that getting accurate color is not easy - but that is because we have not yet developed the workflow or tools to do it. Removing background light pollution gradients is not an easy task either, but once tools are available and people figure out how to incorporate them into their processing workflow, we start getting nicer images.

The theory of color matching has existed for many years now (the CIE XYZ color space was developed in 1931) - we just need to apply it sensibly to astrophotography and we will get accurate colors.


30 minutes ago, vlaiv said:

For this method it does not really matter that a black body only approximates real stars. I just needed some sort of spectrum that:

a) is well defined, so I can calculate exact color information in the CIE XYZ color space and the values that will be obtained by the sensor/filter combination for a source giving off that spectrum;

b) has a good enough spread in color space (the issue with real stars is that most lie in a very narrow range of temperatures, while extremes like very hot stars are rare, so the chances of finding them in the image are slim);

c) gives colors that we are likely to encounter in astrophotography, or close to them.

I agree that getting accurate color is not easy - but that is because we have not yet developed the workflow or tools to do it. Removing background light pollution gradients is not an easy task either, but once tools are available and people figure out how to incorporate them into their processing workflow, we start getting nicer images.

The theory of color matching has existed for many years now (the CIE XYZ color space was developed in 1931) - we just need to apply it sensibly to astrophotography and we will get accurate colors.

It was just late-night musing on my part! 😄

Louise


25 minutes ago, Adreneline said:

Out of interest, how does your proposed approach compare with Photometric Colour Calibration in PixInsight?

Adrian

Significantly.

I don't really understand PCC in PI. I tried to understand it and read the documentation, but there are sentences there that diverge significantly from my understanding of color, color models/spaces and color matching.

For example, this:

Quote

In the PCC tool, the default white reference is based on the average spectra of Sb, Sc and Sd galaxies.

In my understanding, the white reference is tied to the display medium, not to the capture phase. The white point should therefore be chosen to correspond to the color space in which you are going to present your data.

For most purposes we present our work in web image formats (png, jpg). These use the sRGB color space by default, unless you specify a different color space and provide a transform for it (an ICC profile in PNG). sRGB has the well-known white point of D65 - there is no choice there, nor can you arbitrarily set one.
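Since we work in linear sRGB throughout, note that before saving to png/jpg the linear values still have to pass through the standard sRGB transfer curve. A minimal sketch:

```python
import numpy as np

def srgb_encode(linear):
    """Standard IEC 61966-2-1 transfer curve: linear [0,1] -> encoded [0,1]."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)
```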

PCC deals with photometric filters and defines color as:

Quote

The color of an astronomical object is defined by measuring its light intensity in two wave bands. We measure the light passing through two filters, then convert to magnitudes the recorded signals for both filters, and subtract these magnitudes. This difference in magnitudes is what we call color index.

It therefore does not deal with human-visible color.
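In code, the color index the quote describes is just a magnitude difference (illustrative only; filter zero points omitted):

```python
import math

def color_index(flux_band1, flux_band2):
    """e.g. B - V: measure flux through two filters, convert to magnitudes, subtract."""
    return -2.5 * math.log10(flux_band1 / flux_band2)
```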

My approach does not actually use data from the image - it is an attempt to create a "calibration profile" from sensor and filter information. I simulate how a sensor with a certain QE graph would behave, when paired with filters with certain response graphs, if exposed to light from a black body of a certain temperature, and compare that with the standard color for that temperature.

From that I derive the transform matrix that needs to be applied to the data.
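One way such a matrix can be derived is a least-squares fit over a range of temperatures, reusing the helper functions and data arrays from the sketches earlier in the thread. The exact fitting method is not spelled out in the thread, so treat this as an illustration:

```python
import numpy as np

temps = np.arange(3000.0, 10001.0, 250.0)
raw = np.stack([unit(camera_raw_rgb_photon(t, wl, qe, filt_r, filt_g, filt_b))
                for t in temps])
target = np.stack([blackbody_srgb_unit(t, wl, cie_x, cie_y, cie_z)
                   for t in temps])

# Solve raw @ M.T ~= target for the 3x3 matrix M in the least-squares sense.
M = np.linalg.lstsq(raw, target, rcond=None)[0].T
# Apply to camera data as: corrected_rgb = M @ raw_rgb
```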

This is very similar to what DSLR manufacturers do when they create white balance presets for their cameras - except that their presets depend on the illumination used (hence you can choose daylight, shade, tungsten, incandescent, ...), while here we don't have an illuminated object but rather a source of light, which does not change and has a certain "signature". For that reason we derive only a single "preset".

To be precise with this approach, you should also include information such as the atmospheric response and the instrument response.

I've found this very nice image:

[Image: spectral response diagram reproduced from the BAA paper linked below]

From a British Astronomical Association paper here:

https://britastro.org/vss/ccd_photometry.htm

and I'm going to "apply" it as well to the data from the thread I linked to at the beginning of this one.


Just an update on all of this.

I eventually implemented everything and tried two different ASI1600 QE curves - one published by ZWO and one produced by Christian Buil (I'm not really sure which one is correct - they differ quite a bit). I also tried with and without atmosphere compensation - and simply could not get a proper calibration.

It is most likely due to flats. For this to work, the flats must be normalized to 1 (the flat for each color must be scaled so that its brightest part has a value of exactly 1). If this is not done, the color values end up scaled by unknown constants (the max ADU of each flat).
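A minimal sketch of that normalization (variable names assumed; in practice a high percentile may be safer than the raw maximum because of hot pixels):

```python
import numpy as np

def normalize_flat(master_flat):
    """Scale a per-filter master flat so its brightest part equals 1."""
    return master_flat / np.percentile(master_flat, 99.9)  # ~max, robust to hot pixels

# Per channel: calibrated = (light - master_dark) / normalize_flat(master_flat)
```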

I'll keep trying to get a proper calibration this way, but I'll need some way of verifying the results - probably by shooting a couple of stars of known spectral class/spectrum (like Vega and similar) and comparing them with the synthetic results, to at least determine which QE curve is correct.

