
Astrophotography pop art - critique of the process


vlaiv


I thought photometric calibration was meant to give you "true" colours (I'm using Siril). It probably does - until you stretch! I'm very new to processing, but it seems I get very different results every time I process the data, and I guess it must be down to how much I stretch it. The asinh transform is meant to preserve colour, but you can't stretch enough with it to, for example, see the IFN in the IKI data. You have to do a histogram stretch for that, but then the colours get corrupted. I'm probably doing it all wrong. Had a dabble with StarTools and found the same - impossible to get consistent results, but again I'm probably doing it wrong.

First time I've had a go with the IKI data, and it's very cool having others' results to compare to. But I have no idea how they get those results, as I get nowhere near!

Mark


16 hours ago, vlaiv said:

I guess that scientists loved the way it looks, although it is not an accurate / natural rendition of the target, so they decided to release it to the general public (probably under the influence of the PR team - there is often pressure to justify funds, and scientists often need to make their work likeable as well).

So in a way I guess we are saying that if there is a reasoning behind the colour scheme and it looks pleasing to the eye, then it's an acceptable entry for a competition? ;)


36 minutes ago, markse68 said:

First time I've had a go with the IKI data, and it's very cool having others' results to compare to. But I have no idea how they get those results, as I get nowhere near!

You are not alone. I am in the same boat 🙂


16 minutes ago, AstroMuni said:

So in a way I guess we are saying that if there is a reasoning behind the colour scheme and it looks pleasing to the eye, then it's an acceptable entry for a competition? ;)

I would not know. Doesn't that depend on the competition itself? The rules of the competition and the judges can best tell you what is valued in that particular competition.


12 hours ago, tomato said:

I can provide a calibrated monitor for the calibration device; it's manipulating the data that I would need help with.

What camera do you intend to calibrate?

Can you mount it on a simple lens, or is a telescope the only option? What sort of screen will you use for calibration? It should be calibrated for sRGB / D65.

Here is what you need to get started. In fact, we can do this step by step together - I can do a color calibration of a regular image to show how it works on a daytime photo, and you can do the same with your camera.

1. Get an image that displays a distinct range of colors in the form of uniform squares or rectangles - they need to be easy to image and to measure a mean value per channel.

We can use this resource: https://www.babelcolor.com/colorchecker-2.htm#CCP2_images

or we can make our own calibration image

2. Have your calibration screen show the calibration image in a dark room (the screen should be the only significant light source) and take an image of it with your camera. You'll need either OSC raw data or three filters - R, G and B.

3. Take the calibration image and open it in software that can measure pixel values (averaged over some surface). For each color patch, measure the R, G and B values and then use this calculator (a scripted version of this step is sketched after this list):

http://www.brucelindbloom.com/index.html?ColorCalculator.html

image.png.e7191482f3944b5ff5f4148829ad7be2.png

The white reference should be set to D65, gamma to 2.2 and the RGB model to sRGB. If you measure values in the 0-255 range, check the Scale RGB option (otherwise leave it unchecked for the 0-1 range). When you enter the measured RGB values and click the RGB button, you should get the XYZ values calculated.

Above I calculated the (normalized, Y=1) XYZ values for RGB = 1,1,1 - white in sRGB space, which is the white reference for that space, D65. You can see the xy chromaticity coordinates come out as 0.313 and 0.329 (second row), which checks out if we ask Google for verification:

image.png.fb24bb0e2125179a720e8a4d15ab4feb.png

4. After you've written down the XYZ coordinates of all the color patches on the calibration image, measure the actual image that you recorded with your camera - raw_r, raw_g and raw_b. Just measure average ADU values - don't use the calculator for this step.

5. Note both sets of values in spreadsheet software - the XYZ values calculated from the RGB values of the original calibration image, and the raw RGB values measured from your recording of it.
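For anyone who would rather script step 3 than click through the online calculator, here is a minimal Python / NumPy sketch of the same maths, assuming the patch values are read from the original sRGB calibration image in the 0-255 range. It uses the standard sRGB (D65) transfer curve and matrix; the function names are just for illustration.

```python
import numpy as np

# Standard linear sRGB -> XYZ matrix for the D65 white point (the same
# maths the Lindbloom calculator applies with RGB model = sRGB, ref = D65).
SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (input scaled to 0-1)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def patch_to_xyz(rgb_255):
    """Convert one measured sRGB patch (0-255 values) to XYZ."""
    return SRGB_TO_XYZ @ srgb_to_linear(np.asarray(rgb_255, dtype=float) / 255.0)

def xy_chromaticity(xyz):
    """xy chromaticity coordinates from XYZ."""
    x, y, z = xyz
    return x / (x + y + z), y / (x + y + z)

# Sanity check with the white patch (255, 255, 255): this should land on the
# D65 white point, xy ~ (0.313, 0.329), as in the calculator screenshot above.
print(xy_chromaticity(patch_to_xyz([255, 255, 255])))
```

Running patch_to_xyz over every patch gives the list of XYZ values you need for step 5.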

 


1 hour ago, vlaiv said:

What sort of screen will you use for calibration? It should be calibrated for sRGB / D65.

Just a thought... if using a phone or tablet as the reference display, then check the display settings and make sure it's not on 'vivid' or 'cinema' or whatever. And turn off the blue light filter!


Bit late to this discussion. The key reason to use star calibration is that complete knowledge of the imaging train cannot be obtained, because part of that train is the atmosphere. Neutralizing the background corrects not just for issues in what we normally consider the optical train and the camera, but also for (variable) light pollution. The amount of moisture or dust in the air changes transmission significantly, particularly in the blue. I remember imaging Ca-K in Estes Park, Colorado on August 20, 2017, and being surprised at the need for sub-millisecond exposures. The next day in Glendo, Wyoming, during the eclipse, I needed 3-5 ms exposures, due to a high haze caused by smoke from distant forest fires. That's a factor of 5-10 at the edge of the visible spectrum. The best way to calibrate any imaging system is by imaging a calibration source and working from there. Stars with known spectral properties are very handy in that respect. Calibrating on them helps correct for any colour cast in the optics as well.

The other issue I see is that the CIE system works with a "standard observer" model. No human being is such a standard observer, because the spectral sensitivity of the eyes varies dramatically depending on lighting conditions. Proper calibration of the image can certainly maintain the correct ratios of colour components. How the brain interprets these is a matter beyond our control.

Another issue is the dynamic-range compression needed. When printing on paper, the dynamic range is at most 30-40:1 (if you have a VERY good quality print, with 2-3% reflection in the darkest areas). Slides and monitors can handle 10x that dynamic range, but that is still woefully inadequate to show all the detail. Non-linear stretching is one way to let the human eye see a much larger range of features than a linear rendering of the raw data would allow. The other is the use of pseudo-colour, which I use in most of my solar images.
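As a concrete illustration of that kind of non-linear stretch (not the exact processing used on the solar images below), here is a minimal Python sketch of an asinh-style stretch, assuming the data is already linear and scaled to 0-1; the softening parameter is just an illustrative knob.

```python
import numpy as np

def asinh_stretch(img, softening=0.01):
    """Asinh stretch: lifts faint detail while compressing the highlights,
    so a large linear dynamic range fits into the ~100:1 range a screen or
    print can actually show. `img` is assumed linear and scaled to 0-1."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return np.arcsinh(img / softening) / np.arcsinh(1.0 / softening)

# Example: input values spanning four orders of magnitude end up within about one
linear = np.logspace(-4, 0, 5)        # 0.0001 ... 1.0
print(asinh_stretch(linear))
```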

The first image is straight from the camera, showing detail on the surface, but hardly showing the prominences

Sun_112727_lapl4_ap410_out_stitch.thumb.jpg.188b92272eca421c079528c1b7e8a20e.jpg

True colour would look like this

Sun_112727_lapl4_ap410_out_stitch-red.thumb.jpg.a0c557107db49f3b3445d43b440aeb66.jpg

Use of a non-linear stretch combined with a heated body colour map brings out hidden detail.

Sun_112727_lapl4_ap410_out_stitch-pinv-colour.thumb.jpg.a10f65c7d173c0803dcfde08886b1289.jpg


4 hours ago, michael.h.f.wilkinson said:

The first image is straight from the camera, showing detail on the surface, but hardly showing the prominences

...

Use of a non-linear stretch combined with a heated body colour map brings out hidden detail.

Once more showing (a) the difference between a pretty picture and a scientific image and (b) that science rarely requires a pretty picture.

All your images look nice (IMO). The last shows details of interest to some (most?) people but, as you say, the colour representation has been munged heavily.

The important image, again IMO, is the one which preserves all the original data. The reason: it lets others re-process it for their own purposes, whether scientific or aesthetic.

For my part, I very often perform some very heavy-duty image processing on my data before publishing the images. I do one or more of contrast-stretching, noise reduction, smoothing, removing background, deconvolving, applying a false-colour map, superimposing isophote contours, and perhaps more I have forgotten. I do these things to make it easier to see features which I consider interesting and/or important. None of these things are done for aesthetic reasons; all are done to make scientific analysis easier. Further, almost all my images are taken with a single wide band-pass filter.

Needless to say, the original data is saved for posterity.  Not that posterity has ever done anything for me ...


13 hours ago, Xilman said:

Once more showing (a) the difference between a pretty picture and a scientific image and (b) that science rarely requires a pretty picture.

All your images look nice (IMO). The last shows details of interest to some (most?) people but, as you say, the colour representation has been munged heavily.

The important image, again IMO, is the one which preserves all the original data. The reason: it lets others re-process it for their own purposes, whether scientific or aesthetic.

For my part, I very often perform some very heavy-duty image processing on my data before publishing the images. I do one or more of contrast-stretching, noise reduction, smoothing, removing background, deconvolving, applying a false-colour map, superimposing isophote contours, and perhaps more I have forgotten. I do these things to make it easier to see features which I consider interesting and/or important. None of these things are done for aesthetic reasons; all are done to make scientific analysis easier. Further, almost all my images are taken with a single wide band-pass filter.

Needless to say, the original data is saved for posterity.  Not that posterity has ever done anything for me ...

I always keep the original, linear image for that exact reason. I also like the following representation, which uses a heated body colour map directly applied to the linear grey scale. This map is known to represent differences in intensity in a way that corresponds well to perceptual differences in the human visual system, again allowing a larger dynamic range to be represented than the roughly 64 grey levels the eye can distinguish on screen (or 32 on paper). It is frequently used in scientific visualisation in many fields. Science often does need pretty pictures.

Sun_112727_lapl4_ap410_out_stitch-colour.thumb.jpg.bffbed61a71ae28fb175defff24b75ce.jpg
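For anyone wanting to try the same kind of representation, here is a minimal Python sketch of applying a heated-body style pseudo-colour map to a linear grayscale frame. It uses matplotlib's 'hot' colormap as a stand-in for the exact map used on the image above, and the file name is only an example.

```python
import numpy as np
from matplotlib import cm
import matplotlib.pyplot as plt

def heated_body(gray):
    """Map a linear grayscale image (scaled 0-1) through a heated-body
    style colour ramp (black -> red -> yellow -> white). The underlying
    intensities are unchanged; only their colour representation differs."""
    gray = np.clip(np.asarray(gray, dtype=float), 0.0, 1.0)
    return cm.hot(gray)[..., :3]      # drop the alpha channel

# Example with a synthetic gradient standing in for real solar data
gray = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
plt.imsave('heated_body_demo.png', heated_body(gray))
```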

BTW, strictly speaking this is a pseudo-colour map, not false colour. False colour is when you map different spectral bands to RGB, as in the case of the Hubble palette, or the IR-R-G image of Saturn I took years back, mainly because the seeing in B was so bad that I decided to give this false-colour representation a go:

post-5655-0-47720200-1401519988.png


Quote

First time I've had a go with the IKI data, and it's very cool having others' results to compare to. But I have no idea how they get those results, as I get nowhere near!

You are not alone. I am in the same boat 🙂

Me too, I cannot get the colours right at all, which is why I haven't posted anything. I guess I am so used to narrowband (living in an LP location) that I don't get enough practice with broadband.

I have been avoiding posting on this thread, being an astrophotographer, but these are my feelings about it.

Broadband

When imaging in broadband using RGB, I think the main thing is to get the colours balanced; after that it is just a question of the degree of saturation, which I think is down to personal choice. Processing can often wash out star colour in the stretching, so I think it is OK to replace that lost colour later in the processing.

Narrowband

However, with narrowband being false colour, and with the filters able to be combined in a number of ways, I think there is no hard and fast rule on what the colours should look like. I am also a bit unsure about this obsession with getting rid of green in images; if it's there, it's there, and I have chosen to keep it in a couple of my images. I do confess I have sometimes tried to emulate colours I have liked in a similar image, but I have also done my own thing.

I guess with narrowband it is "beauty in the eye of the beholder".

Carole        

  


In the meantime, let me share my processing of the IKO M81/M82 data. I'll do it here because I don't think I should be included in the competition due to this thread (hopefully I'm not breaking any rules by doing this :D ).

This is not a fully color-corrected processing workflow, and the Ha was not handled the way it should be (I mixed it in later, not at the XYZ stage).

I just did basic channel balancing, since I don't have a CCM for the camera and filters used to obtain the data. Some colors might be slightly off because of that. I did not do any perceptual adjustments either (I tried RawTherapee, as it has CAM02 adjustments, but I simply can't figure out how it works - or rather how to adjust it for what I believe are the proper parameters in this case).

Color management was done completely in the maths - I did not make a single color adjustment in processing software, except for the Ha blend at the end, which was done in Gimp (I had no idea what I was doing, but I did manage to blend it in satisfactorily).

 

result.thumb.jpeg.585ef8c7df8144b2b7f3210ac0ce00ee.jpeg


I like that a lot, Vlaiv - it has a sophistication to it. The colours don't match that Hubble image you liked, but that has a lot of blue which doesn't seem to be there in the IKO data, and not much of the Ha star-forming regions that are there in the IKO data.

How did you stretch it as much as you have, to bring out the IFN, and yet still keep your stars so nice and tight?

Mark


4 hours ago, markse68 said:

The colours don't match that Hubble image you liked, but that has a lot of blue which doesn't seem to be there in the IKO data

That baffled me for a bit as well, but then I did a "litmus test" on the image above, and it matches theory very well.

image.png.e892969a4b2706ed29e5e09bb3ca3e38.pngimage.png.942a3ece30758ef63a66c85b487f83f6.pngimage.png.67114ce29d1b2ef2f81aa27d7c2e68b5.pngimage.png.9765b018577c1ae2522390278b9ae9ab.png

The color bar is taken from here:

https://en.wikipedia.org/wiki/Stellar_classification

I also matched several stars to this scale, checked their stellar class, and was able to guess their temperature correctly just by looking at the image.

I guess that the Hubble team emphasized the blue data because it shows the star-forming regions - young, hot stars are much more frequent there - and this happens in the spiral arms because of the gravity and rotation of the galaxy: density waves form those arms, and they disturb the gas and dust, which then collapses and forms new hot stars.

It is still present in the image above, but it is not emphasized as much - it is depicted at its "natural" intensity rather than boosted to be visible.

On the other hand, the Ha is boosted, as I did not color manage it - I just blended it into the final color-managed image. It is there thanks to the separate Ha data that was taken.

4 hours ago, markse68 said:

How did you stretch it as much as you have, to bring out the IFN, and yet still keep your stars so nice and tight?

I did my stretch on the luminance data alone and used the RGB to produce the color for it. I did a basic levels stretch in Gimp, and the initial stretch was even stronger than in the image above. Only when I added the color did I notice I had gone too far, and then I backed off a bit.

Here is the original luminance data stretched (strong-stretch version):

lum_processed.thumb.jpeg.32306fdf9f8c46cf3ae2322fc5b91b68.jpeg

This is just a levels stretch; I used the middle and "bottom" (left) sliders to produce this image - I did not touch the right one, as the core of M81 was close to the saturation point anyway. I used selective sharpening and noise reduction (sharpening in high-SNR areas and noise reduction in low-SNR areas) - this was done with layer masks, where the mask was the original layer (stretched differently than the original image).

A simple stretch and no masks for the stars or anything - only levels, and the stars stayed under control because the data is good.
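As a rough sketch of the general approach described above (stretching the luminance on its own and taking the colour from the linear RGB), here is one common way to do the combination in Python / NumPy. This is an illustration under assumptions, not the actual Gimp workflow; a simple black-point-plus-gamma stretch stands in for the levels adjustment, and the default parameter values are arbitrary.

```python
import numpy as np

def levels_stretch(lum, black=0.0, gamma=0.25):
    """Levels-style stretch standing in for the Gimp adjustment:
    clip the black point, rescale to 0-1, then apply a midtone gamma."""
    lum = np.clip((np.asarray(lum, dtype=float) - black) / (1.0 - black), 0.0, 1.0)
    return lum ** gamma

def lrgb_combine(lum_linear, rgb_linear, black=0.0, gamma=0.25):
    """Stretch the luminance alone, then re-colour it with per-pixel
    ratios taken from the linear RGB data, so the stretch drives
    brightness but not colour. lum_linear is HxW, rgb_linear is HxWx3."""
    rgb_linear = np.asarray(rgb_linear, dtype=float)
    stretched = levels_stretch(lum_linear, black, gamma)

    # colour ratios: each channel divided by the brightest channel per pixel
    rgb_max = np.max(rgb_linear, axis=2, keepdims=True)
    ratios = np.divide(rgb_linear, rgb_max,
                       out=np.ones_like(rgb_linear), where=rgb_max > 0)

    return np.clip(stretched[..., None] * ratios, 0.0, 1.0)
```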


Inspired by this thread I've just been messing around with my QHY462c.

I thought I might be able to get the colour reproduction straight out of the camera quite close to true colour with regard to planetary imaging. We do see lots of planetary images with funny colour casts, so I thought it would be worth having a go at 'calibrating' the planetary cam. I put calibrating in inverted commas as I haven't done any of the maths or transforms described by vlaiv... all I did was:

1) download the colour calibration chart linked to earlier and display it on my phone (Galaxy S10)

2) fit a meteor lens to the camera and point it at the phone screen in a dark room (with a Baader L filter fitted)

3) bring up the camera feed in FireCapture on a fairly decent monitor which is located so as not to cast light on the phone/cam (I don't think it is properly calibrated, but it's as good as I've got)

4) tweak the red, green and blue levels in the capture settings until the colour on the feed from the cam matches as closely as possible the colours in the calibration chart, and note them for future reference

It's not perfect - the resulting colours are still not that close to the chart - but it is way closer than the default, and it only took a couple of minutes. It should enable much better colour images straight out of the camera.

 

 

 

462c colour test.png


3 minutes ago, CraigT82 said:

4) tweak the red, green and blue levels in the capture settings until the colour on the feed from the cam matches as closely as possible the colours in the calibration chart, and note them for future reference

You can do an even simpler thing that achieves the same - just make a capture of the color checker and measure the gray patches - any of them - for their R, G and B values. A neutral patch should give you equal values of R, G and B.

It won't on a raw image - but it gives you the reciprocals of the channel weights (divide by the measured R value for R, by the measured G for G, and by the measured B for B). If you fix G to be 1, you can calculate the R and B weights as well.
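A minimal sketch of that calculation in Python, assuming you have measured the mean raw ADU of a gray patch in each channel (the example numbers are made up):

```python
def channel_weights(gray_patch_raw):
    """White-balance weights from a raw capture of a gray patch: G is
    fixed to 1 and R / B are scaled so the patch comes out neutral.
    `gray_patch_raw` is the mean (R, G, B) ADU measured over the patch."""
    r, g, b = (float(v) for v in gray_patch_raw)
    return g / r, 1.0, g / b   # multiply raw R, G, B by these weights

# Example: gray patch measured as R=1420, G=1980, B=1610 ADU (made-up values)
print(channel_weights((1420, 1980, 1610)))   # ~ (1.39, 1.0, 1.23)
```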

 

