
Astrophotography pop art - critique of the process


vlaiv

Recommended Posts

58 minutes ago, Xilman said:

I am not arguing for discoveries, or for competing against Hubble. Far from it! Neither am I approving or disapproving of any pictures you wish to take.

When you go cycling (for pleasure, not transport) do you restrict yourself to only one or two standardized journeys or do you like to go exploring routes you have only rarely or never taken before?

What I am arguing for is for people to try taking images of objects which lie off the well-trodden tracks.

There are many, many objects in the sky which very few people image. Some of them are beautiful (in my eye anyway). Why not give some of them a try some time?  That's all.

For instance how often do you see an image of galactic cirrus like this one? It is visible from the entire northern hemisphere.

Polaris.jpg
 

Or this star cluster which is starting to be well placed for imagers:

NGC1647.jpg

Or this planetary nebula:

ngc6826_SXVH9_Manfred_rot180-crop.jpg

 

Or this globular:

ngc6934.jpg

 

Or this galaxy:

ngc524.jpg

That last has already been posted to SGL but it is not commonly seen! Note the background galaxies too.

Edited by Xilman
Fix typo
  • Like 1

1 hour ago, globular said:

 

Rather than counting, perhaps using brightness as a weight would help reduce the dominance of the background and ensure the best colour rendition is in the brighter areas?

And as we only need to produce the calibration pairs once per camera (not each time we image, like we do for flats), we could have a large number of colour samples rather than just a handful. (But I agree that colours more likely to appear in astro images are more important if only a limited number of them are used.)

Just as an example - here is a (wrong) calibration that I did some time ago when experimenting.

I say wrong because the math used was slightly off (and yet it gave very good results). The problem with the math is that I normalized both the raw and XYZ values - which should not be done (I wanted to produce a matrix that does not scale values unnecessarily, but I went about it the wrong way).

This was the template used; the calibrated camera was an ASI178 and I used a Xiaomi Mi A1 phone as the calibration source (in the hope that its display is properly calibrated to sRGB at the factory):

reference.png

I displayed the above image on my phone and recorded it with the ASI178mcc camera; the raw result was this:

linear_unbalanced.png

(This shows how much the raw data differs from RGB - yet we readily use it as RGB and later need to "remove the green cast" and whatnot - but all of this is properly handled with color management.)

linear_simple_balance.png

This is what happens when we simply "white balance" - we make sure that the bottom row of squares has RGB ratios of 1:1:1.
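To make that step concrete, here is a minimal sketch in Python of the kind of white balance described above (the array names and the patch coordinates are hypothetical, purely for illustration):

```python
import numpy as np

def simple_white_balance(raw_rgb, neutral_patch):
    """raw_rgb: H x W x 3 linear (debayered) data.
    neutral_patch: a slice selecting a region that should be neutral grey,
    e.g. one of the squares in the bottom row of the template."""
    patch_means = raw_rgb[neutral_patch].reshape(-1, 3).mean(axis=0)
    # Divide each channel by its mean on the grey patch so the patch ends up R:G:B = 1:1:1
    return raw_rgb / patch_means

# Hypothetical usage (the patch location depends on where the grey square landed in the frame):
# balanced = simple_white_balance(raw, (slice(900, 950), slice(100, 150)))
```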

One more step needs to be incorporated - the gamma transform for an sRGB image - so after applying sRGB gamma we get:

simple_balance_gamma.png
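The sRGB gamma used for that step is the standard transfer function; a minimal sketch:

```python
import numpy as np

def srgb_gamma(linear):
    """Standard sRGB encoding: linear light in 0..1 -> gamma-encoded values in 0..1."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)
```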

Now that really looks like the starting set of colors - but some colors are not quite right. They can be made to look better - more like the original colors - by using a color correction matrix, and then the result looks like this:

calibration.png reference.png

(I pasted the source again next to it for comparison.) There are still very small differences between the two - some are due to sensor gamut (the thing we discussed, that there will be some residual errors), some are due to the wrong math (which I later realized) and some are due to the calibration source used (I'm not sure how accurate my phone screen is).

Even with all of that - color matching is very good in my opinion.
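For anyone who wants to reproduce the color correction matrix step: given matched pairs of white-balanced, linear camera RGB values and the known linear values of the template squares, the 3x3 matrix can be fitted by least squares. A minimal sketch (the sample values below are placeholders, not my actual measurements; in practice you would use many more squares than three):

```python
import numpy as np

# One row per template square: measured, white-balanced linear RGB from the camera
measured = np.array([
    [0.41, 0.32, 0.20],   # placeholder values
    [0.15, 0.38, 0.30],
    [0.10, 0.12, 0.45],
])
# Corresponding known linear values of the template squares (linear sRGB or XYZ)
target = np.array([
    [0.45, 0.30, 0.18],
    [0.12, 0.40, 0.28],
    [0.08, 0.10, 0.50],
])

# Least-squares fit of measured @ M.T ~ target; M is the 3x3 color correction matrix
M = np.linalg.lstsq(measured, target, rcond=None)[0].T

# Applying it to an image: corrected = image.reshape(-1, 3) @ M.T, reshaped back afterwards
```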

 

  • Like 2

I apologise Vlaiv.

I was being deliberately provocative in order to generate a response. I understand why you started this thread and accept the validity of your scientific approach to colour. I also understand why the Hubble image is the way it is.

I just have a different perspective, and presumably different motivation for photographing celestial objects than you appear to have.

It is my opinion that Science and Art both have their place in photography. It is up to the photographer to decide on the balance.

  • Like 3

@Astro Noodles

No need to apologize.

I do share your opinion - everyone should choose for themselves the level of science and/or art present in their work. My concern is only that the science part is not that readily available / understood, and I thus want people to familiarize themselves with that part so they can properly choose the balance between the two.

  • Like 3

13 minutes ago, Xilman said:

For instance how often do you see an image of galactic cirrus like this one? It is visible from the entire northern hemisphere.

Polaris.jpg


 

Or this star cluster which is starting to be well placed for imagers:

NGC1647.jpg

Or this planetary nebula:

ngc6826_SXVH9_Manfred_rot180-crop.jpg

 

Or this globular:

ngc6934.jpg

 

Or this galaxy:

ngc524.jpg

That last has already been posted to SGL but it is not commonly seen! Note the background galaxies too.

Sorry, I wasn't disagreeing or arguing with you - just trying to show that different folk get different things from the hobby. As you say, each person takes what they think looks cool. There's a hell of a lot of things to target. I'm very new to the hobby, and at the moment I'm going for the 'popular' stuff because - well - those are also the ones I like most of what I've seen. We are all different - some folk get their kicks observing or imaging doubles, for example - which hold no interest for me whatsoever. Kinda the same for globular clusters. Each to their own. 👍

 


12 minutes ago, powerlord said:

Sorry, I wasn't disagreeing or arguing with you - just trying to show that different folk get different things from the hobby. As you say, each person takes what they think looks cool. There's a hell of a lot of things to target. I'm very new to the hobby, and at the moment I'm going for the 'popular' stuff because - well - those are also the ones I like most of what I've seen. We are all different - some folk get their kicks observing or imaging doubles, for example - which hold no interest for me whatsoever. Kinda the same for globular clusters. Each to their own. 👍

 

I hadn't picked up that you are new to this lark. In that case you are well advised to practice your skills where you can get feedback on how you are progressing by comparing your efforts with those of others.

When you have honed your skills, I urge you to branch out into areas which you can make your own. If you do the job properly other people will take inspiration from you!


Without participating any further in the discussion, I want to share an image that shows how choosing different white references in PixInsight Photometric Color Calibration affects the colours in a stretched image.

I used the RGB data of the latest IKI data set. Combined the raw colour masters and applied DBE to even out the background. Note that I checked the "Normalize" tick box in Target Image Correction. This means that DBE does not affect the median image values of the three channels, and as such does not perform a background neutralization.

Next I defined a small preview without stars as a reference for Background Neutralization. After that I cloned the image x 12 and applied Photometric Color Calibration with the preview as background reference and different white references. Finally I stretched the image using Arcsinh stretch, which is supposed to best retain colour balance in an image. All images received the same stretch.

I resampled the images to get a manageable final image size (integer resample / 4 with average).
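For context, the reason an arcsinh-type stretch tends to retain colour is that all three channels get multiplied by the same per-pixel factor derived from the intensity, so channel ratios are preserved. A minimal sketch of the idea (not the PixInsight implementation; the stretch factor is arbitrary):

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=100.0):
    """rgb: linear H x W x 3 data in 0..1. All channels are scaled by the same
    per-pixel factor, so R:G:B ratios (and hence hue) are unchanged."""
    intensity = rgb.mean(axis=2, keepdims=True)
    stretched = np.arcsinh(stretch * intensity) / np.arcsinh(stretch)
    factor = np.where(intensity > 0, stretched / intensity, 0.0)
    return np.clip(rgb * factor, 0.0, 1.0)
```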

The tag in each panel indicates the white reference that I used in colour calibration:

Star spectral types: (from red to blue) O5V, B0I, A0I, F0I, G0I, K0II, M0III, O5V and Photon Equalization

Galaxy types: ASP (Average Spiral Galaxy, the default in PCC), Sa, S0, Sb

"Take your pick"

spectraltypes.thumb.jpg.285379dc511e906f95305a93bfbb8e55.jpg

 

Edited by wimvb
  • Like 1
  • Haha 1

53 minutes ago, wimvb said:

Without participating any further in the discussion, I want to share an image that shows how choosing different white references in PixInsight Photometric Color Calibration affects the colours in a stretched image.

This is where I have a problem with that process in PI - color calibration of astronomical images does not require a white reference.


1 hour ago, globular said:

@Xilman would it be ok if I use your wonderful globular picture for my avatar?

Fine by me, but it is not my image.  I found it on the net somewhere while searching for images of that GC.

The original can be found by examining the URL of that image.

Edited by Xilman
  • Thanks 1

9 hours ago, jager945 said:

Wait... what?

 

6 hours ago, wimvb said:

??

Please elaborate.

White reference is related to perceptual color - not to physical color.

We can see a certain spectrum of light as being white in certain conditions, but we will see the same spectrum as a little bluish, or maybe a little yellowish or even reddish, in different conditions - depending on our environment.

Similarly, we can see one spectrum of light as being white and a different spectrum as being white under different conditions. D50 and D65 will both be seen as white in certain circumstances - but when you observe them next to each other you'll clearly see the difference, and it can happen that neither of them is perceived as white at that moment.

The point of calibration is not to deal with the perceptual phenomena of color; the point of calibration, as the first step in processing the data, is to "align" one's instrument with an agreed-upon standard so that all can measure the same thing. We color calibrate our camera so it can reliably produce XYZ values (within an error margin) that will match both the CIEXYZ standard observer and other properly calibrated cameras.

Using any sort of "reference white" in this context is like calibrating our ruler to the length we believe one meter to be, rather than calibrating against the agreed-upon standard.

Reference white only comes into the picture once we start dealing with the perceptual side of things - depending on what we want to achieve (modeling viewing conditions) - and should be used in the last step of the processing workflow.
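To illustrate what "last step" means in practice: if one did want to adapt a calibrated XYZ image from one assumed white to another at the very end, a standard way is a Bradford chromatic adaptation. A minimal sketch, assuming the commonly quoted Bradford matrix and D50/D65 white points (the D50 -> D65 direction is just an example):

```python
import numpy as np

# Bradford cone response matrix (as used in ICC-style chromatic adaptation)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

# Example white points as XYZ with Y = 1
D50 = np.array([0.96422, 1.00000, 0.82521])
D65 = np.array([0.95047, 1.00000, 1.08883])

def adaptation_matrix(src_white, dst_white):
    """3x3 matrix adapting XYZ colors from src_white to dst_white (Bradford method)."""
    src_cone = BRADFORD @ src_white
    dst_cone = BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst_cone / src_cone) @ BRADFORD

# Hypothetical usage on a single XYZ triple:
xyz_d50 = np.array([0.35, 0.40, 0.30])
xyz_d65 = adaptation_matrix(D50, D65) @ xyz_d50
```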

  • Like 1

48 minutes ago, vlaiv said:

The point of calibration is not to deal with the perceptual phenomena of color; the point of calibration, as the first step in processing the data, is to "align" one's instrument with an agreed-upon standard so that all can measure the same thing. We color calibrate our camera so it can reliably produce XYZ values (within an error margin) that will match both the CIEXYZ standard observer and other properly calibrated cameras.

This is very important, in my view, and failure to appreciate it may be the source of some of the confusion we have seen here and elsewhere.

Once again, it touches on the difference between a scientific image (conforming to agreed standards) and a pretty picture (conforming with one's view on what it should look like).


3 hours ago, Xilman said:

This is very important, in my view, and failure to appreciate it may be the source of some of the confusion we have seen here and elsewhere.

Once again, it touches on the difference between a scientific image (conforming to agreed standards) and a pretty picture (conforming with one's view on what it should look like).

Getting back to the original post, can we assume the judges of the IKI M81/M82 competition will be assessing the entries on the basis of  what makes a pretty picture rather than a scientific image, or is it possible to do both?

If it is the latter, then there must still be an element of subjective judgment to achieve a balance between the two, and if the assessment is 100% quantitative then why do we need the judges?

  • Like 1

10 minutes ago, tomato said:

If it is the latter, then there must still be an element of subjective judgment to achieve a balance between the two, and if the assessment is 100% quantitative then why do we need the judges?

Or indeed any human input at all. 


25 minutes ago, tomato said:

Getting back to the original post, can we assume the judges of the IKI M81/M82 competition will be assessing the entries on the basis of  what makes a pretty picture rather than a scientific image, or is it possible to do both?

If it is the latter, then there must still be an element of subjective judgment to achieve a balance between the two, and if the assessment is 100% quantitative then why do we need the judges?

I think that it is possible to do both, and in fact I would argue that color calibration is necessary for quality artistic work.

If color is a completely arbitrary thing, why do we take RGB data at all? Why not spend all that time on luminance and then assign colors to the image completely arbitrarily? That would create both a deeper and a more "artistic" image.

I think that most people prefer to have "bounds" or "guidelines" in their artistic expression - they want to start with some color and then shape it to their liking.

If so, why not start with "correct / calibrated" colors rather than arbitrary raw colors? Starting with correct colors means that you have a common starting point whether you are processing data from your own camera or someone else's - or, if you have multiple cameras, you'll always have a common starting point. This avoids dependence on your particular camera and situation: "I've learned to process the data from my own camera - yours is so much harder for me to get the colors I want".

 

  • Like 2

Competitions with judges will generally be scored on both how "accurately" the elements of the scene are reproduced and how "pleasing" the image is.

If an entrant likes oval stars then the judges will likely score low on accuracy but might still be won over if the image is particularly pleasing.

If an entrant distorted size to make one of Jupiter's moons look larger and hence closer to the planet, then the judges might not be impressed with the unnatural look (unless it was a specific artistic category); but again they might claw back score on aesthetics.

Nice sharp focus will score high on accuracy, but a softer image might look nicer sometimes.

Colour, though, seems to be judged ONLY in the "pleasing" category... justified on the basis that there is no right or wrong colour.

I think vlaiv has presented a good argument that there could be both "accuracy" and "aesthetic" elements for colour. The colour management could have a standard (like there generally is in terrestrial photography) against which accuracy can be assessed. That doesn't mean that there is no freedom - just as with the other characteristics of the image - to be artistic, or to change colour balance to, for example, imply a particular set of environmental conditions to a viewer.

Then, outside of competition, images could adhere to both the accuracy standard and an agreed viewing environment - making them scientifically useful.

  • Like 2

11 minutes ago, vlaiv said:

If color is a completely arbitrary thing, why do we take RGB data at all? Why not spend all that time on luminance and then assign colors to the image completely arbitrarily? That would create both a deeper and a more "artistic" image.

I am in agreement about the fact that we should all be able to achieve the 'right/true colours'. But what are your thoughts on the Hubble palette? That isn't true colour, but it is acceptable to many, is it not?


Just now, AstroMuni said:

I am in agreement about the fact that we should all be able to achieve the 'right/true colours'. But what are your thoughts on the Hubble palette? That isn't true colour, but it is acceptable to many, is it not?

It is not true color, of course, but there is strong reasoning behind how it was originally devised.

- it is very easy to just directly map components to channels

- Ha, which is usually the strongest signal (best SNR) and also tends to be present in most parts of the target, is mapped to green, which carries the most luminance information

- No point in mapping Ha and SII to true color as they are virtually indistinguishable by color - 656nm and 672nm are so close that we see them both as deep red.

SHO was originally used as a scientific tool - a way to visualize the distribution of the gasses in a single image rather than looking at three separate images. I guess that scientists loved the way it looks, and although it is not an accurate / natural rendition of the target, they decided to release it to the general public (probably under the influence of the PR team - there is often pressure to justify funding, and scientists often need to make their work likeable as well).
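As a concrete illustration of that channel assignment (a minimal sketch; the per-channel normalization is my own simplification, not the Hubble pipeline):

```python
import numpy as np

def sho_composite(sii, ha, oiii):
    """Classic SHO (Hubble palette) mapping: SII -> red, Ha -> green, OIII -> blue.
    Inputs are stretched 2D narrowband masters."""
    def norm(channel):
        lo, hi = channel.min(), channel.max()
        return (channel - lo) / (hi - lo) if hi > lo else channel
    # Ha lands in green, which dominates the perceived luminance of the composite
    return np.dstack([norm(sii), norm(ha), norm(oiii)])
```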

 

  • Like 1

So we can have both. And on the perceptual element of the processing, taking @jager945 and @wimvb's point, do we still need a white reference?

So going forward, would it be possible to see an initial scientifically colour-calibrated image (just to demonstrate that this has been done correctly) and then a final perceptually altered image? Would I be correct in saying every imager's scientific version would look the same when viewed on the same monitor, or, as I suspect, would all other elements of the images need to be standardised for this to be true?


19 minutes ago, tomato said:

So we can have both. And on the perceptual element of the processing, taking @jager945 and @wimvb's point, do we still need a white reference?

So going forward, would it be possible to see an initial scientifically colour-calibrated image (just to demonstrate that this has been done correctly) and then a final perceptually altered image? Would I be correct in saying every imager's scientific version would look the same when viewed on the same monitor, or, as I suspect, would all other elements of the images need to be standardised for this to be true?

It depends on what you are trying to achieve.

A white reference is needed when you are performing the transform from the XYZ color space, which is an absolute color space, into a color space that deals with perceptual qualities. It is one of the quantities needed to describe the viewing environment.

Let me give you two distinct scenarios and explain that a bit better.

Scenario 1.

You want to show the color of a star on your computer screen in such a way that, if you had the actual light of that star next to your computer screen and you looked at the two colors side by side, you would say: these are the same color.

Scenario 2.

You are in a space suit floating in outer space; you view the star with your own eyes and see its color. You remember what that light looked like. You now see an image of the star on your screen, and you want to be able to say - yes, that is the exact color of the star as I remember seeing it when I was in outer space.

The first scenario is one of color matching - we want to match the physical quality of the light so that when you see them side by side you can say - yes, they are the same color. The second scenario is trying to match your perception. The thing is, human perception changes with the environment, and in order to match perception we need to alter the physical side of the light so that it produces the same perceptual response in the new environment (in the first instance you were in the dark looking at the faint light from a star/galaxy; in the second case you are in a lit room with certain illumination, looking at a much brighter image on your computer screen - the same light will produce two different "color sensations" in those different environments).

All this time I've been advocating for Scenario 1, or possibly Scenario 2 with clearly defined viewing conditions for both environments. In fact, we have a "destination" environment in the first scenario as well - we will all be viewing the result in some environment.

This viewing environment is defined by the sRGB standard. If we create a JPEG or PNG image and don't include color space information, the sRGB color space is implied. The sRGB color space has a clearly defined viewing environment:

image.png.2095e938dd58c3cd0b4774278d779d29.png

https://en.wikipedia.org/wiki/SRGB

Here you can see that the white point of the color space is D65. The encoding and ambient white points have to do with the materials being photographed - not applicable in our case, as we don't illuminate anything in outer space.

That is the second thing I'm advocating - we should treat all the light that we capture in astrophotography as the emissive case and not the reflective case. The rationale behind this is:

- Most sources are in fact emissive - stars, the open / globular clusters formed from those stars, the galaxies formed from those same stars; emission nebulae also emit their own light. The only reflective objects are gas/dust in the form of reflection nebulae or IFN.

- We handle the reflective case because we are used to looking at objects under different lighting conditions. A white point is used because you see a sheet of white paper as white under sunlight, under fluorescent light and under candlelight. In all of these cases the light reflected off the paper is different, yet we perceive it as white. One of the reasons we use a white point is so we can see the paper as white both in the given viewing conditions and later when looking at the image we took on our computer screen. We expect the paper to be white in both cases, as we are used to it.

Reflection nebulae and IFN never fall into this category - we never get to see them under different light. We can never change the illumination that lights them - they have a "fixed" spectrum as far as we are concerned, unlike light reflected off paper, which changes depending on the light we turn on.

Since it is a fixed spectrum, it behaves as the emission case (we can't change the spectrum of a star either) - it is fixed for all observers on Earth, not only for us.

Given all the above, we can either:

Use the XYZ color space that our calibration produced - do the stretch on the Y component, which is perceptually very close to luminance (although XYZ is not a perceptually uniform color space, neither is our stretch linear, so we need not worry about that as long as we are happy with how the stretch looks in mono), and scale all pixels (or rather their X and Z components) by the ratio of stretched_Y to starting_Y - in the end we simply do the XYZ -> sRGB conversion and we are done (see the sketch at the end of this post),

or

Use a perceptual color space like CIECAM02, or the newer version CAM16 (still not a CIE standard), to produce perceptual correlates - as if we were floating in outer space looking at light whose intensity has been amplified (a stretch of the XYZ data by stretching Y) - and then reproduce the same perceptual color in our viewing environment, which again needs to be sRGB since that is what's implied for a JPEG / PNG image.

After you have done that - well, tweak colors as you please - or just leave them as they are.
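Here is the sketch I mentioned for the first option: stretch Y, scale X and Z by the same per-pixel ratio so chromaticity is preserved, then convert XYZ -> sRGB using the standard (D65) matrix and gamma. The arcsinh stretch and the stretch factor below are just examples; any mono stretch you are happy with works the same way:

```python
import numpy as np

# XYZ (D65) -> linear sRGB matrix, per the sRGB specification
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def srgb_encode(linear):
    """sRGB transfer function for linear values in 0..1."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def stretch_and_convert(xyz, stretch=200.0):
    """xyz: H x W x 3 calibrated, linear XYZ image normalized to 0..1.
    Stretch Y only, scale X and Z by the same per-pixel ratio, then convert to sRGB."""
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    Y_stretched = np.arcsinh(stretch * Y) / np.arcsinh(stretch)  # example stretch
    ratio = np.where(Y > 0, Y_stretched / Y, 0.0)
    xyz_stretched = np.dstack([X * ratio, Y_stretched, Z * ratio])
    rgb_linear = xyz_stretched @ XYZ_TO_SRGB.T
    return srgb_encode(rgb_linear)
```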

 

  • Thanks 1

Ok, thanks.

I would really like to follow the recommendations, so can anybody tell me if this can be done using existing processing software (e.g. PI, StarTools, APP or Affinity Photo), without using spreadsheets to calculate the values of each pixel?

Sorry if this sounds like the lazy option, but I think it has more chance of being widely adopted if it is relatively straightforward and easy to carry out.

Perhaps someone smarter than me could write a PI script assuming one doesn’t already exist?

  • Like 2

1 minute ago, tomato said:

Ok, thanks.

I would really like to follow the recommendations, so can anybody tell me if this can be done using existing processing software (e.g. PI, StarTools, APP or Affinity Photo), without using spreadsheets to calculate the values of each pixel?

Sorry if this sounds like the lazy option, but I think it has more chance of being widely adopted if it is relatively straightforward and easy to carry out.

Perhaps someone smarter than me could write a PI script assuming one doesn’t already exist?

I can create a sort of tutorial once the transform matrix is known, but creating the transform matrix for a particular setup is probably the problematic part.

I've written about it before in this thread - one needs a reference calibration device and a bit of recording, math and spreadsheet work to derive the transform matrix.

Alternatively, maybe software exists that does this for us? I haven't searched, but creating color correction matrices is a common operation - photographers do it all the time when using color checker passports.

 

  • Like 3
