
What would happen?



Here is an interesting question.

I see that many people struggle with processing astro photos, and it is widely considered to be some sort of art / magic needed to get a good image.

What do you think would happen if you took a regular daytime photograph (raw data out of a DSLR / OSC or Mono + LRGB) and put it through the same paces you usually use to process astrophotography?

I think it would be interesting to see - we know how to judge a good daytime photo, but would we get anything resembling one by using our favorite astrophotography workflow?


For anyone wanting to give it a go, I prepared some files.

I took two exposures of the same scene. One is properly exposed and processed "in camera", while the second is heavily underexposed (to simulate the large dynamic range of astrophotography) and, as a consequence, rather noisy.

Here is the reference image:

[Reference image: sample.jpg]

(yes, fellow astronomers, that is a clear sky - that is how it looks :D )

Here are the extracted colors (split debayer) of the raw image of the same scene - try your astro processing on them and see what sort of result you get. A rough sketch of the split-debayer step follows the attached files.

red.fits

green.fits

blue.fits
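
If anyone wants to see roughly what the split-debayer step amounts to, here is a minimal sketch (it assumes an RGGB pattern and a made-up input file name, so the actual extraction used above may differ slightly):

```python
# Minimal split-debayer sketch: pull the R, G and B photosites out of a raw Bayer
# mosaic without interpolation. Assumes an RGGB pattern and a hypothetical input
# file name - check the pattern for your particular camera.
import numpy as np
from astropy.io import fits

mosaic = fits.getdata("raw_mosaic.fits").astype(np.float32)

red   = mosaic[0::2, 0::2]                              # R photosites
green = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # average of the two G photosites
blue  = mosaic[1::2, 1::2]                              # B photosites

for name, channel in (("red", red), ("green", green), ("blue", blue)):
    fits.writeto(f"{name}.fits", channel, overwrite=True)
```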


Very interesting challenge 🙂

A rough tweak / recombination in Photoshop: levels / curves (nowhere near the stretching needed on an astro image) plus some saturation and hue adjustments.

Very noisy, and I couldn't get near the original.

But I am supposed to be working 🙂

[Image: Photoshop result]


I also did very basic processing in Gimp - levels, curves, some saturation and denoising; the way I usually process an image when I don't pay much attention to the color workflow. It turned out much better than I expected:

[Image: demo-process.jpeg - Gimp result]

Even with boosted saturation, the colors were rather difficult to reproduce (I can't get the sky as blue as above, for example).

Wavelet denoising in Gimp does work wonders though.


Just realized that I used FitsWork and that it applies white balance, so the above data is not quite raw. I'll use dcraw to extract the actual raw data and post it later.
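
For reference, the extraction could look something like this (a sketch via Python's subprocess; the dcraw flags are from memory, so worth double-checking against the man page):

```python
# Sketch of pulling truly raw data (no white balance, no demosaic) out of a DSLR
# file with dcraw. Flags worth verifying locally:
#   -D  document mode: raw photosite values, no interpolation, no white balance
#   -4  linear 16-bit output
#   -T  write a TIFF instead of a PPM
# "IMG_0001.CR2" is a hypothetical file name.
import subprocess

subprocess.run(["dcraw", "-D", "-4", "-T", "IMG_0001.CR2"], check=True)
```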


22 minutes ago, tomato said:

Crikey! It's hard enough processing an image which is 90% dark sky; look at all those colours and textures, light and shade across the image - I don't think I'm up to it. ☺️

Well, that is the point exactly - it should not be hard to process either of the two images to a "proper" result. After all, it happens in cameras and in mobile phones - you point them at something, you press the button and you get a very nice looking image that corresponds to what you see.

 


15 minutes ago, vlaiv said:

Well, that is the point exactly - it should not be hard to process either of the two images to a "proper" result. After all, it happens in cameras and in mobile phones - you point them at something, you press the button and you get a very nice looking image that corresponds to what you see.

 

I totally agree, but of course it helps a lot that we can capture and process a well-lit daytime image with the Mk 1 eyeball and brain; we don't have that luxury with DSOs.
If they ever make an Astro camera where you just press a button and a perfectly calibrated, stacked, stretched, colour balanced, sharpened and noise reduced image pops out, I'll be in the first wave of purchasers.

 


6 minutes ago, tomato said:

I totally agree, but of course it helps a lot that we can capture and process a well-lit daytime image with the Mk 1 eyeball and brain; we don't have that luxury with DSOs.
If they ever make an Astro camera where you just press a button and a perfectly calibrated, stacked, stretched, colour balanced, sharpened and noise reduced image pops out, I'll be in the first wave of purchasers.

 

Some mobile phones are getting close to this...

Alan


21 minutes ago, tomato said:

I totally agree, but of course it helps a lot that we can capture and process a well-lit daytime image with the Mk 1 eyeball and brain; we don't have that luxury with DSOs.
If they ever make an Astro camera where you just press a button and a perfectly calibrated, stacked, stretched, colour balanced, sharpened and noise reduced image pops out, I'll be in the first wave of purchasers.

 

There is really not much difference between the two - a "well lit daytime image" and DSOs. Sure, the photon flux is very different - but we use long exposures to compensate for that.

I don't think there will be an Astro camera with a single button - but the software part is a very real option. The equivalent of producing a perfectly calibrated, stacked, color calibrated, sharpened and noise reduced image is already there in the form of the math / algorithms that have been developed.

 

 


But no software package comes with a single button labelled "Process Image", and with the almost infinite variability of the input data quality, I don't see how it ever could.

It might be an interesting experiment to try processing a dataset entirely on APP’s and StarTools’ default settings and see how it compares to a “human input” version.


21 minutes ago, tomato said:

But no software package comes with a single button labelled "Process Image", and with the almost infinite variability of the input data quality, I don't see how it ever could.

I don't think there is as much variety in the data as it might seem at first.

The PSF tells you all you need to know about image sharpness / resolution.

Stacking tells you all you need to know about pixel intensity and noise statistics (this is, by the way, very underutilized information - no one is using the standard deviation of the pixel samples, although it gives you very straightforward data on the noise).
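
As a sketch of what that statistic looks like in practice (the stack here is synthetic, just to keep the snippet self-contained):

```python
# Sketch: per-pixel noise estimate straight from the stack of registered subs.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.normal(loc=100.0, scale=5.0, size=(20, 64, 64))   # (n_subs, height, width)

mean_image  = stack.mean(axis=0)                              # the usual stacked result
noise_image = stack.std(axis=0) / np.sqrt(stack.shape[0])     # standard error of the mean, per pixel
snr_map     = mean_image / noise_image                        # straightforward per-pixel SNR estimate
```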

Color calibration is done once per equipment profile (so it is even easier than frame calibration that we regularly use).

Stretching can be done in such a way as to suppress background noise (again, statistics tells us what is noise and how much of it there is) and highlight the data without overexposing bright parts (whatever is bright and is not a star core needs to be kept from overexposing). The gamma curve can be kept standard or adjusted so that the image carries the most information (by calculating image entropy for different gamma settings).
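
A rough sketch of the entropy idea - try a range of gammas and keep the one whose histogram carries the most information (the image here is synthetic, purely for illustration):

```python
# Sketch: pick the gamma that maximizes the Shannon entropy of the image histogram.
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
img = np.clip(rng.gamma(shape=2.0, scale=0.1, size=(256, 256)), 0.0, 1.0)  # stand-in stretched image, 0..1

gammas = np.linspace(0.2, 1.0, 17)
best_gamma = max(gammas, key=lambda g: image_entropy(img ** g))
print(f"gamma with highest entropy: {best_gamma:.2f}")
```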

I think the variety in the data is not the issue - it is the variety of "processing workflows" that people use, possibly without fully understanding what they are doing.


2 hours ago, tomato said:

So if an ideal process workflow can be derived just from the image data statistics, why don’t we have a “Process Image” button on the Astro processing software packages?

Maybe because authors, as well as users, don't fully understand the processing workflow, or are influenced by the "box" - they make software the way users are accustomed to using it.

In any case, adding at least a "let me do this for you" kind of wizard might not be a bad idea (possibly with steps that users can inspect if they want to learn further).


I think in some respects, even if slowly, it is getting there. Take PI as one example: not too long ago its WBPP script came with a "warning" that it was not "optimal" and should not really be used in place of the manual processes. That warning no longer exists; in fact WBPP is now PI's preferred method and, what's more, it now incorporates more processes too.

I realise this is not post-processing but even so it's a nod towards ever more automation.


I have tried running @Pitch Black Skies's excellent M101 stack through StarTools using the default settings throughout. It produced a final result with a very bright background and strong false colours, clearly because the default settings in the second AutoDev stretch were not optimal. I altered just the Ignore Fine Detail and Shadow Linearity settings to correct this and ran everything else on defaults.

The final result has a clear green cast but overall it's not too far away IMHO.

[Image: M101_19_hr_10_mn_calibrated-crop, StarTools defaults]

 


Interesting discussion regarding a standardised workflow. Are any of you familiar with the works and writings of Ansel Adams? He basically only used black and white film for his artistic photography. ("When all else fails, you can use color.") But he hardly used a standardised workflow, even though he introduced the zone system - the fundamentals of which can be applied to astrophotography as well. Sort of.

What I'm trying to say is that image processing isn't a one-button affair. Even in camera you have to choose a white reference (sunny, cloudy, incandescent lighting, etc.). If you want more than snapshots out of your photography, you need to apply colour corrections, red eye removal, smoothing, sharpening, HDR processing, and so on. Definitely not a one-button process.


I would love a processing package that would produce the best result from the input data by following a set of rules based on the science. However, the first and biggest hurdle is determining what the 'best' result is. The huge variation of results presented in image processing competitions is testament to how hard getting consensus on this would be. For example, I would agree that most galaxy images display too much colour saturation, but that's what most imagers (including myself) do, and I can't see that changing.


5 minutes ago, tomato said:

I would love a processing package that would produce the best result from the input data by following a set of rules based on the science. However, the first and biggest hurdle is determining what the 'best' result is. The huge variation of results presented in image processing competitions is testament to how hard getting consensus on this would be. For example, I would agree that most galaxy images display too much colour saturation, but that's what most imagers (including myself) do, and I can't see that changing.

Colour calibration tools based on photometry and a visual spectrum capture (no useless LP filters) are pretty close to processing with facts, are they not?

Siril photometric colour calibration always produces good colours as long as you have enough integration and suitable stars are found.


1 hour ago, wimvb said:

Even in camera you have to choose a white reference (sunny, cloudy, incandescent lighting, etc.).

Choosing a white reference is there to let us match our perception of the color at capture time to our perception of the color later, when viewing.

This is because there are two sets of environmental conditions - one when shooting (which is variable in earth-based conditions) and one when viewing. White balance tries to do perceptual matching between the two.

I advocate that we completely drop the perceptual part (at least in the beginning) until we get the measured part right.

Why do I insist that there is no color balance in astrophotography? Simply because there is no lighting in outer space (this is not really true for solar system objects, but again, there is no variable lighting - you can't see Jupiter on a sunny day, on a cloudy day and under fluorescent lighting). We get the light as is, and the best we can do is to measure it properly and then "emit the same type of light" from our screens (color matching - if we took the original light and viewed it side by side with the light from the screen, they would look the same).

 


3 minutes ago, ONIKKINEN said:

Colour calibration tools based on photometry and a visual spectrum capture (no useless LP filters) are pretty close to processing with facts, are they not?

Siril photometric colour calibration always produces good colours as long as you have enough integration and suitable stars are found.

I'm not sure how good a color Siril, or any other photometric color tool, produces.

It is not down to the procedure - the procedure is good and sound. It is down to the samples. We are trying to calibrate a vast color space using only a very narrow set of samples.

I often show this image when talking about color, but here it is again:

[Image: chromaticity diagram showing the Planckian locus]

This is a 2D representation of the 3D color space - a sort of projection. There is a line drawn in it - the Planckian locus - and all black body radiators sit on that line. Star colors sit on that line (there is even an effective temperature scale marked along it). We are trying to calibrate the whole color space using only samples that we want to align on that line.

That would be fine if the projection were rigid (like when aligning two subs against star points - but there we know that both subs have the same scale and the same distances between stars).

The color space transform is just linear / affine - which means it can include rotation, stretching along any axis independently, shearing along any axis, and possibly an offset (translation). There are many degrees of freedom in the transform.

In order to derive a good transform, we really need samples that are spread as uniformly as possible. We also need some green samples, some deep blue samples and some pink samples.

We don't have those in outer space (at least not all of them and not regularly - we do have some narrowband sources that lie on the outer contour of the space, like the Hb, Ha and OIII emissions).

The best approach would be to do an "in house" color calibration (it needs to be done only once) and then use the color calibration tool in Siril and others to make fine adjustments to the color (the atmosphere tends to shift the color of an object because blue light is scattered more than red - the Sun, for example, appears red/orange near the horizon because of this effect, but that is not the true color of the Sun).
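
As a sketch of what such an "in house" calibration might boil down to (not an exact procedure - the patch values below are made up purely for illustration): shoot a chart of known patches, such as a ColorChecker, then solve for the 3x3 matrix that best maps the measured camera RGB to the reference RGB in the least-squares sense.

```python
# Sketch: derive a 3x3 color correction matrix from shots of known color patches.
# Both tables below are made-up numbers purely for illustration.
import numpy as np

measured = np.array([   # camera linear RGB of the patches
    [0.42, 0.31, 0.20],
    [0.18, 0.35, 0.12],
    [0.10, 0.15, 0.40],
    [0.80, 0.78, 0.70],
    [0.05, 0.06, 0.07],
    [0.55, 0.20, 0.25],
])
reference = np.array([  # known linear RGB of the same patches
    [0.45, 0.30, 0.18],
    [0.15, 0.40, 0.10],
    [0.08, 0.12, 0.45],
    [0.85, 0.84, 0.80],
    [0.04, 0.04, 0.05],
    [0.60, 0.18, 0.22],
])

# Least-squares solve of reference ≈ measured @ M
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

# Later, apply the same matrix to every pixel of a linear image: pixel_rgb @ M
corrected_patches = measured @ M
```

The more patches there are and the more uniformly they cover the color space, the better conditioned this solve is - which is exactly why star colors alone, all sitting near the Planckian locus, constrain it poorly.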


8 minutes ago, vlaiv said:

The best approach would be to do an "in house" color calibration (it needs to be done only once) and then use the color calibration tool in Siril and others to make fine adjustments to the color (the atmosphere tends to shift the color of an object because blue light is scattered more than red - the Sun, for example, appears red/orange near the horizon because of this effect, but that is not the true color of the Sun).

How would one subtract light pollution then? Most images are varying degrees of brown if the camera white balance was sound during capture. I don't think this goes into the fine adjustments category of colour processing.

The degree of light pollution often varies within the night too, so a simple calibration image of the sky colour to subtract from the images wouldn't work, unless a calibration image was taken for each sub.

But I see what you mean about the limited data available to produce all of the colours. The noisier my image is, the higher the likelihood that the result comes out with (what I think is) a bad colour balance. With maybe 6h+ images I don't see weird results at all, however; usually at this point the tool has several hundred stars to calibrate on, and I guess there are enough samples to get it right.


5 minutes ago, ONIKKINEN said:

How would one subtract light pollution then? Most images are varying degrees of brown if the camera white balance was sound during capture. I don't think this goes into the fine adjustments category of colour processing.

Light pollution is an additive signal (all light adds up), and as such it won't get mangled if we apply a linear transform to our raw data.

One of the properties of a linear transform is that it preserves linear combinations. This means that when we apply a color correction matrix (into any color space), the LP will still be added to our scene - but both the scene and the LP will be transformed into that color space.
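
In symbols, with M the color correction matrix, S the scene and P the light pollution: M(S + P) = M·S + M·P - the transformed pollution is still just an additive term, so it can be subtracted after the matrix is applied exactly as it could be before.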

Regardless of which color space we work in - we always remove light pollution in the same way.

It is probably one of the hardest parts to automate, since we don't have a clear distinction between what is background and what is foreground in the image.

I've developed an algorithm that works rather ok in this role, but it is not flawless. I'll demonstrate it a bit later. There are other algorithms like DBE that do the same.
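
Not that algorithm (and not DBE either), but as an illustration of the general idea behind such tools: sample pixels that look like background, fit a smooth low-order surface to them, and subtract it. A rough sketch, assuming a single-channel numpy image:

```python
# Sketch of a generic background-extraction step: sample faint pixels, fit a
# second-order 2D polynomial to them, and subtract the fitted surface.
import numpy as np

def remove_gradient(img, n_samples=500, clip_sigma=1.0, seed=0):
    h, w = img.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, n_samples)
    xs = rng.integers(0, w, n_samples)
    vals = img[ys, xs]

    # Keep only samples that look like background (faint pixels).
    keep = vals < np.median(vals) + clip_sigma * vals.std()
    ys, xs, vals = ys[keep], xs[keep], vals[keep]

    # Design matrix for a second-order polynomial in x and y.
    def design(x, y):
        return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

    coeffs, *_ = np.linalg.lstsq(design(xs.astype(float), ys.astype(float)), vals, rcond=None)

    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    background = (design(xx.ravel(), yy.ravel()) @ coeffs).reshape(h, w)
    return img - background

# Hypothetical use: a synthetic frame with a linear gradient plus noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
frame = 0.002 * xx + 0.001 * yy + rng.normal(0.1, 0.01, (128, 128))
flattened = remove_gradient(frame)
```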

