
What would happen?



@ONIKKINEN Here is the result of the "automatic background removal" on an image that has already featured in this thread - M101 by @Pitch Black Skies

[attached image: M101, three-panel background removal result]

Left is the image after removal, the middle panel is what the algorithm identified as foreground and background (white and black), and the right panel is the removed gradient.

I guess extended nebulae filling the FOV would probably be an issue for the algorithm - but I don't have any test data to run it against (I did run it against data provided by the IKI observatory and it did fine, despite the background not being very well defined in some cases).

 


1 hour ago, vlaiv said:

@ONIKKINEN Here is the result of the "automatic background removal" on an image that has already featured in this thread - M101 by @Pitch Black Skies [...]

Is this different from the tools already available, like Siril background extraction, Photoshop GradientXTerminator, PixInsight this and that? This particular dataset was, in my opinion, very good and needed the bare minimum of work to get a decent result.

So how about worse data, how does this algorithm work on that?

Try with this one if you want to: Camelopardalis-IFN.fit

That's a split green channel stack from a target I have in the works (for who knows how long). This particular stack has terrible SNR, terrible gradients and generally just isn't good data. The IFN structures around NGC 2633 (the weird-looking barred spiral galaxy to the left) are very weak at this integration, and anything other than a perfect background removal process kills them completely.

But back to the light pollution calibration thing. If one has the hypothetical on-camera colour calibration thing going on at capture/calibration, and one still has to do a background calibration for the stack, then isn't this just normal colour calibration with extra steps?


13 hours ago, ONIKKINEN said:

Is this different from the tools already available, like Siril background extraction, Photoshop GradientXTerminator, PixInsight this and that? This particular dataset was, in my opinion, very good and needed the bare minimum of work to get a decent result.

To be honest, I have no idea if it is different, since I don't know the implementation details of the algorithms you mentioned.

My algorithm:

1) works on linear data

2) needs almost no user input (no selection points or anything like that). It has two parameters which almost always stay at their defaults when I apply it.

It removes the background light level and any linear gradient that is due to light pollution. It does not deal (nor intend to deal) with more complex background gradients caused by poor stacking, high-altitude clouds or failed flat calibration. It is not a background flattening algorithm - it expects proper data, calibrated and clear of artifacts.
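In case anyone wants the general idea spelled out, here is a rough sketch of that kind of approach - fit a plane a*x + b*y + c to pixels classified as background and subtract it. This is only an illustration (numpy, with a crude sigma-clip mask standing in for the real foreground/background classification), not the actual implementation:

```python
import numpy as np

def remove_linear_gradient(img, clip_sigma=3.0, n_iter=5):
    """Fit a plane a*x + b*y + c to background pixels and subtract it.

    Background pixels are found by iterative sigma clipping, so bright
    foreground (stars, galaxy cores) gets excluded from the fit.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.ones_like(img, dtype=bool)            # start with every pixel

    for _ in range(n_iter):
        # least-squares plane fit over the currently accepted pixels
        A = np.column_stack([xx[mask], yy[mask], np.ones(mask.sum())])
        coeffs, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
        plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]

        # reject pixels that sit well above the plane (foreground signal)
        resid = img - plane
        mask = resid < clip_sigma * np.std(resid[mask])

    # background level and linear gradient removed; background mean ~ 0
    return img - plane
```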

13 hours ago, ONIKKINEN said:

So how about worse data, how does this algorithm work on that?

Try with this one if you want to: Camelopardalis-IFN.fit

Here is the result of applying the algorithm to that data:

Camelopardalis-IFN-background-removed.fits

It would be nice if you could do a small side-by-side comparison with the other methods you use - just a simple stretch after removing the background.

13 hours ago, ONIKKINEN said:

But back to the light pollution calibration thing. If one has the hypothetical on-camera colour calibration thing going on at capture/calibration, and one still has to do a background calibration for the stack, then isn't this just normal colour calibration with extra steps?

Not sure what you are asking, but here is how I classify things:

1. Regular sub calibration (darks, flats, that lot)

2. Stacking

3. Color calibration - this step is meant to transform (L)RGB data (irrespective of whether it comes from mono or OSC) to the XYZ color space (which can be thought of as "standardized" sensor raw - but it is more than that, as it is aligned with our visual system - Y corresponds to our perception of luminance).

4. Gradient removal. This step is actually "optional". In a daytime camera-and-image setup, the closest thing to it is "red eye removal". It is an alteration of the data to account for a well understood phenomenon, in order to more faithfully render a target that has been tampered with by environmental factors (flash and retinal reflection for red eye, light pollution for astrophotography).

Step 4 does not really depend on step 3 - you can remove gradients both pre and post color calibration. This is because light adds linearly and a color space transform is also a linear operation, so the result of 3 then 4 will be the same as 4 then 3 (a tiny numerical check of this follows the list).

5. Stretch

.... rest of it.
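To make the "both operations are linear, so the order does not matter" point concrete, here is a tiny numerical check (numpy; the 3x3 matrix is a made-up placeholder, not a real camera-to-XYZ calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear RGB stack: a 64x64 "scene" plus a per-channel linear gradient
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
scene = rng.random((3, h, w))
gradient = np.stack([0.01 * xx + 0.02 * yy + c for c in (5.0, 4.0, 6.0)])
img = scene + gradient

# illustrative 3x3 color matrix (placeholder values only)
M = np.array([[0.49, 0.31, 0.20],
              [0.18, 0.81, 0.01],
              [0.00, 0.01, 0.99]])

def apply_matrix(m, data):
    # per-pixel matrix multiply: (3, 3) x (3, h*w)
    return (m @ data.reshape(3, -1)).reshape(data.shape)

# path A: color transform first, then subtract the (transformed) gradient
a = apply_matrix(M, img) - apply_matrix(M, gradient)
# path B: subtract the gradient first, then color transform
b = apply_matrix(M, img - gradient)

print(np.allclose(a, b))   # True - 3 then 4 equals 4 then 3
```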

 


4 hours ago, vlaiv said:

It would be nice if you could do a small side-by-side comparison with the other methods you use - just a simple stretch after removing the background.

The comparison is difficult in this case, since your algorithm crushed the blacks to 0. Was that intended?

[attached screenshot: result from your tool]

Siril background extraction:

[attached screenshot]

ASTAP linear gradient tool:

[attached screenshot]

And what it looked like before any of this:

[attached screenshot]

None of the methods gave what I would prefer to get out of background removal. Siril and ASTAP don't remove the background gradient completely, and your tool's result turns out very different from the rest, so I'm not sure how to compare them. I would use GradientXTerminator later in processing to deal with the corners that the gradient removal tools have left too dark, and it would turn out OK.

4 hours ago, vlaiv said:

Not sure what you are asking, but here is how I classify things: [...]

The way I understood the on-camera colour calibration thing is that it's done just once, and the raw subs themselves would have a corrected colour balance. I guess it's the same as just leaving this step for the stacked image though? Not sure I understood the point you were making, so not sure I know how to ask the right questions 😃.

But if it did go this way, and the image had the corrected colour balance when stacked (or this was applied to the stacked image), I would still have to do background neutralization and gradient removal to deal with the brown added by light pollution, and if that step is not perfect, the previously set in-camera colour balance would not result in good colours. That's what I thought: if one has to do colour calibration one way or another anyway, why bother with the in-camera colour transformation? Maybe I just don't get it.

Not sure any of what I typed above makes sense; I have a clear thought in my head that I want to type out, but I don't think I got it out 😅. I'll stop here so I don't get more confused.


2 hours ago, ONIKKINEN said:

The comparison is difficult in this case, since your algorithm crushed the blacks to 0. Was that intended?

That is not what was intended, nor what happened. I guess the software you are using can't deal with negative values.

It puts the background at zero - and since there is some noise in the background, some values end up negative - but that is just a math thing: you can put the black point wherever you want, all the data is still there.
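If the software insists on clipping at zero, a small pedestal brings everything back above zero without losing anything - a one-liner sketch of the idea, assuming numpy:

```python
import numpy as np

def add_pedestal(img, target_background=0.001):
    """Shift background-subtracted data so the noise floor sits just above zero.

    After background removal the noise straddles zero; software that clips
    negative pixels would crush half of that noise to black. A constant offset
    keeps every value positive while preserving all the information.
    """
    return img - img.min() + target_background
```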

Here it is stretched in Gimp:

[attached screenshot]

The results are not much different from ASTAP and Siril, and I guess it's down to the actual data - the gradients are not due to simple light pollution, but perhaps to some calibration issues, or something that happened during stacking.

If you want to investigate the possibility of a stacking issue, try a plain add/average stack and see if you can successfully wipe the background.

Sigma reject algorithms can cause issues with linear gradients.

A sum / average of images with linear gradients will produce a linear gradient - even if the gradients have different directions, the result will still be a linear gradient. Sigma reject can start to do funny things if the subs are not normalized for the gradient as well: in some subs the gradient will run "to the left" and in others "to the right", and in some it will be stronger on one side of the image than on the other - and all those pixels will be seen as "deviant" because they don't follow the same distribution as the others (the signal really is different there; a global offset can be handled with regular sub normalization, but the differing gradients cannot).

I have another algorithm, which I termed "Tip-Tilt sub normalization", that deals with this at the sub level and aligns the gradient to be the same in each sub, thus creating "stack compatible" subs that work well with sigma reject algorithms.
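Conceptually it comes down to something like this - a simplified sketch only (numpy), not the actual code, and a real version would mask bright foreground out of the plane fits:

```python
import numpy as np

def fit_plane(img):
    """Least-squares fit of a*x + b*y + c to an image (crude: uses all pixels)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(img.size)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return coeffs[0] * xx + coeffs[1] * yy + coeffs[2]

def tip_tilt_normalize(subs):
    """Give every registered sub the same tilted background plane (that of the
    first sub), so sigma-reject stacking no longer treats the differing
    gradients as outliers."""
    reference_plane = fit_plane(subs[0])
    return [sub - fit_plane(sub) + reference_plane for sub in subs]

# usage: normalized = tip_tilt_normalize(list_of_registered_subs)
# then feed `normalized` to the usual sigma-clipped average
```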


2 hours ago, vlaiv said:

That is not what was intended, nor what happened. I guess the software you are using can't deal with negative values. [...]

Well, how about that - now it looks more or less the same as with the other tools, perhaps even a bit better? Siril reports that some pixel values are down to -44 in the previous stack, which would indeed seem to be the cause of the weird-looking blacks. This new version you posted is as I'd expect. I definitely would not have figured that one out myself.

I chose this stack for this experiment because I know it to be flawed in many ways regarding the gradient and flats: there were light leaks, shots from 2 different focusers, different secondary spider vanes, pre- and post-flocking, and 2 different methods of shooting flats, so yes, the flats have not worked perfectly for all of the subs, which is the cause of the still-uneven background after the extraction attempts. How I deal with a fragile image and a strong gradient is to remove it from the subs pre-stacking and then recompute normalization. It takes forever with thousands of subs, but it works great in the end and there are no surprises with sigma clipping and normalization once most of the gradients are removed (the parts that can be - not applicable to the areas where the flats have failed, of course).

But from what I can see now, your algorithm works great even for data that has some flaws.


It seems to me that the differences between daytime and astrophotography include:

1) Dynamic range. Almost everything which interests us in an astrophoto is compressed into a tiny part of the histogram, so compressed as to be invisible in the linear image. This brings us to...

2) The 'decompressing' of the invisible data, known as stretching. This can be done in various ways to create different relationships between the brightness variations we are revealing by decompressing. We have a limited dynamic range available, so do we favour the very faint at the expense of local contrasts higher up the brightness range? Etc. Do we invert any brightnesses? (If we do, we should be sent to see the Headmaster. 😁) Which brings us to...

3) There are decisions to be made in AP stretching. Hooray! Different imagers will make different decisions and bring out different features. That's why, thank God, the one-click processing package will never be able to produce the best image. (There isn't a best one. They have different priorities.)

4) The daytime imager, using little or no stretching, has a much reduced problem with noise.

5) AP is done at infinity so the consequences, good and bad, of finite depth of field do not apply.

6) Multiple observers of the photographed daytime scene can see it and, largely, agree on what they've seen and how the photo relates to that. Nobody can see M33 as it is photographed, in any telescope. That's why we photograph it.

I'll stop there...*

*No I won't. 👹 I wish there was a one-click processing package for my astrophotos = I wish there was a one-stop word processing package which would write the perfect poem about the person I love.

😁lly


@ollypenrice

You have been known to take a snap or two in daylight, right?

Do you start with a completely raw image and process it from scratch, or do you start with what your camera provides? More importantly - if you take two cameras, shoot the same scene at very similar settings and save the images as 16-bit for post-processing, do they:

1) look pretty much the same

2) look pretty much like the scene you saw (if you exposed properly)?

I took a raw file and, with DCRaw, exported the actual raw data from the sensor:

[attached image: raw sensor data (left) vs. standard raw conversion (right)]

Left is the raw image, right is what is usually presented by any software that extracts data from the raw file.

Note a few things about the right image, as opposed to the left:

1. Image is flat fielded

2. Image is color corrected

3. Image is gamma adjusted (which is really stretch)

When I'm talking about an "auto" button - or an automated mode - I'm saying the following:

Same as with a daytime photo and camera - there is a certain workflow that prepares the image to either be viewed or further post-processed; nobody starts with the true raw in daytime photography.

The point of this standardized workflow would be to go from the raw image (left) to an image that can be viewed or further post-processed (right) - but one that will match between people with different setups if they are shooting the same target, and that, to the best of our knowledge, will match what we would potentially see if we were floating in outer space and the light from the distant source were bright enough (in some cases - maybe even as is).
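For reference, the kind of processing a raw converter applies after demosaicing looks roughly like this - the white balance gains, color matrix and gamma below are placeholder numbers for illustration, not real calibration data for any camera:

```python
import numpy as np

def raw_to_display(rgb_linear, wb_gains, cam_to_srgb, gamma=2.2):
    """Generic raw-converter steps after demosaicing:
    white balance -> camera-to-sRGB color matrix -> gamma (a fixed stretch).

    rgb_linear : (3, H, W) demosaiced, dark/flat-corrected linear data in [0, 1]
    """
    # 1. white balance: per-channel gains
    balanced = rgb_linear * np.asarray(wb_gains).reshape(3, 1, 1)

    # 2. color correction: 3x3 matrix from camera space to a standard space
    h, w = balanced.shape[1:]
    corrected = (cam_to_srgb @ balanced.reshape(3, -1)).reshape(3, h, w)

    # 3. gamma adjustment - effectively a non-linear stretch
    return np.clip(corrected, 0, 1) ** (1.0 / gamma)

# placeholder numbers, purely for illustration
wb_gains = [2.0, 1.0, 1.5]
cam_to_srgb = np.array([[ 1.6, -0.4, -0.2],
                        [-0.3,  1.5, -0.2],
                        [ 0.0, -0.5,  1.5]])
```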

The same math and physics work in both cases, as there are sensors and there is light. No difference, except for a few minor details:

1) In daytime photography we need to set white balance because our brain adapts to environmental factors (we see differently and expect colors to match what we remember, not what we would see side by side - it is not the camera that sees differently)

2) In daytime photography it is exposure length (and f-stop) that sets what is seen in the image; in astrophotography that is determined by SNR (and to some extent by total exposure, since SNR depends on it for a given setup)

3) We have the atmosphere to deal with

Other than that, the process is the same.

I just want to create a scientifically based, sane / same starting point for different setups that depends only on the target (and not on the rest of the equipment or the environment). What is done afterwards with that image is up to the person processing it (pretty much as in daytime photography).


3 minutes ago, vlaiv said:

@ollypenrice

You have been known to take a snap or two in daylight, right?

Do you start with a completely raw image and process it from scratch, or do you start with what your camera provides?

Good question! I shoot in RAW and, as a Canon user, open the results in Canon's Digital Photo Professional, choose the ones I like, save them as TIFFs and then take them into Photoshop and/or Lightroom, where I'll work on them - but only to refine them. I convert them to the ProPhoto RGB colour space and convert to sRGB for the net versions. Nothing like the level of input that goes into astrophotography. A typical Lightroom adjustment for me, with a single-exposure 'regular' daytime shot, would involve small adjustments to colour temperature, vibrance, hue and saturation, 'texture' (I like that one in LR but don't know what it's really doing) and some selective sharpening. I think a lot of people looking at the 'before' and 'after' might ask, 'What's the difference?'

(As a processing-intensive astrophotographer I'm a sucker for more complicated kinds of daytime photography, notably focus-bracketed macro and HDR composites for some targets.)

I'm useless at the 'spot it and grab it' kind of thing that street photographers do. I have to think about a picture for half an hour before I can take it! :D

Olly


Applying common astrophotography processing steps, I came up with the following result.

[attached image: processed result]

As usual, all processing done in PixInsight

  • Tone mapping: Red to Green, Green to Red and Blue to Blue
  • To get rid of the strong green cast I applied SCNR green, and because there is no magenta in nature, I inverted the image and applied SCNR green a second time. Then inverted back
  • Stretch: arcsinh stretch to keep colour ratios (see the sketch after this list), followed by a curves transformation with a strong S-curve and colour saturation.
  • Multiscale Median Transform to enhance local contrast.
  • Frame added with PixelMath
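For anyone wondering what the arcsinh stretch does: it multiplies R, G and B of each pixel by the same luminance-dependent factor, which is why the colour ratios survive. A minimal numpy sketch of that idea (not PixInsight's implementation):

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=100.0):
    """Arcsinh stretch that preserves colour ratios.

    Each pixel's R, G and B are scaled by the same factor
    asinh(stretch * L) / (L * asinh(stretch)), where L is that pixel's
    luminance, so the ratios between the channels are unchanged.
    rgb : (3, H, W) linear data scaled to [0, 1]
    """
    lum = rgb.mean(axis=0)                      # simple luminance estimate
    eps = 1e-12                                 # avoid division by zero
    factor = np.arcsinh(stretch * lum) / (np.arcsinh(stretch) * (lum + eps))
    return np.clip(rgb * factor, 0, 1)
```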

And, dare I say it: 😉

