
Help please - someone show me I'm rubbish at processing!


edarter


I always found I could not get as good an initial stretch in PS as everybody else. Then I got Affinity, plus (this sounds like an advert, but it isn't!) the JR Astrophotography macro plugins for Affinity, which give me six stretch options that just run; you can keep the result or try a different one, so you can be done with that in a couple of minutes!
I then export the image from there and refine it in PS, using mostly levels and curves, saturation and hue.

StarNet++ V2 is great for getting a starless image, as is GraXpert.
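For the curious, the sort of "first stretch" those macros automate is not magic; a basic asinh-style stretch is a couple of lines of numpy. This is an illustrative sketch only (not the JR macros' actual code), and `strength` is a made-up tuning knob:

```python
import numpy as np

def asinh_stretch(img, strength=50.0):
    """Arcsinh stretch: lifts faint signal hard, compresses highlights.

    Illustrative sketch only; assumes a linear image already scaled
    to [0, 1]. Maps 0 -> 0 and 1 -> 1 by construction.
    """
    return np.arcsinh(strength * img) / np.arcsinh(strength)
```

To keep such a stretch colour-preserving, the usual trick is to compute the stretch factor on a luminance estimate and multiply all three channels by that same factor, rather than stretching each channel independently.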


40 minutes ago, Laurieast said:

I always found I could not get as good an initial stretch in PS as everybody else. Then I got Affinity, plus (this sounds like an advert, but it isn't!) the JR Astrophotography macro plugins for Affinity, which give me six stretch options that just run; you can keep the result or try a different one, so you can be done with that in a couple of minutes!
I then export the image from there and refine it in PS, using mostly levels and curves, saturation and hue.

StarNet++ V2 is great for getting a starless image, as is GraXpert.

Why export to PS from Affinity, when you can do it all in there? No need for PS… 👍🏼


16 minutes ago, Stuart1971 said:

Why export to PS from Affinity, when you can do it all in there? No need for PS… 👍🏼

I just prefer the appearance when using levels, curves etc.; it's easier to see what effect you're having.

A less cluttered view, I would say.

Edited by Laurieast

15 hours ago, wimvb said:

Some of the unusual gradients may be because this is a mosaic. If gradients aren't removed from each panel before mosaic composition, they can in effect repeat in the final image. And given that this is a very difficult target for gradient removal to begin with, that may well be what we are all seeing.

Just thinking about this comment, would you recommend removing gradients on each of the frames before the final stack then?


2 hours ago, edarter said:

Just thinking about this comment, would you recommend removing gradients on each of the frames before the final stack then?

It depends on how you create the mosaic. If you first create two masters that are to be combined, it seems better to remove gradients from each. But if you stack all subs at once, this becomes harder. Pixinsight could do it, because it allows processes to be applied to groups of images. But I’ve never seen it done that way.
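For anyone wanting to experiment outside dedicated tools, the per-panel idea can be sketched in numpy. This is a toy illustration assuming a first-order (planar) gradient; real tools such as GraXpert, Siril or APP fit higher-order models and mask out stars and nebulosity first:

```python
import numpy as np

def remove_gradient(panel):
    """Fit a plane z = a*x + b*y + c to the panel by least squares
    and subtract it, keeping the median background level."""
    h, w = panel.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Design matrix for the least-squares plane fit
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, panel.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return panel - plane + np.median(plane)
```

Running something like this on each panel before combining them is what stops the individual gradients repeating across the final mosaic.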


22 hours ago, ONIKKINEN said:

then gradient removal in Graxpert,

I was of the understanding that the background extraction capabilities of Graxpert are now integrated into Siril. So am I missing something here?


57 minutes ago, AstroMuni said:

I was of the understanding that the background extraction capabilities of Graxpert are now integrated into Siril. So am I missing something here?

I haven't heard about this, do you have a link to some reading? I have an old version installed and that still has the old extraction tool, but this would be great news if it's the case for a new version.


13 minutes ago, ONIKKINEN said:

I haven't heard about this, do you have a link to some reading? I have an old version installed and that still has the old extraction tool, but this would be great news if it's the case for a new version.

Check this out... It's in release 1.0.2: https://siril.org/download/2022-05-16-siril-1.0.2/

And the updated gradient removal tutorial https://siril.org/tutorials/gradient/

Edited by AstroMuni

6 hours ago, wimvb said:

It depends on how you create the mosaic. If you first create two masters that are to be combined, it seems better to remove gradients from each. But if you stack all subs at once, this becomes harder. Pixinsight could do it, because it allows processes to be applied to groups of images. But I’ve never seen it done that way.

Ah well... there is a story there as well!  I use APP to stack, and initially I was stacking panel 1 across all 6 sessions, saving that (unedited), doing the same for panel 2, then combining them as a mosaic afterwards. When I was not getting the results I expected from my processing I initially thought the stack was causing the issue, so I reverted to treating each session separately: making a panel 1 and panel 2 mosaic for each session (again not edited in any way) and then combining the 6 resulting images in a final stack. Didn't seem to make any difference 😕  It is something I could go back to though.

Ed


3 hours ago, carastro said:

Thanks Laurence for your help, I have downloaded it now.

I had already processed the image from the Tiff file, here it is, done in Photoshop and a little in Images Plus.

Carole 

Heart_mozaic_0120_and_0329_removed-session_1-NoSt Proc IP2.png

You have got way more out of that in PS than I have managed to, Carole! It just underlines the fact that my issue is my processing skills, not the data.


With regard to APP and mosaic panels, after much experimentation I create each panel stack separately then combine them in a separate integration, adjusting the LNC and MBB parameters accordingly to get the best overall result. As I am usually only integrating 4 or 6 panels, this process does not take very long so you can try different settings and home in on the optimum quite quickly.

Here is my effort on your unstretched FITS file: background neutralisation in PI, an initial histogram transformation in PI, then star separation with StarXTerminator, a Curves transformation on the starless image and a Morphological Transformation on the stars image to reduce them slightly. Recombined with PixelMath, then NoiseXTerminator applied. Some further minor adjustments in Affinity Photo using Levels and HSL.
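The PixelMath recombination step after star separation is typically either a straight add or a screen blend. In numpy terms (a hedged sketch, not the actual expression used here):

```python
import numpy as np

def screen_blend(starless, stars):
    """Screen blend of a starless image with its extracted stars.

    Assumes both images are scaled to [0, 1]; the result is always at
    least as bright as either input, so stars sit naturally on top of
    the stretched nebulosity without hard clipping.
    """
    return 1.0 - (1.0 - starless) * (1.0 - stars)
```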

I tried to create a pseudo HST palette version but there is not enough blue or green signal to make this work.

I think the data is pretty decent.👍

Image03AP.thumb.jpg.a466491412cd3d8a9833dfcbb43b5a35.jpg


5 hours ago, tomato said:

I tried to create a pseudo HST palette version but there is not enough blue or green signal to make this work.

I don't know, I'm always up for a challenge when I can't image. :D

I struggled a bit with the yellow because the outline of the Heart & the centre of the Fishhead are quite bright when done like this, but it didn't come out too bad. Maybe a little darker overall would have worked better, now I see it on here. 

Edit: I darkened the background a little, applied a little Topaz Sharpen AI and reworked the stars. It looks much better now. :D

edarter-Hubble.png.d68be56169636f64fc9ee17e8cab3b09.png

Edited by Budgie1
Image updated.

16 hours ago, edarter said:

You have got way more out of that in PS than I have managed to, Carole! It just underlines the fact that my issue is my processing skills, not the data.

Well, I have been imaging for over 10 years, so I've probably had a lot more practice than you. If you use Photoshop you could take a look at some of my YouTube tutorials. You can access them from my website in my signature.

or here: https://sites.google.com/view/astrophotography-carole-pope/video-tutorials?authuser=0

Carole

Edited by carastro

On 18/06/2022 at 05:10, edarter said:

Hi,

Would there be anyone out there happy to have a go at processing a dataset I have of the Heart Nebula?

I'm not particularly great at image processing but I get by; however, this one is giving me real trouble! No matter what I do in Photoshop I cannot seem to get anything resembling a half-decent image from the stack, especially given the integration time (about 5hrs with an astro-modded Canon 600D and my 130PDS). I would post a result of my attempts, but to be honest I've never got far enough into the processing to be happy enough to save it. Things go wrong pretty much from the off with initial stretches. I've tried gentle stretches, aggressive ones, colour-preserving arc hyperbolic etc., and the end result always looks VERY red with an incredibly washed-out background. Any attempt at colour balance/correction or gradient removal sees a lot of the nebula just vanish. I'm stumped as to what I'm doing wrong. APP gives a slightly better result, but VERY noisy, and I just get nowhere with StarTools.

So if anyone is up for a challenge then please let me know, I would be genuinely intrigued to see what could be done with it compared to my half baked attempts!

Thanks
Ed

YouTube videos would be a lot of help, buddy.


On 20/06/2022 at 00:03, Stuart1971 said:

Why export to PS from Affinity, when you can do it all in there? No need for PS… 👍🏼

Having delved into it more, and several YouTube videos later, I'm coming to the conclusion you are right! 👍
So much is hidden in there, and the stacking seems better than DSS.


Hi,

As others alluded to, it appears that the dataset's channels were heavily re-weighted. So much so that it appears to me the only usable luminance signal resides in just the red channel. I'm not sure at what point this is happening, but this is the root of the issue (on the StarTools website you can find a guide with regards to preparing your dataset/signal for best results, which should hold for most applications).

If you wish to "rescue" the current stack, you can extract the red channel and use it as your luminance (which I think some others have done here), while using the entire stack for your coloring. The gradients then, while troublesome, become fairly inconsequential, as they mostly affect the channels that were artificially boosted - they will only appear in the coloring where the luminance signal allows them to. Processing then becomes more trivial.
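The red-channel rescue described above can be sketched in a few lines of numpy. A hedged illustration, assuming a float H×W×3 stack; `red_as_luminance` is a made-up helper name, and a real luminance composite (e.g. in the Compose module, or PI's LRGBCombination) is more sophisticated than this:

```python
import numpy as np

def red_as_luminance(rgb):
    """Use the red channel as synthetic luminance while keeping the
    full stack's per-pixel colour ratios for the chrominance."""
    lum = rgb[..., 0]
    ratios = rgb / np.maximum(rgb.sum(axis=-1, keepdims=True), 1e-12)
    # Scale the colour ratios by the red-channel luminance
    return ratios * lum[..., None]
```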

In the visual spectrum, HII regions like this are typically colored a reddish-pink due to a mixture of emissions at different wavelengths (not just H-alpha!). Pure red is fairly rare in outer space (areas filtered by dust are probably the biggest exception). The use of a CLS filter will usually not change this - it just cuts a part of the spectrum (mostly yellows). What a CLS filter will change, however, is the ability for some color balancing algorithms (including PCC) to come up with anywhere near correct coloring.

Though you cannot expect very usable visual spectrum coloring from a CLS filter, you can still use a CLS filter to create a mostly useful bi-color (depending on the object) that roughly matches visual spectrum coloring, by mapping red to red, and green + blue to green and blue, e.g. R=R, G=(G+B)/2, B=(G+B)/2. This will yield a pink/cyan rendition that tends to match Ha/S-II emissions in red, and Hb/O-III emissions in cyan/green fairly well, as it would appear in the visual spectrum.
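In numpy terms, the remap described above is simply (a minimal sketch, assuming a float H×W×3 image):

```python
import numpy as np

def cls_bicolor(rgb):
    """Bi-colour remap for CLS-filtered data: R=R, G=B=(G+B)/2."""
    out = rgb.copy()
    gb = 0.5 * (rgb[..., 1] + rgb[..., 2])
    out[..., 1] = gb
    out[..., 2] = gb
    return out
```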

In general, if you are just starting out in AP processing, these days it is worth tackling an important issue upfront: the question of whether you are okay with introducing re-interpreted, deep-faked detail that was never recorded (and cannot be corroborated in images taken by your peers), or whether you are just in it to create something plausible for personal consumption. Some are okay with randomly missing stars and noise transformed into plausible detail that does not exist in reality; others are definitely not. Depending on your goals in this hobby, resorting to inpainting or augmenting/re-interpreting of plausible detail by neural-hallucination-based algorithms as part of your workflow can be a dead end, particularly if you aspire to practice astrophotography (rather than art) and hope to some day enter photography competitions, etc.

FWIW, here is a simple and quick-ish workflow in StarTools that stays mostly true to the signal as recorded to get you started, if useful to you;

In the Compose module, load the dataset three times, once each for red, green and blue. Set Green and Blue total exposure to 0. Keep the result; you will now be processing a synthetic luminance frame that consists entirely of the red channel's data, while using the color from all channels.

--- AutoDev

To see what we're working with. We can see heavily correlated noise (do you dither?), stacking artefacts, heavily varying star shapes, some gradients.

--- Bin

To reduce oversampling and improve signal.
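Software binning is simple enough to show in full. Here is a plain 2x2 average-bin sketch in numpy (the Bin module offers fractional binning, so treat this as the basic idea only):

```python
import numpy as np

def bin2x2(img):
    """Average each 2x2 block: halves the resolution, and for
    uncorrelated noise improves per-pixel SNR by sqrt(4) = 2."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2  # trim odd edges
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```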

--- Crop

Crop away stacking artefacts.

--- Wipe

Dark anomaly filter set to 3px; you can use correlation filtering to try to reduce the adverse effects of the correlated noise.

--- AutoDev

AutoDev with an RoI that includes Melotte 15 and IC1795/NGC896; increase the Ignore Fine Detail parameter until AutoDev no longer picks up on the correlated noise. You should notice full stellar profiles visible at all times.

--- Contrast

Equalize preset

--- HDR

Optimize preset. This should resolve some detail in Melotte 15 and NGC896.

--- Sharp

Defaults.

--- SVDecon

The stars are quite heavily deformed in different ways depending on location (i.e. the point spread function of the detail is "spatially variant"). This makes the dataset a prime candidate for restoration through spatially variant PSF deconvolution, though expectations should be somewhat tempered due to the correlated noise. Set samples across the image, so SVDecon has examples of all the different star shapes. Set Spatial Error to ~1.4 and PSF Resampling to Intra-Iteration + Centroid Tracking Linear.

You should see stars coalesce into point lights better (and the same happening to detail that was "smeared out" in a similar way, now being restored). It's not a 100% fix for misshapen stars, but it is a fix based on actual physics and recorded data, as far as the SNR allows. Improved acquisition is obviously the more ideal way of taking care of misshapen stellar profiles, though.
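To make the principle concrete, here is a minimal spatially *invariant* Richardson-Lucy deconvolution in numpy. This is only a sketch of the underlying idea; SVDecon itself varies the PSF across the frame and guards against noise propagation, which this toy version does not:

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via FFT, with the kernel centred so the
    image is not shifted."""
    pad = np.zeros_like(img)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(img, psf, iters=20):
    """Classic Richardson-Lucy: iteratively redistribute flux so the
    estimate, blurred by the PSF, matches the observed image."""
    est = np.full_like(img, img.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        conv = np.maximum(fft_convolve(est, psf), 1e-12)
        est = est * fft_convolve(img / conv, psf_flip)
    return est
```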

--- Color

The processed synthetic luminance is now composited with the coloring. The module yields the expected visual-spectrum-without-the-yellows (strong orange/red and blue) result, consistent with a CLS filter. As mentioned above, you can opt for a Bi-Color remapping (Bi-Color preset). You should now see predominantly HII red/pink nebulosity, with hints of Hb/O-III blue, as well as some blue stars.

--- Shrink

Shrink makes stars less prominent (but never destroys them). Defaults. You may wish to increase Deringing a little if Decon was applied and ringing is visible.

--- Super Structure

Super Structure Isolate preset, with Airy Disk Radius 10% (this is a widefield). This pushes back the noisy background, while retaining the superstructures (and their detail). You could do a second pass with the Saturate preset if desired.

--- Switch Tracking off

Perform final noise reduction to taste. Defaults were used for expediency/demonstration, but it should be possible to mitigate the correlated noise better (ideally it should not be present in your datasets at all though!).

You should end up with something like this;

EdArter_heart.thumb.jpg.d9bda0068493199fe5a81ec71eef9bf0.jpg

Hope this helps!

On 23/06/2022 at 05:30, jager945 said:

Hi,

As others alluded to, it appears that the dataset's channels were heavily re-weighted. So much so that it appears to me the only usable luminance signal resides in just the red channel. I'm not sure at what point this is happening, but this is the root of the issue (on the StarTools website you can find a guide with regards to preparing your dataset/signal for best results, which should hold for most applications).

Jager - thank you so much for this! I really appreciate that you have not only given the steps needed, but explained why and what to look out for. I'm going to give this a go over the weekend and see what I get.

Thanks
Ed


Just to say once again a big thank you to everyone who took the time to have a go at processing my data. I am now 'happy' that it's more to do with my processing skills than the data itself, so I'm knuckling down and trying to get better at it, both with the apps I already have and by trying others out.

Thanks
Ed


  • 2 weeks later...

For those interested, I have a bit of an update on this. I've been experimenting with StarTools and PixInsight and my images from each are below. I'm definitely further along than I was getting in Photoshop, but there are still lots of improvements that could be made. Of the two I feel more comfortable with PixInsight, and I'm happier with the resulting image, but that's not to say that more time put into understanding StarTools wouldn't improve that as well.
I'm pleasantly surprised that I've managed to get a PI image so quickly, having read the many, many posts about the learning curve! I'm sure experts will look at it in horror, but it's the start of a journey for me :)

StarTools version:
image.thumb.jpeg.ea1cf48b8f9a3b8cdff065108cd9a5d1.jpeg

PixInsight version:
image.thumb.jpeg.ab97a9868c9d20394f67505bf318df4f.jpeg

