
Really struggling to get a good image of the Horsehead & Flame Nebulae



My processing was on the no-filter colour stack: crop, background extraction, background neutralisation, colour calibration, histogram and masked stretch, noise reduction using ACDNR, and a colour saturation boost using Curves.

What scope?


8 hours ago, Budgie1 said:

I took your 5h57m stack and put it through my normal workflow in PI. After stretching, I removed the stars and processed the background & nebula separately from the stars, then put the background through Topaz AI Gigapixel to further reduce the noise and sharpen, before adding the slightly reduced stars back into the image.

When you remove the stars (I assume in PI with StarNet), do you still have the big ones like Alnitak to contend with?

I had never heard of Topaz AI Gigapixel; I've just looked at their website now. Does it really do what they say it can, and can this not be done with other software?

Steve


9 hours ago, gorann said:

Just to let you know that using PS and 16-bit does the trick more than well enough; that's what a big bunch of us are using and are happy with, whatever vlaiv tells you, and it can even give you APODs and IOTDs. If you know PS, stick to it - PI is a very different animal (I do cherry-pick some processes in PI). Or what do you think @ollypenrice.....

I think Photoshop is the world's leading graphics program for good reasons. Of course it isn't astro-specific and, like you, I cherry-pick some PI routines, but Photoshop's genius lies in its user interface, which turns its underlying mathematics into metaphors from the days of darkroom and printing. I like this because I am making a picture which I am going to look at with my eyes. In Ps I can use my eyes to do this.  I can make adjustments in real time and see them in front of me.  This seems logical to me and suits the way my mind works. (This contrasts with Pixinsight whose user interface turns its underlying mathematics into visible mathematics.)

You can get comparable results in either program, the question being, 'Which do you prefer?'  I like being in the Photoshop environment and I do AP for enjoyment, so that's my choice.  The key thing in processing is to do different things to different parts of the image and both programs allow that. As for 32 bit, I struggled to see any difference when we all moved from 8 bit to 16, quite honestly. If 32 bit is essential, can we see the proof in the form of two images side by side?

Olly


6 minutes ago, teoria_del_big_bang said:

When you remove the stars (I assume in PI with StarNet), do you still have the big ones like Alnitak to contend with?

I had never heard of Topaz AI Gigapixel; I've just looked at their website now. Does it really do what they say it can, and can this not be done with other software?

Steve

The star removal was done in PI, but using StarXTerminator instead of StarNet.

I happen to still have the starless images on my PC, so here is the original starless image after processing, and then after putting it through Topaz AI Gigapixel.

Topaz is a game changer for cleaning up images in a quick and easy way, but you do have to be careful that you don't overdo it because it can add detail that isn't there. ;)

Can you do it with other software? I expect so, but this makes it a quick and easy process.

[Image: the processed starless background image]

[Image: the same starless image after Topaz AI Gigapixel]


53 minutes ago, Budgie1 said:

Topaz is a game changer for cleaning up images in a quick and easy way, but you do have to be careful that you don't overdo it because it can add detail that isn't there. ;)

Which it clearly has done here in the Flame... apparently (so I've read elsewhere) it's trained on faces and hair, so when it sees dark filamentary lanes, as here, it adds detail. Not a criticism, Martin, just an observation. Very good for cleaning up noise in low-signal areas, but to me it's pretty clear when folk use it for sharpening, as it gives fine filamentary detail in images with huge misshapen stars.


3 hours ago, Laurin Dave said:

My processing was on the no-filter colour stack: crop, background extraction, background neutralisation, colour calibration, histogram and masked stretch, noise reduction using ACDNR, and a colour saturation boost using Curves.

What scope?

 

I'm not sure of the translation of some of those - what do background extraction and neutralisation mean in PS?

My normal process is to crop the image first, then run GradientXTerminator. I usually run HLVG later on in the processing, as I've realised that if you do it to begin with it can remove detail or colours. Then I do linear stretches and curve adjustments.

I did have the demo version of StarXTerminator, which is great, and I think I'll have to buy it as it really allows you to pull out the nebulosity without affecting the stars.

My scope is the Z73 at f/5.9.


2 hours ago, teoria_del_big_bang said:

When you remove the stars (I assume in PI with StarNet), do you still have the big ones like Alnitak to contend with?

I had never heard of Topaz AI Gigapixel; I've just looked at their website now. Does it really do what they say it can, and can this not be done with other software?

Steve

I've used Topaz DeNoise (with great results) on some of my wildlife photography but haven't used the other programs.


34 minutes ago, smr said:

 

I'm not sure of the translation of some of those - what do background extraction and neutralisation mean in PS?

My normal process is to crop the image first, then run GradientXTerminator. I usually run HLVG later on in the processing, as I've realised that if you do it to begin with it can remove detail or colours. Then I do linear stretches and curve adjustments.

I did have the demo version of StarXTerminator, which is great, and I think I'll have to buy it as it really allows you to pull out the nebulosity without affecting the stars.

My scope is the Z73 at f/5.9.

They're PixInsight processes... Background Extraction performs a similar function to GradientXTerminator; Background Neutralisation sets the colour of a chosen piece of background sky to equal R, G and B; Colour Calibration sets the average star colour in the image to white. Like you, I leave HLVG (or the PI equivalent, SCNR green) till later in the process, as doing it too early can mess with colour. ACDNR I would say is roughly equivalent to Noel's DeepSkyNoiseRedn and ColourBlotchReduction (if you have those). Histogram Stretch is Levels.
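
Those two colour steps map fairly directly onto simple array operations, so here's a minimal numpy sketch of the idea - background neutralisation from a chosen patch of empty sky, and a white reference taken from the average star colour. The array shapes, patch coordinates and the simple per-channel scaling are illustrative assumptions, not PixInsight's actual implementation.

import numpy as np

def neutralise_background(img, patch):
    """Scale R, G, B so a chosen patch of empty sky has equal channel means.

    img   : float array, shape (H, W, 3), linear data in [0, 1]
    patch : (y0, y1, x0, x1) region containing only background sky
    """
    y0, y1, x0, x1 = patch
    bg = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # per-channel background level
    return img * (bg.mean() / bg)                       # bring all channels to a common grey

def colour_calibrate(img, star_mask):
    """Scale channels so the average star colour comes out neutral (R = G = B)."""
    stars = img[star_mask].mean(axis=0)                 # mean colour of the star pixels
    return img * (stars.mean() / stars)

# Quick demo on synthetic data with a deliberate colour cast in the sky:
rng = np.random.default_rng(0)
img = rng.random((100, 100, 3)) * 0.01 + np.array([0.02, 0.03, 0.05])
img = neutralise_background(img, (0, 50, 0, 50))
print(img[0:50, 0:50].reshape(-1, 3).mean(axis=0))      # channel means are now roughly equal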

In my processing I switch between PixInsight, Photoshop and APP, depending on what I'm doing and which tool is best for doing it.

HTH

Dave


3 hours ago, ollypenrice said:

If 32 bit is essential, can we see the proof in the form of two images side by side?

That is rather easy to do and you can do it yourself.

How much difference you'll see depends on your equipment and style of imaging. The largest difference will be if you have high dynamic range subs and you stack lots of them.

Take any of your old images with a decent number of stacked subs - like 100+ - with something faint in the image, like IFN.

Take the 32-bit linear stack, make a copy and convert the copy to 16-bit.

Now you have the same data as both 32-bit floating point and 16-bit integer. Process the 32-bit version, but repeat each step on the 16-bit image (in GIMP it is easy, as it remembers each stretch or action as the "last used preset"; I guess PS can do the same).

At some point you'll start to see that the faintest parts are not the same: 16-bit will have more grain.
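
This comparison is also easy to script. Below is a minimal numpy sketch of the effect, with a synthetic faint ramp standing in for IFN and a single gamma stretch standing in for the processing - the numbers are purely illustrative, not either program's actual maths.

import numpy as np

# Synthetic linear "stack": a very faint ramp, like IFN in a deep image.
faint = 2e-4 * np.linspace(0, 1, 2000) + 1e-4   # spans only a dozen or so steps of 1/65535

f32 = faint.astype(np.float32)                          # "32-bit float" copy
u16 = np.round(faint * 65535).astype(np.uint16)         # "16-bit integer" copy
back16 = u16.astype(np.float64) / 65535                 # back to [0, 1] for processing

def stretch(img, gamma=0.1):
    """Stand-in for repeated Levels/Curves stretches."""
    return np.power(img, gamma)

print("Distinct levels after stretch, 32-bit:", np.unique(stretch(f32)).size)
print("Distinct levels after stretch, 16-bit:", np.unique(stretch(back16)).size)
# The 16-bit copy collapses the faint ramp onto a dozen or so levels;
# that posterisation shows up as the extra grain described above.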


2 hours ago, Laurin Dave said:

Which it clearly has done here in the Flame... apparently (so I've read elsewhere) it's trained on faces and hair, so when it sees dark filamentary lanes, as here, it adds detail. Not a criticism, Martin, just an observation. Very good for cleaning up noise in low-signal areas, but to me it's pretty clear when folk use it for sharpening, as it gives fine filamentary detail in images with huge misshapen stars.

In the Topaz suite there are three programs: Gigapixel is basically an all-in-one which will clean the noise & sharpen the image, then there are DeNoise & Sharpen, which do what they say. On all of these you can use the AI or manual settings to get the level of effect you want. Sometimes Gigapixel will do a good job on an image, others need the individual software, so it's down to personal taste, as it is with the other processing software.

As I said above, you do have to be careful with it because it can introduce detail which isn't there if you're heavy-handed with the corrections, especially when sharpening. It also applies the corrections to the whole image, so you need to check the whole image and not just the part you're zoomed in on. I haven't looked into whether you can use masks with it yet.

I've tried not to let it add detail to the filamentary lanes in any of the images I've done because that's not what I want. I do feel it does a good job of sharpening detail which is slightly blurred or out of focus. I also only use Topaz on starless images because it does affect the stars.


Slightly OT, but the skies may be clear tonight and I'm wondering whether I should image with my L-eNhance filter or not (re the Moon) - what would you suggest? It's an 80 percent Moon and it's sort of close to the Horsehead.

The only issue is that I have to unscrew the camera to add the filter inside the field flattener, so when I screw the camera back on it's going to be at a different angle to the previous sessions, i.e. the 6-hour stack was two sessions (with and without the filter), hence the differing angles and required cropping.


14 minutes ago, smr said:

Slightly OT, but the skies may be clear tonight and I'm wondering whether I should image with my L-eNhance filter or not (re the Moon) - what would you suggest? It's an 80 percent Moon and it's sort of close to the Horsehead.

The only issue is that I have to unscrew the camera to add the filter inside the field flattener, so when I screw the camera back on it's going to be at a different angle to the previous sessions, i.e. the 6-hour stack was two sessions (with and without the filter), hence the differing angles and required cropping.

Personally, with that Moon I would add the filter and go for 10-minute (or longer, if guiding is up to it) subs.
Regarding the angle: before I got my rotator (which is a great help), if I had to remove the camera, or anything in the optics, I would put a small piece of masking tape somewhere I could draw a line across the gap between them, so I could line them back up when re-assembling.
OK, not accurate to seconds, but it usually got me back to within a degree, so minimal cropping was required.

Steve 


1 hour ago, teoria_del_big_bang said:

Personally, with that Moon I would add the filter and go for 10-minute (or longer, if guiding is up to it) subs.
Regarding the angle: before I got my rotator (which is a great help), if I had to remove the camera, or anything in the optics, I would put a small piece of masking tape somewhere I could draw a line across the gap between them, so I could line them back up when re-assembling.
OK, not accurate to seconds, but it usually got me back to within a degree, so minimal cropping was required.

Steve 

Thanks Steve

Not sure my guiding is up to 10 mins but will try 5.

Actually, I've just worked out how to install the filter without changing the camera rotation: I can unscrew the camera from the 360 rotator, so the rotator stays fixed in place, pop the filter in and then screw it back on. Your post helped inspire that!


1 hour ago, teoria_del_big_bang said:

Personally, with that Moon I would add the filter and go for 10-minute (or longer, if guiding is up to it) subs.
Regarding the angle: before I got my rotator (which is a great help), if I had to remove the camera, or anything in the optics, I would put a small piece of masking tape somewhere I could draw a line across the gap between them, so I could line them back up when re-assembling.
OK, not accurate to seconds, but it usually got me back to within a degree, so minimal cropping was required.

Steve 

10-minute subs should not be necessary with CMOS. Sure, if guiding is up to it then it won't hurt, but as long as the sub length allows the sensor read noise to be swamped by the background sky level, there is little difference between stacks of short subs and stacks of longer ones.
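
The "read noise swamped by sky" condition can be turned into a rough number. A common rule of thumb is to expose until the sky background contributes several times the read-noise variance per pixel; here is a sketch with purely illustrative figures - the read noise, sky rates and the factor of 10 are assumptions, not measurements from anyone's rig.

# Rule-of-thumb minimum sub length: sky electrons per pixel >= k * read_noise**2,
# so that sky shot noise dominates the sensor read noise.

def min_sub_length(read_noise_e, sky_rate_e_per_s, k=10):
    """Seconds needed for the sky signal to reach k times the read-noise variance."""
    return k * read_noise_e ** 2 / sky_rate_e_per_s

read_noise = 1.5       # e- RMS, roughly what a modern CMOS gives in its high-gain mode
sky_broadband = 2.0    # e-/pixel/s, unfiltered under a moderately bright sky
sky_narrowband = 0.05  # e-/pixel/s, through a narrowband filter

print(f"Broadband : ~{min_sub_length(read_noise, sky_broadband):.0f} s")   # ~11 s
print(f"Narrowband: ~{min_sub_length(read_noise, sky_narrowband):.0f} s")  # ~450 s
# Broadband swamps the read noise in seconds; narrowband takes minutes,
# which is why short subs work unfiltered but NB benefits from longer exposures.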


I have seen this video, but (and it's only what I have noticed in my imaging, so maybe it does not relate to all setups) while this works for broadband, for NB I get poor results below 5-minute exposures no matter how many frames I take, and I'll generally go for 10 minutes for NB.

Now, maybe that's because I am using ultra-narrowband filters; the bandwidth of an Optolong L-eNhance is not so narrow, so maybe that will work with shorter subs.

Steve


1 hour ago, david_taurus83 said:

Does that camera switch to high gain mode at 100 or at 101?

Not sure it comes with a high gain mode; I think I read that the Altair camera does, but I haven't heard anything about a high gain mode on the 2600MC Pro.

Re sub length - I've just assumed that as long as the data is off the left-hand side of the histogram, i.e. not clipping, it doesn't really matter what sub length you choose? Which is why I always shoot at 3 minutes, filter or no filter: the resulting histogram in APT looks the same - off the LHS, not by much, but off it - and when I stack in DSS the resulting TIFF shows a histogram and data which hasn't been clipped either.
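
If you want a number rather than an eyeball check of the APT histogram, you can count how many pixels actually sit at the bottom of the range in a sub. A small sketch assuming astropy is available - the filename and the 16-bit ADU range are placeholders.

import numpy as np
from astropy.io import fits  # assumes astropy is installed

data = fits.getdata("Light_180s_example.fits").astype(np.float64)  # hypothetical sub

clipped_low = np.mean(data <= 0) * 100        # % of pixels stuck at zero ADU
clipped_high = np.mean(data >= 65535) * 100   # % saturated (bright star cores)

print(f"Low-clipped : {clipped_low:.3f}% of pixels")
print(f"High-clipped: {clipped_high:.3f}% of pixels")
print(f"Median ADU  : {np.median(data):.0f} (want this clear of the left edge)")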

Out of curiosity I have just started doing 5 minute subs though, to see if my guiding can handle them and if there's any difference.

 

Edit - oh yeah, I remember reading this a while ago... there is a high gain mode: "When the gain value is 100, the magical HCG high gain mode is turned on, the readout noise is greatly reduced, and the dynamic range is basically unchanged. It is recommended to set the gain to 0 or gain 100 in deep space." - ZWO


2 hours ago, smr said:

Not sure it comes with a high gain mode; I think I read that the Altair camera does, but I haven't heard anything about a high gain mode on the 2600MC Pro.

Re sub length - I've just assumed that as long as the data is off the left-hand side of the histogram, i.e. not clipping, it doesn't really matter what sub length you choose? Which is why I always shoot at 3 minutes, filter or no filter: the resulting histogram in APT looks the same - off the LHS, not by much, but off it - and when I stack in DSS the resulting TIFF shows a histogram and data which hasn't been clipped either.

Out of curiosity I have just started doing 5 minute subs though, to see if my guiding can handle them and if there's any difference.

 

Edit - oh yeah, I remember reading this a while ago... there is a high gain mode: "When the gain value is 100, the magical HCG high gain mode is turned on, the readout noise is greatly reduced, and the dynamic range is basically unchanged. It is recommended to set the gain to 0 or gain 100 in deep space." - ZWO

As is discussed in the video linked above, the optimum minimal sub length depends on your equipment and sky conditions. 3 minutes is probably in the right ballpark for you with the L-eXtreme.

In broadband, you'd probably find the optimal minimum sub length drops to somewhere around 20 seconds; however, if 3 minutes is working for you in terms of mount performance, the number of images affected by satellite trails, etc., then there's no burning reason to go shorter - just know that you can if you want or need to.


12 hours ago, The Lazy Astronomer said:

As is discussed in the video linked above, the optimum minimal sub length depends on your equipment and sky conditions. 3 minutes is probably in the right ballpark for you with the L-eXtreme.

In broadband, you'd probably find the optimal minimum sub length drops to somewhere around 20 seconds; however, if 3 minutes is working for you in terms of mount performance, the number of images affected by satellite trails, etc., then there's no burning reason to go shorter - just know that you can if you want or need to.

Thanks, I didn't realise I could go that short without the filter. Shorter would definitely be better, as there would surely be more keepers in a stack of 1-minute subs than 3-minute subs?


On my ZWO 071 cooled camera I generally use fairly high gain (200 on that camera, which has a max of 240) and 2-minute subs when using the L-eNhance filter, but I'd generally use 30-60 seconds if I'm only using a UV/IR filter. This works fairly well for me.


13 minutes ago, smr said:

Thanks, I didn't realise I could go that short without the filter. Shorter would definitely be better, as there would surely be more keepers in a stack of 1-minute subs than 3-minute subs?

If you feel like you're throwing away a high number of subs, then yeah, maybe try shorter exposure times.

I usually use 30 - 60 seconds for broadband too (ZWO 294MM).


I've got 16 hours on this so far. Just having a look at the data; it's starting to pick up some brown dust, which is nice. Any advice on being able to pull the brown dust out a bit without introducing noise? I'm going to buy StarXTerminator later on - I tried the demo version and was very impressed with it - so that might help stretch the data and give some processing flexibility. This is just a rough process to see what is in the data, really.

 

[Image: rough processing of the 16-hour Horsehead & Flame stack]

 

 


Take your time with the stretch and do it in increments, maybe even switch between Levels & Curves. 
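
One way to think of "in increments" is several gentle stretches with a check of the image in between, rather than one aggressive pull. A minimal numpy sketch of that idea follows - the gamma value and number of passes are arbitrary choices, and this isn't Photoshop's or PixInsight's actual Levels/Curves maths.

import numpy as np

def gentle_stretch(img, gamma=0.8):
    """One mild non-linear stretch on data normalised to [0, 1]."""
    return np.power(np.clip(img, 0, 1), gamma)

def incremental_stretch(img, passes=5, gamma=0.8):
    """Apply several gentle stretches; in practice, inspect the image between passes."""
    for _ in range(passes):
        img = gentle_stretch(img, gamma)
    return img

faint = np.full((4, 4), 0.02)            # faint nebulosity at 2% of full scale
print(incremental_stretch(faint)[0, 0])  # lifted gradually to ~0.28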

I'm happy to put it through PI again, if you want. Been cloudy up here so don't have any of my own data to process. :clouds1:

