
NGC 1499


Rodd

Recommended Posts

Here's another one I was never satisfied with--and I have been over it and over it beyond measure. There are 30 hours of data in this image, collected with the NP101is reduced by 0.8x and the STT-8300. That is one reason why it bothered me--30 hours of data...how much does it take? Anyway--I always ended up with either a cartoon or a bland, featureless blob. I think I have managed to render a fairly realistic image--that is, without a lot of artifacts. The palette is HaSHO--so, being narrowband, many will disassociate it from realism in that respect. So I guess the measuring stick is the "level of animation", or how much of the cartoon element it has. I am still disappointed for 30 hours of data, though I think the details in the crop are interesting.

New image

[image: Image25c.jpg]

Crop

[image: Crop2.jpg]

Old Image

[image: Blend3-TGV-cs-C2.jpg]


4 hours ago, HunterHarling said:

The new version looks much better. 

Lol! My images have suffered this as well. It is interesting how we'll spend hours processing, only to realize we way overdid it.

That is why I save versions as I go (far too many sometimes). But that is how we learn. Olly's "Leave 5-10% of the image on the table" (or something like that) meant nothing to me when I was starting out. Now the meaning is clear as day (I wish). Now, if only I adhered to those words of wisdom!

Rodd


3 hours ago, iansmith said:

Beauty is very much in the eye of the beholder as I don't think either version is cartoony, although I prefer the colours of the new image. Either way it's a great looking picture.

Cheers, Ian

Thanks Ian.  

Rodd


53 minutes ago, souls33k3r said:

Good God man! That's an excellent image. One of the best I've seen.

Mind if I ask which software you used to process the image?

If your answer is PI, mind if I ask for a workflow?

Thank you souls33k3r. Yes, it was PI. My typical workflow looks something like this for narrowband images (much different for broadband images):

1-calibrate using the batch calibration script

2-register all subs to one sub--the sub with the smallest FWHM, whatever filter it happens to be

3-integrate each stack using the strongest-SNR sub in the stack for reference. I use WSC (Winsorized Sigma Clipping) at default settings most often (I don't see much difference between the pixel rejection algorithms)

4-dynamic crop each stack using a dynamic crop icon

MureDenoise comes in here if I use it, which is almost never now, though I used to. I still will for terribly noisy images and/or very faint targets

5-DBE each stack using the same DBE icon, usually generated using the Ha stack (using different DBE instances for each stack may be better--but I am too lazy). I find this works better than using DBE on the SHO image after combination

6-integrate each stack using a sub with the highest SNR for reference.  I don't usually drizzle, but sometimes I do.  I use WSC at default settings typically.  (I don't see much difference between various rejection algorithms)

7-Background neutralization

8-color calibration---I don't use the photometric color calibration tool for narrowband images.

9-initial large stretch using Histogram tool.

10-SCNR green using the average neutral setting at 100%

At this point it becomes more variable. I usually find the left edge of the histogram to be asymptotic to the baseline, very close to the left margin but not touching it. If all the colors are aligned (very rare) I will use TGV denoise at very low settings in RGB mode. If the various colors are not aligned, I use TGV denoise on each color extracted from the image and then reinsert it using channel combination. I end up with aligned colors asymptotic to the baseline a few mm away from the left-hand margin.

I insert the Ha stack (after stretching it to match the histogram of the SHO image as closely as possible) as a luminance layer using channel combination. I used to replace the stars in the SHO image with Ha stars using a star mask and PixelMath--but lately I have not been doing this.

I sharpen using unsharp mask through a star mask, starting with a standard deviation equal to the pixel scale and an amount of 0.35 or less. I follow with instances of progressively smaller standard deviations (2-3 instances, ending at 0.7). And on and on (and on, and on, ad infinitum). After this it is wholly image dependent.
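As a rough sketch of the idea behind that final masked unsharp cascade--not the actual PixInsight tools, just numpy/scipy with assumed sigmas, amounts and a stand-in mask:

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_pass(img, sigma, amount, protect):
    # One unsharp-mask pass; protect is 1 where sharpening is allowed, 0 over stars.
    blurred = gaussian_filter(img, sigma)
    sharpened = img + amount * (img - blurred)
    return protect * sharpened + (1.0 - protect) * img

# Assumed inputs: a stretched (non-linear) mono image in [0, 1] and an inverted star mask.
rng = np.random.default_rng(0)
image = rng.random((512, 512)).astype(np.float32)                  # placeholder for real data
protect = (gaussian_filter(image, 3.0) < 0.6).astype(np.float32)   # crude stand-in for a star mask

# Progressively smaller standard deviations at a low amount (0.35 or less), as described above.
for sigma in (2.0, 1.4, 0.7):
    image = unsharp_pass(image, sigma, amount=0.35, protect=protect)
image = np.clip(image, 0.0, 1.0)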

Rodd

 


11 minutes ago, Rodd said:

My typical workflow looks something like this for narrowband images (much different for broadband images)

Thank you for the very detailed workflow Rodd. Can't thank you enough. School day for me with this one. I pretty much use all the steps that you've described apart from the unsharp mask with a star mask, which I'll start looking into incorporating. What I have an issue with is really the structural detail in the nebulosity; I don't seem to get it to pop out. Do you think unsharp mask will do this, or is there something else in the processing that I need to look into doing?


49 minutes ago, souls33k3r said:

Do you think unsharp mask will do this, or is there something else in the processing that I need to look into doing?

Well, the underlying structures and details have to be there to begin with. If you look closely at an image from before any sharpening and the same image after a lot of processing, including sharpening, you will see that all of the details are there--they are just not as well defined. The contrast is not as variable across transitional boundaries. You can never manufacture details by sharpening. As Olly Penrice recently said, sharpening requires strong signal. If you attempt to sharpen low-signal features, you will generate artifacts. Remember, very slight and subtle changes can transform an image entirely. It does not take much. This is a game of tiny increments once the image has been rendered non-linear.

Rodd


1 minute ago, Rodd said:

As Olly Penrice recently said, sharpening requires strong signal. If you attempt to sharpen low-signal features, you will generate artifacts.

I think you've just hit the nail on the head: "strong signal" and "subtle changes". My only question to you, sir, if I may: how do you go about getting a strong signal? Is there anything in the process to ensure that I end up with only the subs with the highest/best signal?


1 hour ago, souls33k3r said:

My only question to you, sir, if I may: how do you go about getting a strong signal?

Data, data, data--then more data! Also, you need to have some understanding of individual sub exposure times for your camera/scope system. You have to get above the sky fog/noise threshold in each sub--even if it's just by 1 photon, or a million subs won't look any different than 1. For brighter targets this is not difficult. Once your subs are of appropriate duration, just pile on the data. Regardless of which scope or camera I use, I find more data is better. Two things to keep in mind: total exposure time and the number of subs both matter, because signal and noise accumulate at different rates (signal grows linearly with the number of subs, while uncorrelated noise grows roughly as its square root). Ask Vlaiv about this--he will be able to give you more information than you can remember!
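To put rough numbers on the "different rates" point--made-up values only; signal adds linearly with the number of subs while uncorrelated noise adds in quadrature, so SNR grows as the square root of the sub count:

import math

signal_per_sub = 100.0   # assumed target electrons per sub
noise_per_sub = 20.0     # assumed total noise per sub (sky + read), in electrons

for n in (1, 4, 16, 64):
    snr = (n * signal_per_sub) / (math.sqrt(n) * noise_per_sub)
    print(f"{n:3d} subs -> SNR {snr:.0f}")   # 5, 10, 20, 40: doubling SNR takes 4x the subs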

RODD


2 hours ago, Rodd said:

You have to get above the sky fog/noise threshold in each sub--even if it's just by 1 photon, or a million subs won't look any different than 1.

Super sound advice Rodd. You said that I have to get above the sky fog/noise threshold in each sub--how can I evaluate that? I'm now all up for getting in the total exposure time and the number of subs. For example, my IC63 that I posted was at 13 hours of data; that's double the number of hours I had been putting in. Basically, look at it this way: I'm super impressed with your image, there's no shame in admitting that, and I want to reach your level. Your work is just amazing, mate. The clarity in the nebulosity and the sharpness are exactly what I would like to achieve in my images. I am willing to learn but just need some pointers to start off with.

Oh, and I do apologise if my posts are taking over your thread. It is not my intention. I can PM you if you want?


14 minutes ago, souls33k3r said:

You said that I have to get above the sky fog/noise threshold in each sub--how can I evaluate that?

Thanks Souls. Don't worry about the thread--that's what we are here for! To tell the truth, I take a low-tech approach. I choose a sub length and look at my first one. If you can see the target, you are over the needed threshold. After a while you get a feel for at least a range of appropriate sub durations. Narrowband is the easiest--go as long as your guiding can support. Well, not strictly true: 20-30 min is what I do with CCD, and 5 min with CMOS. On a given night, frame your target and start guiding, then take a series of LRGB (or OSC) exposures at 30 sec, 60 sec, 2 min, 5 min, etc. Inspect them carefully. You will know which is best.

BTW--I feel exactly like you when I look at other people's images. Be warned: if you are like me in that respect, you will never be satisfied. Perfection is the goal, and perfection can't be achieved. But if you are dedicated and diligent (make sure not to miss that 2-hour clear window between 2 and 4 am...ever), and assess your own images with an uncompromising attitude (look for problems...if it's not right, it's not right--if it doesn't look right to you, it isn't right), your images will improve rapidly.

One piece of advice I can give, advice I did not take, is to take your time posting images. Step away. It is amazing how images change overnight (usually for the worse). Without exception, every time I have posted an image on a forum--including Astrobin--I have wanted to repost it within a very short period of time because I made an improvement.
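One way to put a number on "above the sky fog/noise threshold" is the usual rule of thumb that the sky background recorded in a single sub should swamp the camera's read noise. A sketch with every value assumed for illustration--measure your own sky rate from a real sub and look up your camera's read noise:

read_noise_e = 3.5      # assumed read noise in electrons
sky_rate = 0.8          # assumed sky background, electrons per pixel per second
swamp_factor = 10.0     # ask for sky variance ~10x read-noise variance (a common rule of thumb)

# sky_rate * t >= swamp_factor * read_noise_e**2  ->  minimum useful sub length:
min_sub = swamp_factor * read_noise_e ** 2 / sky_rate
print(f"minimum sub length ~ {min_sub:.0f} s with these made-up numbers")   # ~153 s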

Rodd


17 minutes ago, Rodd said:

I choose a sub length and look at my first one. If you can see the target, you are over the needed threshold.

You're a true gentleman Rodd.

I have the same CMOS as you do, and with my smaller refractor I've been limiting myself to 5 min exposures; with the SCT I have gone down to 3 min exposures, but I do aim to get the cable management done properly so that I can squeeze a few more minutes into my subs. I must admit, I don't throw away any of my data; I always take it all and stack it, and I guess that's my biggest mistake. I don't Blink my images (another fault, I guess), and I did look at the SubframeSelector at one point but then didn't go with it. I think I should. The only problems I do look out for are whether the image has round stars and whether a sub contains nothing but clouds, but what I really look at is the stacked result. It is almost certain to show me a bright enough object.

By way of example, when you end up with stacked Ha, OIII & SII images, do you see (almost) the same level of sharpness in your nebula details (which you then top up with unsharp mask) as you would see in the final result, or is it blurred out? The reason I ask is that maybe my starting point is flawed (and it most certainly looks like it), so anything after that is me fighting a losing battle?

Hahaha, you're right, I'll never be satisfied, but at least I have a goal that I know I have to achieve first before taking it to the next level. I would do anything, and I literally mean ANYTHING, to learn the way you process your images, so any advice is much appreciated. You have no idea how much you've helped me in this regard today, and I hope to learn more from you (I just hope you don't see me as a bug :D )

 


8 minutes ago, souls33k3r said:

I have the same CMOS as you do, and with my smaller refractor I've been limiting myself to 5 min exposures.

From one bug to another--with the ASI1600, the mantra is a lot of short subs as opposed to a few long ones. My current setup may be unusual in that I am using 3nm filters at F3--so I might be losing some signal through a shift of the bandpass. Astrodon told me I would be OK--but F3.5 is what the literature says. I don't know; others would be able to give you the theory and numbers. But for LRGB imaging I use 30 sec and 60 sec exposures (sometimes 120 sec). In the end, with the ASI1600 you end up taking a lot of subs, but the overall integration time remains about the same. For LRGB you can use 30-60 sec exposures, and if you take enough of them you will get high SNR.
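A sketch of why lots of short subs can land close to a few long ones for the same total time--the only per-sub penalty is read noise, which a low-read-noise CMOS keeps small. All rates and noise figures below are assumed, not measured:

import math

total_time = 3600.0      # one hour of total integration, split two different ways
sky_rate = 0.8           # assumed sky electrons/pixel/s
target_rate = 0.2        # assumed target electrons/pixel/s
read_noise = 1.7         # assumed CMOS read noise in electrons

for sub_len in (60.0, 600.0):                    # many 60 s subs vs a few 600 s subs
    n_subs = total_time / sub_len
    signal = target_rate * total_time
    noise = math.sqrt((sky_rate + target_rate) * total_time + n_subs * read_noise ** 2)
    print(f"{int(n_subs):2d} x {int(sub_len)} s: SNR {signal / noise:.1f}")   # about 11.7 vs 12.0 here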

 

As far as the Ha, SII and OIII stacks go--they all look different. Ha is typically the sharpest and the best for making stand-alone mono images. It is also the stack you use as a luminance for HaSHO images. It is where most of the detail comes from, and you can process it alone and reveal very sharp, contrasting structures. Not so much OIII and SII. None of the stacks should be blurry, though. Stars should be roughly the same--you'll get larger ones in OIII, but they will still look like decent stars. The two most critical things in taking good subs are guiding (for subs longer than 30 sec, or with a shaky mount unguided) and focus. PA is important, but as long as you are close you should be OK for 2-10 min exposures (by close I mean within a couple of arcmin). I use a right-angle PA scope to get PA visually; I think 1 arcmin is its theoretical best. No need to spend hours or a lot of money pinpointing PA. Collimation is critical for your SCT--it is probably as critical as guiding and focus, maybe more so. I guess collimation and focus are the two most important--if they are not decent, you're doomed.

I usually use all my subs too--but I inspect them as they come off the camera, or I use SFS throughout the night to see how my focus is holding. I will throw out subs if the FWHM values are out of line, or if the SBR plummets--but I will inspect these first. If the subs are dim and featureless due to a passing cloud, they are gone. Sometimes SFS reports low SNR and the subs look the same as the others--I keep those. I have used SFS to generate sub lists based on various parameters--one list will be low FWHM and eccentricity, another high SNR, or low noise. I then process the data sets to see if I can see a difference. I rarely can. I will definitely throw out subs where the stars look like boxes or where there were guiding glitches (doesn't happen very often).
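The sub-list idea can be mocked up by hand like this--hypothetical filenames, measurements and cut-offs, just to show the kind of filtering SubframeSelector automates:

# Hypothetical per-sub measurements: (filename, FWHM in arcsec, eccentricity)
subs = [
    ("ha_001.fit", 2.1, 0.42),
    ("ha_002.fit", 3.6, 0.55),   # bloated FWHM: focus drift or thin cloud
    ("ha_003.fit", 2.3, 0.61),   # elongated stars: possible guiding glitch
    ("ha_004.fit", 2.2, 0.40),
]

fwhm_limit, ecc_limit = 3.0, 0.58   # assumed cut-offs; inspect borderline subs before discarding
keep = [name for name, fwhm, ecc in subs if fwhm <= fwhm_limit and ecc <= ecc_limit]
print(keep)   # ['ha_001.fit', 'ha_004.fit']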

Another important thing is dithering. Definitely worth doing. The only problem is that with a lot of short subs, dither delays of 10-20 sec after each sub add considerable time to your imaging run compared to when you take long subs. Between dither delays, focusing time, framing after meridian flips and fighting off the gremlins, an 8-hour night can turn into a 4-hour night very easily. But--don't compromise; if dithering improves an image, do it and eat the time. The way I look at it, spending the extra time getting it right will yield a better image, and if I conceded to my impatience I would be forever unsatisfied. It's just not worth it. The other reason is I need all the help I can get!
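The dithering overhead is simple arithmetic--a fixed settle delay after every sub costs proportionally more when the subs are short. Assumed numbers only:

night = 8 * 3600.0      # an 8-hour window, in seconds
settle = 15.0           # assumed dither/settle delay after each sub

for sub_len in (60.0, 1200.0):                   # 1 min CMOS subs vs 20 min CCD subs
    frames = int(night // (sub_len + settle))
    exposed_h = frames * sub_len / 3600.0
    print(f"{int(sub_len):4d} s subs: {frames} frames, {exposed_h:.1f} h of actual exposure")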

Rodd

 


20 hours ago, Rodd said:

I think I have managed to render a fairly realistic image--that is, without a lot of artifacts.

Excellent reprocess Rodd. There's a lot of very fine detail in the nebula, which you managed to pull out, with a lovely blend of bright nebula and dark wisps in front of it. I think the investment in time paid off. Just one slight remark: the smallest stars seem to have "panda eyes", a sign of deringing during deconvolution maybe?


3 minutes ago, wimvb said:

a sign of deringing during deconvolution maybe?

No--I didn't use it--but it is from not adequately protecting the stars during sharpening. That's because the tiniest stars were not visible until sharpened, so it was hard to protect them. I eventually cloned the image, super-stretched it, and then made a star mask that included the faintest stars not yet visible in the main image. I fixed most of them--but not all the stars are right--the bright one near the bottom edge of the nebula has a god-awful blue ring. I think this is an improvement, though. Also less noise--which I noticed was necessary after posting the one above.

[image: Image25e.jpg]
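A rough sketch of that star-mask trick--clone the image, stretch the clone hard so even the faintest stars show, then threshold the clone into a protection mask. This uses a generic midtone-transfer stretch and made-up threshold values rather than the actual PixInsight StarMask settings:

import numpy as np
from scipy.ndimage import gaussian_filter

def mtf_stretch(img, midtone=0.02):
    # Midtone transfer function; a small midtone gives an aggressive "super stretch".
    return (midtone - 1.0) * img / ((2.0 * midtone - 1.0) * img - midtone)

rng = np.random.default_rng(1)
image = rng.random((256, 256)).astype(np.float32) * 0.05   # placeholder for the faint, unstretched data

clone = mtf_stretch(image)                      # stretch the clone hard, not the working image
star_mask = (clone > 0.6).astype(np.float32)    # assumed threshold; real masks need growing/smoothing
star_mask = gaussian_filter(star_mask, 1.0)     # soften the mask edges
protect = 1.0 - star_mask                       # 1 where sharpening is allowed, 0 over the stars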

 

