
Sh2-112 Ha


Rodd

Recommended Posts

Posted (edited)

This has been a strange project. Guiding has been decent (.43-.55 total RMS), focus was maintained, and seeing was forecast as decent. BUT, my FWHM values struggled to fall below 3.0. I collected 117 300 sec subs and only used 47, and still the FWHM is 2.91. Amidst all this, the subframe selector tool was acting up, reporting different values of FWHM at different times. I was beginning to lose hope, but I managed to process the image to a point of relative acceptability (for a mono image). I am looking forward to creating a HaSHO image, which I have not created in a long time--over a year. Even 117 300 sec subs binned 3x3 had a noisy background, so collecting more doesn't seem the way to go (though I would love to bring the FWHM down).

C11 EdgeHD with 0.7x reducer and ASI 1600

21B2CC0B-6210-488D-8F41-0C46B282845F.thumb.jpeg.7e43856cbcfe3054e49a8740909059b7.jpeg

Edited by Rodd
Listed scope

I wouldn't care too much about the (fwhm) numbers. Nebulae don't need the same level of detail as galaxies. Just gather data while the gathering is good, and fix the stars in post processing.

CS


39 minutes ago, wimvb said:

I wouldn't care too much about the (fwhm) numbers. Nebulae don't need the same level of detail as galaxies. Just gather data while the gathering is good, and fix the stars in post processing.

CS

I absolutely agree. 

In these circumstances I'd look into de-starring the nebula for one process and then process a star layer differently. This frees you from the need for small stars at capture. You can re-insert them as large or as small as you like, and with more data on the nebula you can process for more large-scale contrast and local contrast without getting into the noise.

Olly


1 hour ago, wimvb said:

I wouldn't care too much about the (fwhm) numbers. Nebulae don't need the same level of detail as galaxies. Just gather data while the gathering is good, and fix the stars in post processing.

CS

Normally I agree, but tonight between clouds I am pulling down much better data--FWHM of 1.9-2.0. Man, does it make a difference. At least for mono. But maybe you are right for the final combination, when the image has much more depth.


38 minutes ago, ollypenrice said:

I absolutely agree. 

In these circumstances I'd look into de-starring the nebula for one process and then process a star layer differently. This frees you from the need for small stars at capture. You can re-insert them as large or as small as you like, and with more data on the nebula you can process for more large-scale contrast and local contrast without getting into the noise.

Olly

Thanks Olly. Still getting comfortable with removing stars. It doesn't seem to work as well for mono images. Maybe I need to upgrade the tool.


Looks very promising Rodd but as you say better seeing makes quite a difference at that focal length. Nights with average seeing I only do short FL imaging, and save the long FL for those rare nights with great seeing. But then I have the luxury of having both systems permanently set up. I can see that you hesitate to change scopes from night to night.


What you do need to take into account is if you use different filters at nights with different seeing. Eg, if you were to collect, say, Red at a night with poor fwhm, and Blue at a night with good fwhm, you could end up with red star bloat. That's one reason for me to collect data from all (RGB) filters during a night. I know you have (had) problems with your filter wheel, so that may not be an option for you now.


5 hours ago, gorann said:

I can see that you hesitate to change scopes from night to night.

Indeed--otherwise I would lose a large portion of what little clear sky I get fiddling with such things. I need two mounts (and another camera... and computer... etc., etc.), then I will do as you suggest.


7 minutes ago, wimvb said:

What you do need to take into account is if you use different filters at nights with different seeing. Eg, if you were to collect, say, Red at a night with poor fwhm, and Blue at a night with good fwhm, you could end up with red star bloat. That's one reason for me to collect data from all (RGB) filters during a night. I know you have (had) problems with your filter wheel, so that may not be an option for you now.

That is the preferred way, I know. But broadband is often out of the question for me, especially for nebulae--certainly during the time this dataset was collected. Remember, I typically spend 3 nights per channel, and over those three nights, which may very well be in different weeks or months, seeing will vary. This is an Ha dataset--so it represents the lum of an HaSHO image. There is really no way to feel good about poor seeing, I guess. Last night was like a different world: .4 total RMS and FWHM values around 2. Too bad clouds ate up 1/2 the night!


16 hours ago, gorann said:

Looks very promising Rodd but as you say better seeing makes quite a difference at that focal length. Nights with average seeing I only do short FL imaging, and save the long FL for those rare nights with great seeing. But then I have the luxury of having both systems permanently set up. I can see that you hesitate to change scopes from night to night.

 

17 hours ago, ollypenrice said:

I absolutely agree. 

In these circumstances I'd look into de-starring the nebula for one process and then process a star layer differently. This frees you from the need for small stars at capture. You can re-insert them as large or as small as you like and with more data on the nebula you can process for more larger-scale contrast and local contrast without getting into the noise.

Olly

 

10 hours ago, wimvb said:

What you do need to take into account is if you use different filters at nights with different seeing. Eg, if you were to collect, say, Red at a night with poor fwhm, and Blue at a night with good fwhm, you could end up with red star bloat. That's one reason for me to collect data from all (RGB) filters during a night. I know you have (had) problems with your filter wheel, so that may not be an option for you now.

I took all your advice--this is a 5 hour stack of Ha--should be ample, but the background is just plain chunky. The rest I like--I got the FWHM down to 2.6 by adding very decent subs, and starless processing has made stars even smaller. But the background, ah, the background! I don't think adding data is the answer. I have 171 300 sec subs in all. That is over 14 hours and the background looks the same. Instead of being a smooth mascarpone, I have cottage cheese. Instead of a translucent fog, I have pollen clumps.

 

h60alt2.thumb.jpg.b295a3105697c2b9a13c075905894c08.jpg

 


6 hours ago, Rodd said:

 

 

I took all your advice--this is a 5 hour stack of Ha--should be ample, but the background is just plain chunky. The rest I like--I got the FWHM down to 2.6 by adding very decent subs, and starless processing has made stars even smaller. But the background, ah, the background! I don't think adding data is the answer. I have 171 300 sec subs in all. That is over 14 hours and the background looks the same. Instead of being a smooth mascarpone, I have cottage cheese. Instead of a translucent fog, I have pollen clumps.

 

h60alt2.thumb.jpg.b295a3105697c2b9a13c075905894c08.jpg

 

Well, as a Photoshop user I'd try this:  get the old and new backgrounds to the same brightness, if they are not already. Paste the new on top of the old with the new active. Use the colour select tool to pick out everything you regard as background and erase it. What this would do is give you your old background back everywhere except for just a small circular region around the now-smaller stars. If it worked (and I don't know if it would or not) you'd get a large part of the best of both worlds.

Olly


Posted (edited)
8 hours ago, ollypenrice said:

Well, as a Photoshop user I'd try this:

I will think about how to equate this to PI. Meanwhile, even the original background is poor. Maybe it's not the background I am referring to. This region may not have any true background--there is faint Ha emission everywhere. It's those faint, surrounding emissions that look rough--no matter how much data I use. So I am using the term "background" from a photographic framing perspective as opposed to a pixel value or spatial structure standpoint. I have used 14 hours so far. Quite frankly, the difference between 5 hours and 14 hours is barely perceptible from a signal perspective and totally imperceptible from a smoothness perspective. The surrounding emissions (what I refer to as background) look terrible. I have to solve this or the dataset is going into the can. The only way this image improves with added data is if the added data has a lower FWHM value than the FWHM value of the stack--then it improves (incrementally). Remember, FWHM is not only a star thing. Yes, it is a method of star measurement, but if the stars get tighter, the rest of the data gets tighter too. But there is only so far I can take this. A stack with 2.5 FWHM is on the good to very good side for me. 2.1-2.2 is about the best stack I can achieve (I did get a 1.8 once, but that was unusual). So unless I figure out why the outer regions of this image look so grainy, it will represent a huge waste of time.

Edited by Rodd
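For reference, the FWHM figures traded back and forth in this thread relate to the Gaussian width of the star profile: for a Gaussian PSF, FWHM = 2*sqrt(2 ln 2)*sigma, or about 2.355*sigma. A minimal sketch of that conversion (assuming a Gaussian profile; the 2.5 value is the stack FWHM mentioned above):

```python
import math

# For a Gaussian PSF: FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.355 * sigma
def fwhm_from_sigma(sigma: float) -> float:
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

def sigma_from_fwhm(fwhm: float) -> float:
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# A stack with FWHM 2.5 corresponds to a Gaussian sigma of about 1.06
sigma = sigma_from_fwhm(2.5)
```

Since the whole image is convolved with (roughly) the same PSF, a lower stack FWHM tightens nebular detail as well as stars, which is the point being made above.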

12 minutes ago, Rodd said:

This region may not have any true background--there is faint Ha emissions everywhere.

That’s most likely the cause of the uneven background.

https://www.astrobin.com/221479/?q=Sh2-112
 

Only you can decide what to keep and enhance, and what to let go. While more data will always allow you to incorporate more of the background, that is not always a practical option. Especially if you are fighting light pollution.


17 minutes ago, Rodd said:

The only way this image improves with added data is if the added data has a lower FWHM value than the FWHM value of the stack--then it improves (incrementally).

Not necessarily. You can create two stacks, one with all the data and one with only low fwhm data. Combine the nebulosity and background from the large stack with stars from the low fwhm stack. In pixelmath:

new_image = largestack_starless + smallstack - smallstack_starless

You probably won’t need a mask here.
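Wim's PixelMath expression can be sketched in NumPy terms as a hedged illustration (this is not PixInsight itself; the arrays below are hypothetical stand-ins for the three stacks, normalized to [0, 1]):

```python
import numpy as np

# Hypothetical stand-ins for the three stacks:
# largestack_starless: deep stack, stars removed (smooth nebulosity)
# smallstack:          low-FWHM stack, stars included
# smallstack_starless: low-FWHM stack, stars removed
rng = np.random.default_rng(0)
largestack_starless = rng.uniform(0.0, 0.5, (4, 4))
smallstack = rng.uniform(0.0, 1.0, (4, 4))
smallstack_starless = smallstack * 0.5  # toy relation for the demo

# smallstack - smallstack_starless isolates the tight stars;
# adding them to the deep starless stack keeps the best of both.
new_image = np.clip(largestack_starless + smallstack - smallstack_starless,
                    0.0, 1.0)
```

The clip simply keeps values in range, similar to leaving PixelMath's rescale option off; PixelMath evaluates the same expression per pixel.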


The way you stretch may also play a part. I used to use a custom shaped curve of diminishing aggressivity and bring in the black point after each stretch. I don't do that any more. I stretch until I get the background to a certain brightness (23 in Photoshop speak) then I pin and fix the curve at 23 and below and only stretch above that. This means I don't keep stretching the faint stuff beyond its noise floor.

Olly
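Olly's pinned stretch amounts to a piecewise curve: identity at and below the pin level (23 in Photoshop's 0-255 scale, i.e. 23/255), stretched only above it. A rough sketch under that reading (the gamma-style lift above the pin is my own choice, not Olly's exact curve):

```python
import numpy as np

PIN = 23.0 / 255.0  # background level, pinned (PS "23" in 0-255 terms)

def pinned_stretch(x: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Leave values at or below PIN untouched; stretch only above it."""
    out = x.copy()
    above = x > PIN
    # Gamma-style lift applied only to the range above the pin,
    # so the faint background is never pushed past its noise floor.
    t = (x[above] - PIN) / (1.0 - PIN)
    out[above] = PIN + (1.0 - PIN) * t ** (1.0 / gain)
    return out

img = np.array([0.05, PIN, 0.3, 0.8])
stretched = pinned_stretch(img)
# Values at or below PIN come back unchanged; brighter values are lifted.
```

The curve is continuous at the pin, so there is no visible seam between the fixed background and the stretched signal.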


9 hours ago, ollypenrice said:

The way you stretch may also play a part. I used to use a custom shaped curve of diminishing aggressivity and bring in the black point after each stretch. I don't do that any more. I stretch until I get the background to a certain brightness (23 in Photoshop speak) then I pin and fix the curve at 23 and below and only stretch above that. This means I don't keep stretching the faint stuff beyond its noise floor.

Olly

I understand this. But I don't really know how to do it in PI. I stretched this data initially using masked stretch. I don't often like masked stretch for the final look; it can look pretty dim. Do I tweak with curves and histogram? It's funny, but no amount of noise control works on this image. Even the smallest amount looks like too much. Noise control is the only way I know of pushing the left side of the histogram a bit to the right--off the left hand edge. Masked stretch always has the left side of the curve right at the left side of the histogram. Using noise control produces a gap. But with this image it doesn't help. Still looks chunky. Not really sure why this is happening. Could it be bad flats?


Posted (edited)
12 hours ago, wimvb said:

Not necessarily. You can create two stacks, one with all the data and one with only low fwhm data. Combine the nebulosity and background from the large stack with stars from the low fwhm stack. In pixelmath:

new_image = largestack_starless + smallstack - smallstack_starless

You probably won’t need a mask here.

That is a nice formula, thanks! I thought about this. I think I will do this for the Ha I use as lum. I will use all subs for the green channel, but only the subs with FWHM < 2.5 for the Ha I insert as lum.

Edited by Rodd

4 hours ago, Rodd said:

I understand this. But I don't really know how to do it in PI. I stretched this data initially using masked stretch. I don't often like masked stretch for the final look; it can look pretty dim. Do I tweak with curves and histogram? It's funny, but no amount of noise control works on this image. Even the smallest amount looks like too much. Noise control is the only way I know of pushing the left side of the histogram a bit to the right--off the left hand edge. Masked stretch always has the left side of the curve right at the left side of the histogram. Using noise control produces a gap. But with this image it doesn't help. Still looks chunky. Not really sure why this is happening. Could it be bad flats?

First off: "levels" in PS is "histogram transformation (HT)" in PI. "Curves" in PS is "curves transformation (CT)" in PI.

This is how I usually stretch (and I don't use/need noise reduction).

1. Using HT, I manually bring the black point marker in to just below the point of clipping even a single pixel. I set the middle marker to 0.25. I then apply this gentle stretch to the linear image, and repeat until the histogram peak is at about 0.1 (which is equivalent to 25-26 in PS speak, because PS uses a range from 0-255; 0.1 x 255 = 25.5).

2. I then switch to CT. I put a marker on the straight curve underneath where the peak of the histogram is (should be about 0.1), and pull it down to about 0.08 (which is about 21 in PS). I put a second marker on the curve just to the right of the histogram, and pull this up. All the time I have the preview activated. The two markers will give me a basic S-curve. I then add a marker to the right to decrease the brightest areas and avoid star bloat. Usually the curve ends up straight in the upper part, from about 0.7 to 1.0 on the X-scale. A point to note here: the S-curve starts to rise after the first marker. I make sure that this rise isn't too steep where the histogram is still high (this part of the image is noise dominated). I may add a marker to the left of the "first" one and lift this slightly to make even the lower part of the S-curve straighter.

3. If during this procedure, bright areas start to lose local contrast, I will apply HDRMT with a lightness mask. But, and this is important, the lightness mask will have its white point dialled down to about 50% (linear curve with the white point brought down to 0.5). HDRMT is applied at about 50% strength. I adjust the number of layers (scale?) to get the detail I want.

This is how I stretch luminance and mono images. For colour (rgb) images I start with either arcsinh transformation or masked stretch, because these retain colour in the stars. But I always follow up with curves transformation. And in both arcsinh or masked stretch, I never keep the default black point (clipping point). I either reset it (0) or take it down (to 1/10 of the recommended value in MS, and 0.5 of the recommended value in arcsinh). Doing this keeps the left hand side of the histogram well away from the left hand border.

https://www.astrobin.com/7n0qcu/B/
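Step 1 above (repeated gentle histogram stretches with the midtones marker at 0.25) uses PixInsight's midtones transfer function. A hedged NumPy sketch of just that iteration (the black-point step is omitted for brevity; the stopping rule and synthetic data are my assumptions):

```python
import numpy as np

def mtf(x: np.ndarray, m: float) -> np.ndarray:
    """PixInsight's midtones transfer function: maps the value m to 0.5."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def iterative_stretch(img: np.ndarray, midtones: float = 0.25,
                      target_peak: float = 0.1, max_iters: int = 10) -> np.ndarray:
    """Repeat a gentle MTF stretch until the histogram peak reaches
    about 0.1 (roughly 25-26 in Photoshop's 0-255 scale)."""
    out = img.copy()
    for _ in range(max_iters):
        hist, edges = np.histogram(out, bins=256, range=(0.0, 1.0))
        peak = edges[int(np.argmax(hist))]
        if peak >= target_peak:
            break
        out = mtf(out, midtones)
    return out

# Synthetic linear "image": faint sky signal around 0.01
rng = np.random.default_rng(1)
linear = np.clip(rng.normal(0.01, 0.002, 10000), 0.0, 1.0)
stretched = iterative_stretch(linear)
```

Each pass with midtones 0.25 is a mild lift, so the background climbs toward the target in a few controlled steps instead of one aggressive stretch.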


Posted (edited)
On 11/06/2022 at 01:53, ollypenrice said:

Well, as a Photoshop user I'd try this:  get the old and new backgrounds to the same brightness, if they are not already. Paste the new on top of the old with the new active. Use the colour select tool to pick out everything you regard as background and erase it. What this would do is give you your old background back everywhere except for just a small circular region around the now-smaller stars. If it worked (and I don't know if it would or not) you'd get a large part of the best of both worlds.

Olly

 

On 11/06/2022 at 10:32, wimvb said:

That’s most likely the cause of the uneven background.

https://www.astrobin.com/221479/?q=Sh2-112
 

Only you can decide what to keep and enhance, and what to let go. While more data will always allow you to incorporate more of the background, that is not always a practical option. Especially if you are fighting light pollution.

Olly/Wim--Improved? I am not sure this is what you mean when you refer to pinning the background. But using the curves tool, I fix the mid points and high points and raise the base of the line (dark points). After several iterations, the speckles are gone but some dynamic range is lost in the brighter regions. I then use curves and histogram to try and recapture the dynamic range in the brighter regions. The dim emissions in the surrounding region are therefore smoother. It looks better to me. With more work maybe it can be improved further.

On another note--I just saw my profile image--the one for my name--M51 zoomed in. Suddenly, this image on my screen is hyper saturated. It was not that way before. Do you see this change? Has it always looked like this to you? I would never knowingly use an image that looked like this (as it does to me). It didn't look like this before! Is it just my screen? Other images look OK. Very strange.

EDIT--I just turned HDR on and the image looks much less saturated. How does it look to you? If it looks ultra saturated, maybe I'd better turn HDR off. Do all my images look hyper saturated to others? My god.

D1792BA8-E808-42BB-907B-054A18C9590E.thumb.jpeg.bc46a94d2500a8d24ec3a9d77deab12d.jpeg

 

Edited by Rodd

8 hours ago, wimvb said:

First off: "levels" in PS is "histogram transformation (HT)" in PI. "Curves" in PS is "curves transformation (CT)" in PI.

This is how I usually stretch (and I don't use/need noise reduction).

1. Using HT, I manually bring the black point marker in to just below the point of clipping even a single pixel. I set the middle marker to 0.25. I then apply this gentle stretch to the linear image, and repeat until the histogram peak is at about 0.1 (which is equivalent to 25-26 in PS speak, because PS uses a range from 0-255; 0.1 x 255 = 25.5).

2. I then switch to CT. I put a marker on the straight curve underneath where the peak of the histogram is (should be about 0.1), and pull it down to about 0.08 (which is about 21 in PS). I put a second marker on the curve just to the right of the histogram, and pull this up. All the time I have the preview activated. The two markers will give me a basic S-curve. I then add a marker to the right to decrease the brightest areas and avoid star bloat. Usually the curve ends up straight in the upper part, from about 0.7 to 1.0 on the X-scale. A point to note here: the S-curve starts to rise after the first marker. I make sure that this rise isn't too steep where the histogram is still high (this part of the image is noise dominated). I may add a marker to the left of the "first" one and lift this slightly to make even the lower part of the S-curve straighter.

3. If during this procedure, bright areas start to lose local contrast, I will apply HDRMT with a lightness mask. But, and this is important, the lightness mask will have its white point dialled down to about 50% (linear curve with the white point brought down to 0.5). HDRMT is applied at about 50% strength. I adjust the number of layers (scale?) to get the detail I want.

This is how I stretch luminance and mono images. For colour (rgb) images I start with either arcsinh transformation or masked stretch, because these retain colour in the stars. But I always follow up with curves transformation. And in both arcsinh or masked stretch, I never keep the default black point (clipping point). I either reset it (0) or take it down (to 1/10 of the recommended value in MS, and 0.5 of the recommended value in arcsinh). Doing this keeps the left hand side of the histogram well away from the left hand border.

https://www.astrobin.com/7n0qcu/B/

Just saw this. Thank you. It's very detailed and I will get into this. Fingers crossed, maybe it will change the way I process entirely.


Posted (edited)
14 hours ago, wimvb said:

Using HT, I manually bring the black point marker in to just below the point of clipping even a single pixel. I set the middle marker to 0.25. I then apply this gentle stretch to the linear image, and repeat until the histogram peak is at about 0.1 (which is equivalent to 25-26 in PS speak, because PS uses a range from 0-255; 0.1 x 255 = 25.5).

2. I then switch to CT. I put a marker on the straight curve underneath where the peak of the histogram is (should be about 0.1), and pull it down to about 0.08 (which is about 21 in PS). I put a second marker on the curve just to the right of the histogram, and pull this up. All the time I have the preview activated. The two markers will give me a basic S-curve. I then add a marker to the right to decrease the brightest areas and avoid star bloat. Usually the curve ends up straight in the upper part, from about 0.7 to 1.0 on the X-scale. A point to note here: the S-curve starts to rise after the first marker. I make sure that this rise isn't too steep where the histogram is still high (this part of the image is noise dominated). I may add a marker to the left of the "first" one and lift this slightly to make even the lower part of the S-curve straighter.

OK--now I am beginning to understand. The numbers you give during CT are on the x axis--right? I still do not see how to set the black point in HT first--I have to slide the mid point slider to the left to bring the histogram off the left hand margin. Once I do that, I understand setting the black point. Before that, though, I don't see a way of doing it. Anyway--here is a stack of 157 300 sec subs. I think the FWHM is 2.66 (I included 120 subs for signal but sacrificed .12" in FWHM... an equitable trade I think). I refrained from noise control. If one doesn't pixel peep too hard it's not bad. We'll see when I add the OIII and maybe the SII.

Final mono image--could be a bit over bright--it's hard to say. I do know I am tired of this little nebula.

h157wimway2a.thumb.jpg.4b834f92ba1e9a94ca1108c560607573.jpg
 

 

Edited by Rodd

59 minutes ago, Rodd said:

The numbers you give during CT are on the x axis-right?

Correct

59 minutes ago, Rodd said:

  I still do not see how to set the black point in HT first

Usually (in my images at least), the darkest pixel doesn't have a value equal to zero. Mind you, that is after cropping any stacking edges and after DBE (where I always check the "normalize" tick box in the corrections section). So, I bring the darkest pixel in my image close to 0. In HT there is an icon for this, on the row labeled "shadows". It's the first icon of three, next to the box that shows the clipping percentage.

image.png.48ebadbf184153e4040d4807ef1959ed.png

(Image from pixinsight docs)

I then set the midtones marker to 0.25 and apply the stretch. Now the darkest pixel will be (close to) 0. I set the black point (shadows) back to 0 and apply the stretch again. I repeat this several times.

In the image above, you can see that there is a narrow gap between the left hand side of the lower histogram window, with the black point/shadows marker, and the point where the histogram starts to rise. It is this gap that is closed by the black point marker being pushed to the right. This gap increases if you stretch with the dark point marker at 0, as shown in the image (compare top and bottom histograms). If you leave this gap, you essentially decrease the dynamic range of the image, because the pixel values to the left of the foot of the histogram aren't being used.

Btw, don't use the automatic black point adjustment icon when you stretch a colour image, because it will mess up background neutralization. The tool calculates the lowest pixel value for each colour channel and sets a black point for each channel individually.
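The dynamic-range point can be shown numerically: moving the shadows marker to the foot of the histogram and rescaling spreads the data over the full [0, 1] range. A minimal sketch (the 0.04 offset is an invented example value, not from the thread):

```python
import numpy as np

def set_black_point(img: np.ndarray, bp: float) -> np.ndarray:
    """Move the shadows marker to bp and rescale, as HT's shadows
    slider does. Values below bp are clipped to 0."""
    return np.clip((img - bp) / (1.0 - bp), 0.0, 1.0)

# Invented example: the histogram foot starts at 0.04,
# leaving an unused gap of pixel values below it.
img = np.array([0.04, 0.10, 0.50, 0.90])
rescaled = set_black_point(img, 0.04)
# The gap is closed: the darkest pixel now sits at 0, and the rest
# of the data occupies more of the available range.
```

This is the single-channel case; as noted above, applying it per channel on a colour image would shift each channel's black point independently and upset the background neutralization.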


3 hours ago, wimvb said:

Correct

Usually (in my images at least), the darkest pixel doesn't have a value equal to zero. Mind you, that is after cropping any stacking edges and after DBE (where I always check the "normalize" tick box in the corrections section). So, I bring the darkest pixel in my image close to 0. In HT there is an icon for this, on the row labeled "shadows". It's the first icon of three, next to the box that shows the clipping percentage.

image.png.48ebadbf184153e4040d4807ef1959ed.png

(Image from pixinsight docs)

I then set the midtones marker to 0.25 and apply the stretch. Now the darkest pixel will be (close to) 0. I set the black point (shadows) back to 0 and apply the stretch again. I repeat this several times.

In the image above, you can see that there is a narrow gap between the left hand side of the lower histogram window, with the black point/shadows marker, and the point where the histogram starts to rise. It is this gap that is closed by the black point marker being pushed to the right. This gap increases if you stretch with the dark point marker at 0, as shown in the image (compare top and bottom histograms). If you leave this gap, you essentially decrease the dynamic range of the image, because the pixel values to the left of the foot of the histogram aren't being used.

Btw, don't use the automatic black point adjustment icon when you stretch a colour image, because it will mess up background neutralization. The tool calculates the lowest pixel value for each colour channel and sets a black point for each channel individually.

Wow. Thanks. I am not sure I have the MTF line; I have never seen it. But thank you for the labeled HT tool. I will look this over when I process an image. I always suspected I was missing something. Maybe I was right!

