
IC-5146 LRGB and HaLRGB


Rodd

11 hours ago, Rodd said:

...

1) Final image in the style of the above--I can't do better.  Lord knows I have tried.  (except maybe the one above?)  I think there are subtle details in the flower not visible above, though.

 

5983b28068e66_Blend3imagesAligned.thumb.jpg.5b898e5c4538f79708f6b62707929e39.jpg

Great image, Rodd! You've managed to bring out detail in the central neb, as well as variation in the faint dust, plus star colour.

Well worth the struggle, imo.

As for the background values discussed earlier: @ollypenrice's 23/23/23 is on a full scale of 255/255/255, which on PixInsight's normalized scale becomes .09/.09/.09 (or about .1). You probably already knew that. But your red value of .06 was only about two-thirds of the green and blue.

To avoid having to redo processing in PI, you can save your WIP as a PI project. This saves the complete processing history, so redoing a step mid-process becomes very easy. But it does devour hard disk space; projects are usually in the GB range.
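A quick sanity check of that 8-bit-to-normalized conversion (the helper below is a hypothetical illustration, not a PixInsight function):

```python
def to_pi_scale(value_8bit):
    """Convert an 8-bit channel value (0-255) to PixInsight's
    normalized [0, 1] scale. Hypothetical helper for illustration."""
    return value_8bit / 255.0

bg = to_pi_scale(23)        # Olly's suggested 23/23/23 background
print(round(bg, 3))         # 0.09
print(round(0.06 / bg, 2))  # a red value of .06 relative to that background
```

So a red background of .06 against green/blue at .09 leaves red at roughly two-thirds of the other channels, which reads as a colour cast rather than a neutral background.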


5 hours ago, swag72 said:

Have you done any measuring of the data sets? For example in PI you can measure noise, FWHM, eccentricity (and more) of individual subs or stacks of data. I have NO doubt that there is a diminishing return somewhere and beyond which there is little point in gathering more data..... but where is it and what determines that?

Let me give you an example.... I am currently working on the Iris nebula. Stupidly I had the wrong sub length in SGP and didn't notice, so instead of 1200s or 1800s I am on 600s. No problem, I thought, so I decided to get loads of subs! As I was ploughing through the luminance data night after night, I decided to look at the noise (which is always a fair indicator of a stack, I feel) in the PI SubframeSelector. I looked at the difference between 30, 60, 100 and 123 subs..... and the noise continues to drop.

I read somewhere that the point of diminishing returns is reached at 75 subs...... Well I found a big difference between 60 and 100 subs and even a marked difference between 100 and 123. So will I continue? I think I may go up to 150...... 

My point is that *I believe* the sheer number of exposures plays a huge part in this (as does their length)--so perhaps if you try a similar experiment to find where you feel your data improvement drops off, you will have your own idea about when enough data is enough. I am sure this is based on a whole host of individual conditions, such as equipment, skies, etc., that are specific to each and every one of us. Spend a few sessions getting a large number of exposures, stack them at different points and see if you can see a difference in quality... when you can't, you've reached your limit :) Then you will know that if you go up to that point, any shortfalls are *unlikely* to be the data.

Thanks Sara--I do use SFS to measure all my subs--it is one way I check whether it's time to refocus (FWHM).  I measure eccentricity too, but I don't do anything to affect it if it's high (above .6 or so).  I find different scopes tend to have different eccentricity values in general.  I thought it was related to polar alignment, but it can change from sub to sub.  With the Televue the eccentricity goes up as the FWHM goes down (for me), but it's the opposite with the TOA.  I also measure the median, SNR weight and noise.  But--like you said in your article "Garbage in Garbage out, You Decide"--I see little difference in a stack between choosing the best subs and just throwing all of them in (within reason).  I start to panic if my FWHM shoots up from 2.5 to 3.8 (a big change, and I refocus).  But when I look at the two subs, I see little difference until I zoom in--and then it's noticeable, but only barely.  Even between 1.9 and 4--if you have to zoom in to 5:1 in order to see it, how important is it?

I have had a couple of experiences with using more or less data--but never 100 subs per stack, or 150.  I would be out there all year.  Lately I can't get more than a night or two a month, it seems (the reason I am thinking of getting an FSQ 106 and the .6x reducer for F3--maybe I could finish an image in one night....thoughts?  You use a 106, right?  Ever use the .6x reducer?).  It seems crazy, but I had 7 30-min Ha subs and was expecting big gains after collecting 11 more, but I was disappointed.  I could not really see the difference.  Maybe because it was a bright Ha target (Pelican and NGC 7000) and I was shooting 30 min.  Also, quality differences in the data could be to blame.  Atmospheric conditions (and the Moon's phase) could be the cause.  Or maybe because there is not that much difference between 7 and 18.  Don't know.  Maybe narrowband is a bit different than broadband--I know it's a lot easier for me to process.  And NB (especially Ha) always looks better.  Broadband is tough in my skies.  The target has to be very high in the sky, otherwise it's a lost cause.

Wow--123 subs per stack.  My Cocoon had 79 for the whole image! 18.5 hours.  Seems sufficient, but on paper only :icon_biggrin:.  I agree that more data is better.  But I think it has to be A LOT more--like double--in order to really be noticeable to the eye.  Then there's Barry--who made a wonderful image with 3-4 subs per channel!!  I think it was NGC 7822 widefield narrowband (maybe that's more evidence for narrowband not needing as much data?)
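The "has to be a lot more--like double" intuition lines up with the textbook shot-noise model, where background noise in a mean stack falls as 1/√N. A minimal sketch (an idealized model assuming uncorrelated, shot-noise-dominated subs, not a measurement of anyone's actual data):

```python
import math

def relative_noise(n_subs):
    """Background noise of a mean stack relative to a single sub,
    under the idealized assumption of uncorrelated subs."""
    return 1.0 / math.sqrt(n_subs)

# Sara's sub counts, plus a doubling for comparison
for n in (30, 60, 100, 123, 246):
    print(n, round(relative_noise(n), 3))

# Doubling the sub count only improves noise by a factor of
# sqrt(2) (~29%), which is why the gain has to be large before
# the difference is visible on screen.
```

Under this model the step from 60 to 100 subs gives a noticeably bigger improvement than 100 to 123, matching Sara's observation that the noise keeps dropping but the returns diminish.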

One thing this image has taught me--20-min lums are good (I had never done them--always 10).  It's hard for me to believe, but 10-min RGB subs seem to be sufficient even at F7.7.  I did not bump up the saturation in this image, and it is oversaturated really (I think).  When I say I did not increase saturation, I mean by specifically doing that.  Some actions tend to boost it--like when I use curves to modify contrast, it goes up or down--but I did not "add" color.

I took a closer look at Fabian's image you provided the link to.  At first glance it seems worlds away--but most of that is due to the FOV, I think--it is striking.  I can definitely see where 11 more hours of data would help--many of the faint details in his image are in mine too, but not as well defined.  Believe it or not, I had not seen that image until you posted it.  I find this hobby amazing--two people in different parts of the world collecting photons that have been traveling for thousands of years (millions in some cases), on a sensor the size of a postage stamp, and after traveling many trillions of miles through our turbulent galaxy the photons strike the pixels with amazing accuracy--stars of a couple of pixels being precisely in the same spot (and of the same color!).  Fine tendrils of gas look the same.  What really blows my mind is that if you put that same postage-stamp-sized sensor over every square inch of the Earth's surface, you would get the same fine details!  And everything is moving!  Amazing.

Rodd


1 hour ago, wimvb said:

...

Thanks Wim!  Yes, it was worth it.  Cookie jar is empty again though...tomorrow it's supposed to be clear! Bright Moon though--Ha it is.  Thanks for the info regarding PI's values--I did not know that.  It will definitely come in handy! One question regarding BackgroundNeutralization--should I pick a pixel, or a few, that are the absolute darkest based on the three color values?  I have been hunting for pixels like .008, .008, .008, but it is hard to find them equal--I usually choose one pixel whose values are as close to equal as I can find--they look black.  Or should I make a preview of a dozen darkish pixels from a dark area and not worry about their values?  Choosing one or two as black as possible seems to work, but sometimes it's hard to tell.

I know about saving as projects--I don't know why I don't do that.  Memory, I guess, is one reason--I do have a 2-terabyte external USB 3.0 flash drive for storage.  But opening and saving to/from it takes some time.  What I do is make my stacks and save them so that I can open them and start right in on integrating them into an LRGB or SHO image.  Or, better yet, I will have the RGB image saved right after color calibration, and the lum saved right after DBE.  I can open them and start right in.  I know it's not the proper way to do it.  I have not yet advanced far enough to where I process an image over many days.  It's usually a marathon effort during which I save dozens of variations.  If each one of my variations was a project, I would be in trouble:happy8:

One thing with PI--even the 1.8.5 that just came out--I get a memory read error when trying to place points in DBE.  It does not happen all the time, and my hard drive is half empty.  It forces me to exit PI and reboot the computer, and then I can use DBE--but not always.  It is very frustrating because ABE is inferior.  To combat this, I at times have to generate hundreds of points and then go through each one, keeping or discarding it until I get the points for my DBE.  That takes a long time.  So once I get it to work, I make an icon and leave the computer on.  I guess if I saved as a project that icon would be saved too--probably another reason I should do it that way.

Rodd


2 TB should be enough to hold a few years' worth of projects, considering the number of good nights we get.

I usually keep all my raw files and the master bias, dark, and flat, but not the calibrated or registered light frames. I also used to save the integrated image at various stages of processing (after DBE, denoise, deconvolution, etc.), but I don't do that anymore. Saving the project file is much better, since it saves the complete image history. To make the project save faster, I don't save the previews with the project. But I do save the masks, otherwise the history is 'corrupted'.

When I want to go different paths or redo a processing step on an almost completed image, I just go back in the history explorer to where I want to do something different. Then I clone the image and move the clone to another workspace. I then restore the original image to its final state.

Sometimes I even copy one image's process container (small triangle in process history panel) to another workspace, to use parts of it.

If DBE gives you memory errors, then that's down to your OS or computer. The hard drive isn't involved (I have a 1 TB drive with just 10% left and don't get these issues). Check how much memory (RAM) your computer has, and how much is allocated to swap files and buffers. There are recommended minimum requirements for PI (at least 4 GB RAM, but the "minimum reasonable amount" is 8 GB).

I generally don't use the automatic sample placement of DBE, but place samples manually (one in each corner, one along each side, some more towards the middle and extras in visible gradient areas; usually no more than 15 samples in total). Increase the size so that samples just about fit between stars. For my 14-Mpixel DSLR images, I use a sample size of 25-35 pixels wide. It's OK to have small stars in a sample. The black holes in a DBE sample are areas that are not used for background extraction; just make sure that there is no large colour halo around these black holes. Ideally, samples should be light gray to white, indicating that all colours are weighted equally (this applies to RGB, of course). Always check the background model, which should only show a gradual colour or intensity change, without any 'detail'. And after DBE, inspect the image with a strong STF stretch applied; there should be no dark shadows around nebulosity or galaxies. Nowadays I tick the 'normalize' check box, so that only gradients are removed and the background isn't darkened by DBE.
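The manual layout described above (corners, side midpoints, centre) can be sketched as code. This is a hypothetical helper for illustration only; in DBE itself the samples are placed by clicking, and the extra samples in gradient areas are still added by eye:

```python
def dbe_sample_grid(width, height, margin=50):
    """Sketch of a manual DBE sample layout: one sample in each corner,
    one at each side midpoint, and one in the centre of the image.
    Hypothetical helper, not a PixInsight API."""
    xs = (margin, width // 2, width - margin)
    ys = (margin, height // 2, height - margin)
    return [(x, y) for x in xs for y in ys]

samples = dbe_sample_grid(4000, 3000)
print(len(samples))  # 9 sample centres
```

Nine base samples plus a handful placed by hand in visible gradients lands comfortably under the "no more than 15" guideline.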

In background neutralisation, go for the LARGEST preview that shows clean background (difficult when you're going for the faint dust). If the preview is too small, clone it and move the clone to another area. Do this until the previews cover a decent amount of the background. Then use PreviewAggregator to make one image of all the background previews, and use this for background neutralisation. Note that you have to create a new aggregated image for colour calibration, though you can use the same previews. This is because the pixel values of the aggregated preview image are used to create a 'new' background in the original, but the aggregated preview image itself wasn't changed. If you don't make a new aggregated image, the wrong pixel values will be used for colour calibration.

DBE, background neutralisation, and colour calibration are all about image statistics. As always in statistics, the larger the sample size, the better the estimate becomes. That's why you shouldn't use single pixels for any of these processes. And that's also why the standard sample size of 5 pixels in DBE is WAY too small for RGB images (especially DSLR images with colour noise).
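The sample-size point can be made concrete with the standard error of the mean, which shrinks as 1/√n. The per-pixel noise figure below is purely illustrative, not a measurement:

```python
import math

# Illustrative per-pixel background noise (standard deviation), made up
# for the sake of the comparison.
noise_sigma = 0.02

def standard_error(n_pixels):
    """Uncertainty of a background estimate averaged over n
    uncorrelated pixels."""
    return noise_sigma / math.sqrt(n_pixels)

# 1 pixel vs a 5x5-pixel sample vs a 25x25-pixel sample
for n in (1, 25, 625):
    print(n, standard_error(n))
```

A single pixel carries the full per-pixel noise, while a 25x25-pixel sample cuts the uncertainty of the background estimate by a factor of 25, which is why a large preview beats a hand-picked "darkest pixel".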

One more point on colour calibration. As I understand it, the new version of PixInsight (1.8.5) can use individual stars as the colour reference. The algorithm plate-solves the image and checks the spectral characteristics of individual stars in a database. It can then correct the image colour so that selected stars have the right colour. Together with common sense, this can be a very powerful technique.
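As a rough illustration of the idea only (this is *not* PixInsight's actual PhotometricColorCalibration implementation, and the star fluxes below are invented), per-channel scale factors could be derived from a catalogue "white reference" star like this:

```python
# Scale R and B so a measured "white reference" star matches its
# catalogue colour, with G kept as the reference channel.
# Hypothetical numbers throughout.
measured = {"R": 0.82, "G": 0.74, "B": 0.61}   # invented star fluxes
catalogue = {"R": 1.00, "G": 1.00, "B": 1.00}  # idealized G2V "white" star

scale = {c: (catalogue[c] / measured[c]) / (catalogue["G"] / measured["G"])
         for c in "RGB"}
print({c: round(v, 3) for c, v in scale.items()})
```

Multiplying each channel by its scale factor makes the reference star neutral, and the rest of the image inherits the correction.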

Hope this helps


3 hours ago, wimvb said:

...

The only reason I use the generate button for points is to get a density that completely covers the image; then I remove all of them except the 20 or so I want--a backwards way to do it, I know, but the only way when the memory read error hits.  I have 16 GB of RAM, so it should not be a problem.  I am not a computer-savvy guy, so maybe it's RAM allocation, as you mention.  I have 1.8.5--downloaded it yesterday.  Not sure how to use the white reference str value.  We could always use structure detection, which was based on all stars in the field.  I was advised by Vicent Peris not to use this except where there are huge numbers of stars.  I rarely do.  When I zoom in to make a preview for BN, even in dark areas the pixels are all very colorful.  If I link the channels it's different than unlinked in STF.  That's why I go with numbers--the lowest I can find.  But the aggregate thing sounds interesting--I will try it.  Also, I will try making a preview in a dark area and using 20-30 pixels--hopefully the various colors balance out, which is what I think is the intention.

Thanks Wim--it's details like this that are hard to come by--the reference documentation is not complete--non-existent in my version, anyway, for most things I need it for.  I appreciate the help.  By the way, I have been looking over the various versions of my image, and I think the following one is better than the previous.  It reveals more blueish dust but keeps everything else the same.  I tend to lose track (you don't want to know how many versions I had!).  But I think I will reprocess this using the techniques you just explained to me.  Maybe my background is so dark because of that.

Final.thumb.jpg.be5fe9d25e882e559c3da8dab89275bd.jpg


3 hours ago, johnrt said:

Your images here have improved as you have stopped suppressing the colour in the background; this area of space is not jet black, but awash with dust and gases.

Thanks John--sometimes I cheat--I desaturate to hide gradients or other issues.  No longer!  RGB is definitely harder for me.  NB is like a cakewalk by comparison.

Rodd


Sorry!!! One more.  I had to post it, as I think it really elevates the image.  This is the last one, with the tiniest of stars dimmed just a bit.  I guess you'd call it star control--but they are not smaller or gone--just toned down, and the nebula stands out more.

Final-stars.thumb.jpg.4b5ee88e862e04896bf9ecc151a319c9.jpg


I really like this last image.  In a lot of the other ones all I could see on my screen was a green cast, and in this one you seem to have taken care of all that and kept the detail, and now the stars do not seem quite so bloated and burnt.  Great image!


5 hours ago, Rudeviewer said:

...

Thanks man!


14 hours ago, wimvb said:

...

Wim--wow--I just tried using a large preview of a dark region for BN and WHAT A DIFFERENCE!!  Remarkable.  Unfortunately I am too tired to process this image again--I got about halfway through and I'm done.  But I saw the difference immediately when I applied BN.  So many thanks.  I guess it's for future efforts.  Maybe in a few days I will have a go to see if it makes a difference in the final image.

Rodd


Well--one last post, to prove you are right when you say walk away and come back to it in a day or two.  Now it seems to me that all my previous attempts were way overdone--overprocessed.  Here is a much more realistic version that has not been overprocessed.  There is no Ha, but I think it's better anyway.  Thanks for reiterating a tried and true adage.

Rodd

LRGB-Minimal-b.thumb.jpg.9be20fa90e1e7c94c525211307d67ca3.jpg


Once the data is captured, it's always possible to re-process an image. And as we learn more, we can always go back to old data and pull out more or enhance new features we find. That's the beauty of AP.

Which version do YOU like best? (You may want to wait a day or two before answering that question.:icon_biggrin:)


38 minutes ago, wimvb said:

...

So true--I definitely think my previous attempts were pushing too hard--trying to get the data to appear beyond what it was capable of.  There are only 11.5 hours, not 29, and at F7.7, not F3-4.  Is the last one as good as it can be?  Probably not, but I do think it's the......version that adheres most closely to our general philosophy of being true to the data and leaving a bit on the table.  The others hurt my eyes.  This one I can stomach (if that isn't a ringing endorsement, I don't know what is:icon_biggrin:).  And now that I look at it on this page--it too has been pushed too far, no doubt.  It's a very fine and nebulous line (no pun intended).

Rodd


2 hours ago, Rodd said:

...

That's a lovely version Rodd. Amazing what fresh eyes can see. 

