
NGC7000 and IC5070 - OSC


Richard_


This is the first "proper" image I've got from my recently purchased EQ6-R mount. Coming from the Star Adventurer, I'm amazed at how quickly I'm able to find targets in the sky and how many subs I'm able to keep from each session. This image shows 10.5 hours of integration on the North America nebula (NGC7000) and the Pelican nebula (IC5070) using the RedCat 51 telescope. It's the first time I've played around with the Luminance channel in PixInsight, and I'm thoroughly pleased with the level of noise (or lack thereof, I suppose!) in my final image without it looking too plasticky. More integration time definitely helps!

I'm still fairly new to this hobby, so all constructive criticism is appreciated and welcomed. All the details are provided below, along with the workflow I used in PixInsight. If there's anything which looks strange or out of sequence, please let me know! 😁

My next step is to try and figure out how to make a quasi-HOO palette version of this image as I like the contrast and depth this offers. I've seen a couple of tutorials online so I'm looking forward to trying them out!

Richard_

 

EQUIPMENT

Camera: ASI533MC-Pro

Telescope: RedCat51

Mount: EQ6-R

Guide Scope/Camera: WO 32mm/ASI120MM Mini

Filter: Optolong l-eXtreme

Peripherals: ASI Air Pro, TP-link WiFi extender, Dew Heaters 

 

ACQUISITION DETAILS

Gain: 100 (unity)

Sensor temp: -10°C

Subs: 211 x 180s = 10.5 hours

Darks: 50

Flats: 30

Dark Flats: 50

Dates: 25th July 2021 | 22nd August 2021 | 25th August 2021

Area: Northern Hemisphere, Bortle 5 zone

PROCESSING DETAILS (all PixInsight)

  • WBPP 2.0 script, checking the final stack as a "preview" to confirm calibration was performed correctly
  • Take the debayered subs and use SubframeSelector to filter out bad subs (an example approval expression is sketched after this list)
  • Register and Drizzle stack the best subs
  • Crop the stacked image and perform Dynamic Background Extraction, Background Neutralisation and Colour Calibration
  • Extract the Luminance channel (CIE L*a*b), leaving the RGB image to one side for now (a PixelMath sketch of this step is also below)
  • Luminance channel
    • Execute EZ Deconvolution (in all honesty, I couldn't really tell the difference after this process step)
    • MaskedStretch + HistogramTransformation
    • TGVDenoise
    • Gentle HistogramTransformation
    • Save image
  • RGB image
    • HistogramTransformation
    • TGVDenoise
    • Gentle HistogramTransformation
    • Apply the Luminance channel as a mask
    • CurvesTransformation to improve saturation of colours
    • Remove the mask, LRGB combine to add the Luminance layer back in
  • LRGB image
    • CurvesTransformation on luminance layer
    • SCNR (green, 1.0)
    • DarkStructureEnhance
    • Save image
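
For anyone following along in PixInsight, here are two rough sketches of the steps flagged above (the identifiers and threshold values are illustrative placeholders rather than copies of my actual settings):

SubframeSelector – an approval expression along these lines will reject the obviously poor frames; tune the numbers to your own data and image scale:

FWHM < 3.5 && Eccentricity < 0.65 && SNRWeight > 10

Luminance extraction – I did this via the CIE L*a*b route, but a rough PixelMath equivalent (assuming your build provides the CIEL() function and the calibrated stack is called rgb_master) is simply:

CIEL(rgb_master)

executed with "Create new image" enabled and the output colour space set to greyscale.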

10hr30min Pelican and North America LRGB Final.jpg


Nice image and well done.

One tip would be to play a little with curves to bring up the saturation and contrast, to make the deep reds pop a bit more.

Also, stick a black border around it; it makes the image pop more.

Excellent image though: good data, and nice and smooth.


Thanks for the advice!

The tip on the black border sounds good, I'll give that a try next time.

On the contrast/saturation side of things, I need to get a bit more experience on this as I'm never sure where I should draw the line. I will look into this when I do my HOO attempt. 


Great image - I took a look at NGC 7000 the other night with a slightly longer focal length telescope (478 mm) and an L-eNhance filter, but only got 2 hours' worth of data. The 10+ hours of data and the care you have taken in processing make it a great image. 


Thanks Ian, that's much appreciated! 

Check out the same image but with 2hrs of integration time in my "welcome" post below. 

If you open the 2hr image in the link below and the 10hr image above, and compare them full screen, you can clearly see the difference in noise levels without zooming in a huge amount. I can definitely recommend getting more integration time to naturally reduce the noise in the image. 
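
As a rough rule of thumb (assuming the subs are broadly shot-noise limited), SNR grows with the square root of total integration time, so going from 2 hours to 10.5 hours works out at roughly sqrt(10.5/2) ≈ 2.3x the signal-to-noise, which lines up with how much smoother the background looks.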

Please share your result too. I'm waiting on a new telescope (focal length 550mm) so the field of view should be quite similar to yours. 

 



Thanks Richard - the recent image I took can be seen at: https://drive.google.com/file/d/1zt3eszt9nfapXpEzjNONWBP_0K4MSNSQ/view?usp=sharing

After stacking in DSS, I did some basic processing in Nebulosity, but may look into getting Pixinsight


2 hours ago, Catanonia said:

play a little with curves to bring up the saturation and contrast a little to make the deep reads pop a little more.

@Richard_ I tried this, and hope you don't mind.  You can certainly bring out details in the Pelican's neck and the Cygnus Wall.

...but is this pushing things too far?

Tony

10hr30minPelican_clone2.jpg

 


18 minutes ago, AMcD said:

Great image.  It is also incredibly helpful to those of us still getting to grips with PixInsight that you set out your processing workflow in such detail.👌

Thanks pal! I like to include the processing details for this reason: it helps PixInsight beginners become familiar with which processing tools to use and kicks off their own research into using those tools correctly. 

On the other hand, if I've done something completely wrong, hopefully the more experienced people will be able to chip in and correct any mistakes, so it's a win-win in my books! 

 

7 minutes ago, iantaylor2uk said:

Thanks Richard - the recent image I took can be seen at: https://drive.google.com/file/d/1zt3eszt9nfapXpEzjNONWBP_0K4MSNSQ/view?usp=sharing

After stacking in DSS, I did some basic processing in Nebulosity, but may look into getting Pixinsight

Nicely done, Ian! You've framed it quite well and you can see the main features. Keep adding integration time and it should come out nicely.

I haven't used anything other than PixInsight so I can't compare it to other software, but there's a lot of source material and tutorials out there in different media (text-based forums/documentation or video-based "walk-throughs"). The videos offered by Adam Block on YouTube are fantastic. They are long-winded and require some commitment to watch, but he does a great job of explaining each process and how it works rather than just telling you what buttons to press. 


10 minutes ago, AKB said:

...but is this pushing things too far?

Wow, look at the detail popping out from the JPEG; imagine if I did this with the raw file! What steps did you take to achieve this? I'll try to recreate it on the raw XISF file.

Thanks Tony. 


11 minutes ago, Richard_ said:

What steps did you take to achieve this?

Yes, a very fair question!  And in line with this comment too:

50 minutes ago, AMcD said:

It is also incredibly helpful to those of us still getting to grips with PixInsight that you set out your processing workflow in such detail

Here's the history from PixInsight:

process_history.jpg

 

  • SCNR – I see you did this anyway, but what I see now is a bit blue?
  • Curves – to boost contrast and saturation
  • Local Histogram Equalisation – at three different kernel radii: 32, 64, 128, but only in a small amount, around 0.1
  • Unsharp mask – standard deviation of 2, amount around 0.6

As you say:

17 minutes ago, Richard_ said:

imagine if I did this with the raw file!

HTH

Tony


Excellent, thanks very much @AKB! That screenshot of the history is really useful.

I've used LHE before but completely forgot to use it this time around. I'll do a bit of reading up on it to see how it works, but will start with the settings you used.

Thanks again. 


  • 2 weeks later...

@Richard_ Really like the image, and thank you for outlining your processing steps.  Coincidentally, I imaged NGC 7000 the other night before seeing your thread, but I haven't processed it yet, so I'm going to give your workflow a go.  I'm using a SW 80ED and ZWO 294MC so I'm more zoomed in on the Cygnus Wall.  I only got about 5 1/2 hours and I don't have a filter.  Thinking of getting an L-eNhance, but I'm interested in your thoughts on the L-eXtreme.

Neil


Hi @Grifflin. Thanks for your comment! It'll be interesting to see how your processing turns out; do let me know if there's anything I should do differently.  When I've processed images with lower integration times and without a filter, I found it was easy to overdo the noise reduction, especially on the background. So I'd say expect to see a bit more noise in your background compared to the above; this is OK!

I opted for the L-eXtreme as it has a narrow bandpass on oxygen, whereas the L-eNhance has a wider bandpass which stretches out to H-beta (I think they both have the same/equivalent bandpass on H-alpha, so no significant difference there). From what I've read and seen in videos, that extra bandpass doesn't really provide better data on the majority of targets. I have no experience with or comparison to other narrowband filters, but I'm really happy with it: it filters out a lot of the background light pollution that is present if I don't use it, and it allows me to get good results even with a full moon out! 

The one main downside I can find is that this filter does produce halos around bright stars, which you can see in the lower left of my image. People have mentioned that you can remove these in post-processing in PixInsight etc., but I haven't tried this yet. Apparently the L-eNhance either doesn't produce these artifacts, or produces them to a lesser degree. I'd recommend having a quick read up yourself to see if that's the case!


Cracking image Lee! I've seen Lukomatico's tutorial but haven't found the time to use this approach just yet! Your data is very clean, but I'd expect that for 26 hours haha. At least this gives me an idea of what to aim for when I get around to it, cheers. 


  • 3 weeks later...

I finally got around to creating a HOO palette rendition using the same source stack as above. I followed the excellent tutorial provided by Lukomatico on processing HOO images and I'm really happy with the end result! I took the above feedback into consideration and managed to bring out more detail and saturation in my image without overdoing it (in my opinion, anyway). Up until now, I was never really happy with using this palette as I felt there was too much of a magenta cast, but Luke suggested a really cool tip: using the "Correct Magenta Stars" script to remove this from the nebula. A great idea, and I think it makes a huge difference.

What do you think?

 

10hr30min Pelican and North America HOO final.jpg


11 hours ago, oymd said:

Did you go by r*0.6 + g*0.4 in PixelMath, or did you change the values?

Thanks oymd!

Yes, I used this proportion as suggested in Luke's video. I didn't play around with the ratio; I think as long as you can stretch the histogram of the Green channel to match the Red channel as closely as possible, the provided ratio should work fine. 
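
For anyone wanting to try this, the basic dual-band split in PixelMath looks something like the following (a generic sketch rather than Luke's exact recipe, and master_OSC is just a placeholder name for the colour-calibrated OSC stack). Run each expression with "Create new image" enabled, naming the outputs Ha and OIII:

Ha:   master_OSC[0]
OIII: 0.5*master_OSC[1] + 0.5*master_OSC[2]

In other words, H-alpha comes almost entirely from the red channel, while OIII is taken as an average of green and blue. The HOO image is then assembled (via ChannelCombination or another PixelMath run) as R = Ha, G = OIII, B = OIII, which is where the stretch-to-match step mentioned above comes in.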

