
The "No EQ" DSO Challenge!


JGM1971


6 minutes ago, Nigel G said:

Now with darks and flats my theory is... DSS makes a master dark frame from many frames and subtracts it from your light frames. The trouble is each light frame has moved slightly from the last one, so the camera noise is not in the same place. Taking loads of darks doesn't seem to be profitable to my mind; to be honest, whether I use 15 or 50 it doesn't seem to make a difference. Others will disagree with me, but that's the unknown. Flats too: a master flat is produced by DSS and its effects are removed from the final image, but the same thing has happened, your lights have rotated and the lens effects have rotated too, so I use just 12 to 15 flats with the same effect that 50 flats has. They are worth taking though.

Nige.

Hang on, I need to get my head around this :happy3:. When you take darks, flats and biases, they will always be matched to the sensor, pixel by pixel. Field rotation doesn't come into it. If the remaining noise in each pixel is truly random it shouldn't matter which pixels are stacked together, so this shouldn't be affected by field rotation. After all, with EQ mounts it is common practice to 'dither' and it doesn't affect the background noise. As I see it, the problem is that there will be some hot pixels which are not random noise, and as they are stacked with other rotated frames they will leave a trail of hot pixels, unless they are first removed by using the dark frames, or by using kappa-sigma clipping in the stacking.

Grief, I can't keep up with all the posts during the time it's taken me to write this! Please forgive any overlap.

Ian
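To make the pixel-by-pixel point concrete, here's a minimal numpy sketch of the calibration being discussed. All the array values are invented stand-ins for real subs, and DSS's real pipeline does more (bias handling, hot pixel detection, different combination modes), but the core arithmetic is just this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real FITS subs (values are illustrative only).
darks = rng.normal(100, 5, size=(20, 64, 64))    # 20 dark subs
flats = rng.normal(1000, 20, size=(15, 64, 64))  # 15 flat subs
light = rng.normal(500, 10, size=(64, 64))       # one light sub

# Master frames are per-pixel combinations of the sub stacks
# (DSS offers median, average and kappa-sigma modes for this step).
master_dark = np.median(darks, axis=0)
master_flat = np.median(flats, axis=0)

# Calibrate: subtract the master dark, divide by the normalised master flat.
calibrated = (light - master_dark) / (master_flat / master_flat.mean())
```

Note that everything here happens in sensor coordinates, which is why calibration is done before any alignment or rotation of the lights.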


2 minutes ago, The Admiral said:

Hang on, I need to get my head around this :happy3:. When you take darks, flats and biases, they will always be matched to the sensor, pixel by pixel. Field rotation doesn't come into it. If the remaining noise in each pixel is truly random it shouldn't matter which pixels are stacked together, so this shouldn't be affected by field rotation. After all, with EQ mounts it is common practice to 'dither' and it doesn't affect the background noise. As I see it, the problem is that there will be some hot pixels which are not random noise, and as they are stacked with other rotated frames they will leave a trail of hot pixels, unless they are first removed by using the dark frames, or by using kappa-sigma clipping in the stacking.

Grief, I can't keep up with all the posts during the time it's taken me to write this! Please forgive any overlap.

Ian

Actually you've just reminded me of something Olly said in another post relating to unguided imaging, which benefits from a natural dither due to field rotation and Earth's rotation. We track to avoid the latter, but given our mounts' tracking quality we probably still get a little of that component too. In theory, hot pixels should therefore move across the image in relation to the target and be rejected by the integration as outliers.
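For what it's worth, the rejection being described can be sketched in a few lines of numpy. This is the generic kappa-sigma idea, not DSS's actual implementation: a hot pixel that lands on a different stack position in each sub is a lone outlier at each position and gets clipped.

```python
import numpy as np

def kappa_sigma_stack(stack, kappa=2.5, iters=3):
    """Per-pixel kappa-sigma clipped mean: values more than kappa*sigma
    from the per-pixel mean are rejected, then the survivors are averaged."""
    stack = np.asarray(stack, dtype=float)
    mask = np.ones(stack.shape, dtype=bool)
    for _ in range(iters):
        data = np.where(mask, stack, np.nan)
        mean = np.nanmean(data, axis=0)
        std = np.nanstd(data, axis=0)
        mask &= np.abs(stack - mean) <= kappa * std + 1e-12
    return np.nanmean(np.where(mask, stack, np.nan), axis=0)

# Invented demo data: a hot pixel that appears on only one (dithered) sub
# is an outlier at that stack position and gets rejected.
rng = np.random.default_rng(1)
subs = rng.normal(100, 2, size=(30, 5, 5))
subs[0, 2, 2] = 5000.0  # hot pixel on one sub only
result = kappa_sigma_stack(subs)
```

A plain average of the same stack would leave a visible bump at that pixel; the clipped mean does not.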


7 minutes ago, Filroden said:

Actually you've just reminded me of something Olly said in another post relating to unguided imaging, which benefits from a natural dither due to field rotation and Earth's rotation. We track to avoid the latter, but given our mounts' tracking quality we probably still get a little of that component too. In theory, hot pixels should therefore move across the image in relation to the target and be rejected by the integration as outliers.

Indeed Ken. It seems to me that the tracking ability of our mounts (or perhaps, slight lack of ability) means that we may be doing the equivalent of dither (is it random?), so all we should need is a defect map and a clipping algorithm in the stacking. I'm none too sure of the exact procedure though. I think that now I have AA I could be in a position to create a defect map. I know Olly doesn't recommend darks for DSLRs, but rather uses the dither/reject approach.

Does your cooled sensor need darks?

Ian


3 minutes ago, The Admiral said:

Does your cooled sensor need darks?

I think it's better to correct with darks if you can match them to the lights. Given I can control my chip temperature to within 0.1°C, I would always use them. I think it's just for uncooled cameras (whether DSLR or even CCD) where compromises need to be found.


13 hours ago, parallaxerr said:

Exposure time vs CA?

Hi. I've an affordable refractor too and have managed to tame the CA [1]. The cure is a UV/IR cut filter (to help the inadequate in-camera filter and prevent those wavelengths recording as visible) combined with deconvolution of the blue channel so as to push the blue back to the central star. The method is unfolding here... HTH.

[1] Which IMHO is infinitely better than having hideous spikes protruding from every star!
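Out of interest, the blue-channel deconvolution idea can be illustrated with textbook Richardson-Lucy in plain numpy. This is only a sketch of the general algorithm on invented data, not the specific tool or settings the poster used; real software adds regularisation and noise handling:

```python
import numpy as np

def conv_same(img, kernel):
    """Circular same-size convolution via FFT (adequate for this sketch)."""
    kh, kw = kernel.shape
    padded = np.zeros_like(img)
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

def richardson_lucy(image, psf, iters=25):
    """Classic Richardson-Lucy iteration: multiplicatively update the
    estimate by the back-projected ratio of observed to re-blurred data."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        blurred = conv_same(estimate, psf)
        estimate = estimate * conv_same(image / (blurred + 1e-12), psf_flip)
    return estimate

# Simulate a blue channel in which a star has been smeared (a Gaussian PSF
# standing in for the CA blur):
y, x = np.mgrid[:5, :5] - 2.0
psf = np.exp(-(y**2 + x**2) / (2 * 1.5**2))
psf /= psf.sum()

star = np.ones((32, 32))   # faint background
star[16, 16] = 100.0       # point source
blurred_blue = conv_same(star, psf)

sharpened = richardson_lucy(blurred_blue, psf)
```

The iterations concentrate the smeared blue flux back towards the stellar core, which is exactly the "push the blue back to the central star" effect.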


31 minutes ago, The Admiral said:

The 'darks' debate has been around for some time. If you are interested, here's a thread that might set you thinking https://stargazerslounge.com/topic/223975-dslr-imaging-do-you-really-need-dark-frames/?page=1

Ian

For Ken's cooled camera take darks. For the rest of us without that capability use the time to take more light frames instead :-)

Cheers,

Steve


7 minutes ago, SteveNickolls said:

For Ken's cooled camera take darks. For the rest of us without that capability use the time to take more light frames instead :-)

Cheers,

Steve

I absolutely agree Steve, but I get the feeling that one needs to replace them with something else. I'm coming to the conclusion that no-one (or at least very few) really knows the answer to this. A more recent thread is also worth a read https://stargazerslounge.com/topic/276824-dithering-vs-darks-dslr-how-many-darks-anyway/

It seems the accepted approach is to use the bias as darks and use a rejection mode (e.g. kappa-sigma clipping) for stacking. A bad pixel map is also referred to. More reading is needed I think.

Ian


1 minute ago, alacant said:

Flat and bias frames? HTH

Yes, or flat and flat-darks. But just not using darks may not be sufficient. As I say, you'd need to stack with a rejection algorithm, and possibly also a bad pixel map, in order to replicate the other function of darks, removing other defects, though I'm not clear on this.

Ian
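A minimal sketch of what a defect (bad pixel) map plus repair step might look like: hot pixels are flagged from a master dark and patched with a local median. The function names and thresholds here are made up for illustration; dedicated software does this more carefully:

```python
import numpy as np

def build_defect_map(master_dark, kappa=5.0):
    """Flag pixels whose dark signal sits far above the frame's typical
    level, using a robust sigma from the median absolute deviation."""
    med = np.median(master_dark)
    sigma = 1.4826 * np.median(np.abs(master_dark - med)) + 1e-12
    return master_dark > med + kappa * sigma

def repair(light, defect_map):
    """Replace each flagged pixel with the median of its unflagged
    3x3 neighbours."""
    fixed = light.astype(float).copy()
    for y, x in zip(*np.nonzero(defect_map)):
        y0, y1 = max(y - 1, 0), min(y + 2, light.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, light.shape[1])
        patch = light[y0:y1, x0:x1][~defect_map[y0:y1, x0:x1]]
        fixed[y, x] = np.median(patch)
    return fixed

# Invented data: one hot pixel that shows up in both darks and lights.
rng = np.random.default_rng(2)
master_dark = rng.normal(50, 3, size=(64, 64))
master_dark[5, 5] = 4000.0
light = rng.normal(200, 5, size=(64, 64))
light[5, 5] = 4000.0

defects = build_defect_map(master_dark)
repaired = repair(light, defects)
```

Because the map lives in sensor coordinates, the repair happens before alignment, just like dark subtraction would.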


6 hours ago, Filroden said:

I wonder if I can blend the SCT image into this?

Well, though the blend is rough (I really need to learn Photoshop), here's a combination of my two images of M42 taken with the 80mm f5 refractor and the 235mm f10 SCT (may have been reduced to f6.7 but I can't remember) with some tweaks in Lightroom (which is much easier to use!). The Trapezium is now resolved fully :)

M042_20161126_v2.jpg


3 hours ago, The Admiral said:

It seems the accepted approach is to use the bias as darks and use a rejection mode (e.g. kappa-sigma clipping) for stacking. A bad pixel map is also referred to. More reading is needed I think.

We already use bias frames and let DSS apply kappa-sigma clipping, but a bad pixel map might be a learning curve step too far, at least for lil' old Luddite me who likes to embrace the KISS principle. I wish you all well of course with this and I will read the thread in case it is something more I can achieve :-)

Best wishes,
Steve


34 minutes ago, SteveNickolls said:

Synscan does the nudging pretty well all by itself

Unfortunately it always nudges in the same direction, hence the importance of moving randomly, AKA dither. HTH.


53 minutes ago, SteveNickolls said:

We already use bias frames and let DSS apply kappa-sigma clipping, but a bad pixel map might be a learning curve step too far, at least for lil' old Luddite me who likes to embrace the KISS principle. I wish you all well of course with this and I will read the thread in case it is something more I can achieve :-)

Best wishes,
Steve

Have no fear Steve, if you stick with DSS I don't think there is the option :icon_biggrin:

Ian


Well, I've bitten the bullet and started processing the fourth of my targets from the other night - the additional subs on NGC1333. Watching 439 lights being aligned and registered prior to integration is painfully slow!

By my reckoning I should now have 160 mins of L, 44m of R, 54m of G and 52m of B.


23 minutes ago, alacant said:

Unfortunately it always nudges in the same direction, hence the importance of moving randomly, AKA dither. HTH.

If you look at the subs you can see that they don't necessarily move in the same direction, i.e. the image wanders, plus there is field rotation. And I think I read somewhere that dither just makes the rejection algorithm more effective, so I'm guessing that so long as different pixels are exposed it must help, even if not truly random.
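A toy numpy simulation of exactly this point: a hot pixel is fixed in sensor coordinates, so once dithered subs are aligned it lands on different image pixels and even a plain per-pixel median rejects it, whereas without dither it survives the stack. All the numbers here are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
n, size = 15, 32
sky = rng.normal(100, 2, size=(size, size))  # invented "true" sky

def capture(shift):
    """One sub: the sky shifted by the pointing offset, plus a hot pixel
    fixed in *sensor* coordinates at (10, 10)."""
    frame = np.roll(sky, shift, axis=(0, 1)) + rng.normal(0, 2, (size, size))
    frame[10, 10] = 4000.0
    return frame

# Without dither, the hot pixel sits on the same sky position in every sub,
# so even a median stack keeps it.
no_dither = np.median([capture((0, 0)) for _ in range(n)], axis=0)

# With random dither, aligning the subs moves the hot pixel around,
# and the per-pixel median rejects it as an outlier.
aligned = []
for _ in range(n):
    shift = tuple(int(s) for s in rng.integers(-3, 4, size=2))
    aligned.append(np.roll(capture(shift), (-shift[0], -shift[1]), axis=(0, 1)))
dithered = np.median(aligned, axis=0)
```

The offsets don't need to be truly random for this to work; they just need to be different often enough that the hot pixel is a minority at any one stack position.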

Ian


I had a really clear night and took the opportunity to add a lot more subs to this project. All told, I think this now stands at 163 min of L, 44 min of R, 54 min of B and 52 min of G. So here is version 5 of NGC1333.

All subs of either 30s or 45s at -20C, 300 gain and either 0 or 50 offset. All calibrated with bias, darks and flats and processed in PixInsight with some finishing touches in Lightroom.

large.NGC1333_20161127_v5.jpg

Version 4 for comparison:

large.NGC1333_20161120_v4.jpg


On 11/11/2016 at 17:27, Filroden said:

Likewise. I keep my tripod as low as possible. It kills my back during alignment but I think it's more stable. I also have vibration pads below the tips which speeds up how quickly it settles.

Ken, didn't you say that you'd realised that you could use your StarSense? Are you using it now and how have you found it?

Ian


12 hours ago, Filroden said:

I had a really clear night and took the opportunity to add a lot more subs to this project. All told, I think this now stands at 163 min of L, 44 min of R, 54 min of B and 52 min of G. So here is version 5 of NGC1333.

All subs of either 30s or 45s at -20C, 300 gain and either 0 or 50 offset. All calibrated with bias, darks and flats and processed in PixInsight with some finishing touches in Lightroom.

large.NGC1333_20161127_v5.jpg

 

You must have been about the only one with clear skies last night. Looking good for tonight down south.

You're facing a challenging target here Ken, and your dogged persistence is paying off I'd say. The dark lanes look rather clearer on my tablet screen than they do on my calibrated monitor :hmh:

Ian


I had to take a day off posting in the thread yesterday after my brain melt on Saturday night! The pace that this thread moves at can be really hard to keep up with at times, so thanks to those who commented on my rather long post, it'd be a mission to try to quote them all so I'll summarise my response.

It seems my conclusions attract mixed responses, as expected. One thing I note though is that the modus operandi of individuals seems based purely on preference, whether it be with regard to calibration frames or target selection or whatever. Everyone here is producing excellent results using some significantly different approaches and that makes me happy because it means I'm not doing anything hideously wrong!

I have gleaned further information from your responses which, whilst adding to the knowledge bank, also adds more...not confusion...but more to compute, perhaps.

One plus point that I'm really happy with is that Mr. Stark claims aperture is not all-ruling in AP, so I don't have to fret any longer over 66mm apo or 120mm achro!


56 minutes ago, The Admiral said:

Ken, didn't you say that you'd realised that you could use your StarSense? Are you using it now and how have you found it?

Ian

I am using it now and it's so much quicker. My target lands just off centre and a quick fine tune gets me spot on. It also means I make less noise for the neighbours!

