
Trying something new! My results so far.


assouptro


Hi Starjunkies! 

I have been playing with my Samyang 135 for a few months now and have been really enjoying the wide-field view it allows.

I thought I’d have a go at the Spaghetti Nebula (Sh2-240 or Simeis 147) in Taurus, just below Auriga.

It’s a dim target from my city skies, and the Oiii is going to need a lot more data to bring it out, but I can see the promise of something good with enough patience and some luck with the weather.

Here is the result so far...

[attached image: the result so far]

It’s just over 4 hours of Ha and Oiii, mainly 5 and 6 min subs.

Any tips on the Oiii subs or processing would be welcome.

Thanks for looking! 

Bryan 


Superb mate. A difficult target and it needs hours and hours of Ha and Oiii integration to show the details. The Samyang 135 is a super lens for framing this object - I have done it myself with mine - though I will return to it when it clears to add to my data from last year. I have seen people do twenty hours plus on this, which is a lot of investment in a target under UK skies. You've got lots of the additional faint nebulosity around the Spaghetti as well. Keep at it.


An heroic choice of target from a light polluted site and an even more heroic result! This is a target which will guzzle all the time you can give it so you're doing brilliantly for four hours already.

The combination of Astrodon filters and this lens is inspired because the one thing camera lenses can't do is hold down star sizes - but that's a thing Astrodon filters do to perfection. Great combination.

Olly


8 minutes ago, kirkster501 said:

Superb mate. A difficult target and it needs hours and hours of Ha and Oiii integration to show the details. The Samyang 135 is a super lens for framing this object - I have done it myself with mine - though I will return to it when it clears to add to my data from last year. I have seen people do twenty hours plus on this, which is a lot of investment in a target under UK skies. You've got lots of the additional faint nebulosity around the Spaghetti as well. Keep at it.

Thanks 😊 

It has been a labour of love so far. I have had to wait until 3am to frame it as it’s quite low until then, and I am looking forward to the next moonless clear skies to gather more Oiii data.

Can I ask what length of subs you have used for your Oiii?
 

Cheers 

Bryan


2 minutes ago, assouptro said:

Thanks 😊 

It has been a labour of love so far. I have had to wait until 3am to frame it as it’s quite low until then, and I am looking forward to the next moonless clear skies to gather more Oiii data.

Can I ask what length of subs you have used for your Oiii?
 

Cheers 

Bryan

I only ever do 300s subs.  I remain to be convinced that going longer makes any difference in my skies.


Just now, kirkster501 said:

I only ever do 300s subs.  I remain to be convinced that going longer makes any difference in my skies.

I never do less than 15 minutes in NB but my skies are dark. I generally prefer 30 minutes.  I'm not doubting your choice but what do you find with longer subs?

Olly


4 minutes ago, ollypenrice said:

An heroic choice of target from a light polluted site and an even more heroic result! This is a target which will guzzle all the time you can give it so you're doing brilliantly for four hours already.

The combination of Astrodon filters and this lens is inspired because the one thing camera lenses can't do is hold down star sizes - but that's a thing Astrodon filters do to perfection. Great combination.

Olly

Thanks Olly 😊

Kind words indeed!
I have had to work harder than ever to tease out the detail of this interesting deep sky object.
I can see it is going to dominate the next couple of months, if the clouds allow!
 

Thanks again for your encouragement 

 

Bryan 


3 minutes ago, ollypenrice said:

I never do less than 15 minutes in NB but my skies are dark. I generally prefer 30 minutes.  I'm not doubting your choice but what do you find with longer subs?

Olly

I normally do 20-30 min with my f/6 100ED and my f/6.5 TS quad, but with the Samyang at 2.8 (I normally stop down 2 clicks) I find sky glow and over-saturated pixels become a problem at about 10 min, so I have settled on 5-6 min. Having said that, I can’t see this object over-saturating my camera quickly!

Bryan 


1 hour ago, ollypenrice said:

I never do less than 15 minutes in NB but my skies are dark. I generally prefer 30 minutes.  I'm not doubting your choice but what do you find with longer subs?

Olly

I find, Olly, that there is little tangible benefit in the final stacked outcome to warrant longer subs for broadband, due to my Bortle 5 skies. The sky background noise starts to drown out the data, and that negates going over 300s for me. Narrowband can benefit from a bit more exposure. My empirical experience fits very well with what Robin Glover discusses here in his rational treatment of the subject. More exposures for sure, but each exposure does not need to be over 300s - again, in my Bortle 5 skies. Indeed, according to this, even 300s is too long at the Samyang's f/2-f/2.8 ratios, but I cannot bring myself to make each sub any shorter.

I think we have an emotional "more is better" response to exposures: if 300s is good then 600s must be better. As Dr Glover discusses here, this extra time is often wasted.

 

 

Edited by kirkster501

57 minutes ago, kirkster501 said:

I find, Olly, that there is little tangible benefit in the final stacked outcome to warrant longer subs for broadband, due to my Bortle 5 skies. The sky background noise starts to drown out the data, and that negates going over 300s for me. Narrowband can benefit from a bit more exposure. My empirical experience fits very well with what Robin Glover discusses here in his rational treatment of the subject. More exposures for sure, but each exposure does not need to be over 300s - again, in my Bortle 5 skies. Indeed, according to this, even 300s is too long at the Samyang's f/2-f/2.8 ratios, but I cannot bring myself to make each sub any shorter.

I think we have an emotional "more is better" response to exposures: if 300s is good then 600s must be better. As Dr Glover discusses here, this extra time is often wasted.

 

 

Very interesting. 
I need to polish up on my maths!!  👨‍🎓 


All you need to know is your sky's Bortle number (from an online tool) and the f-ratio you are imaging at. From those you can determine your optimal exposure. Going longer than the optimal exposure - a number that is surprisingly small in suburban skies - adds almost nothing to the final image and is wasted time. That's what Dr Glover shows in mathematical terms, and it matches my own experience FWIW. You'd do better getting more individual subs. Of course, under very dark skies it is worth going much longer.
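For what it's worth, the rule of thumb described here can be sketched in code. This is only an illustration, not Dr Glover's exact formula: it assumes you have measured the sky background electron rate per pixel from a test sub (that rate is what the Bortle number and f-ratio jointly determine), and it uses the commonly quoted "swamp factor" criterion that the sky signal should reach roughly 10x the read noise squared.

```python
# Illustrative sketch of the "swamp the read noise" exposure rule.
# Assumptions: sky_e_per_px_per_s is measured from a test sub; a swamp
# factor of ~10 keeps the read-noise penalty in the stack to a few percent.

def optimal_sub_exposure(read_noise_e, sky_e_per_px_per_s, swamp_factor=10.0):
    """Shortest sub length (seconds) at which sky shot noise swamps read noise."""
    return swamp_factor * read_noise_e ** 2 / sky_e_per_px_per_s

# A low-read-noise CMOS camera (1.5 e-) under a bright suburban sky (2 e-/px/s)
print(optimal_sub_exposure(1.5, 2.0))   # short subs suffice (~11 s)
# A 7 e- read-noise CCD under the same sky needs far longer subs (~245 s)
print(optimal_sub_exposure(7.0, 2.0))
```

The same arithmetic also shows why narrowband wants longer subs: the filter cuts the sky rate dramatically, so the optimal exposure grows in proportion.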


1 minute ago, kirkster501 said:

All you need to know is your sky's Bortle number (from an online tool) and the f-ratio you are imaging at. From those you can determine your optimal exposure. Going longer than the optimal exposure - a number that is surprisingly small in suburban skies - adds almost nothing to the final image and is wasted time. That's what Dr Glover shows in mathematical terms, and it matches my own experience FWIW. You'd do better getting more individual subs. Of course, under very dark skies it is worth going much longer.

Thanks 

I’ll give it a go! 
😊


Very informative video @kirkster501. Thanks for posting it! The table shown at the beginning reveals why @ollypenrice benefits from longer subexposures with his CCDs than us CMOS people usually prefer: longer exposures are needed with CCDs to suppress the effect of read noise. The recommended difference is almost 10-fold! At the end it also becomes clear that the most recent CMOS cameras showing the sudden magical drop in read noise when gain is increased (like the ASI2600 and ASI6200) work best with a gain just above that drop, actually increasing the dynamic range for the same total integration time. So for us with an ASI2600 or 6200, gain should be set to 100 - nice to know I have been doing the right thing!

 

[attached screenshot: the table from the video]


20 minutes ago, kirkster501 said:

All you need to know is your sky's Bortle number (from an online tool) and the f-ratio you are imaging at. From those you can determine your optimal exposure. Going longer than the optimal exposure - a number that is surprisingly small in suburban skies - adds almost nothing to the final image and is wasted time. That's what Dr Glover shows in mathematical terms, and it matches my own experience FWIW. You'd do better getting more individual subs. Of course, under very dark skies it is worth going much longer.

I'm afraid I'm not yet ready to believe this. As you state the formula, it ignores sampling rate, and surely this cannot be ignored? To give an extreme case, the formula would make no distinction between imaging in bin 1 and in bin 5. This seems dubious. I cannot see what the f-ratio, on its own, tells us. What we're really interested in is 'area of aperture per area of pixel' - this is what tells you how much light each pixel is getting. My objection applies not only to binning but to using larger pixels (which amounts to the same thing, all else being equal).

3 minutes ago, gorann said:

Very informative video @kirkster501. Thanks for posting it! The table shown at the beginning reveals why @ollypenrice benefits from longer subexposures with his CCDs than us CMOS people usually prefer: longer exposures are needed with CCDs to suppress the effect of read noise. The recommended difference is almost 10-fold! At the end it also becomes clear that the most recent CMOS cameras showing the sudden magical drop in read noise when gain is increased (like the ASI2600 and ASI6200) work best with a gain just above that drop, actually increasing the dynamic range for the same total integration time. So for us with an ASI2600 or 6200, gain should be set to 100 - nice to know I have been doing the right thing!

 

[attached screenshot: the table from the video]

I absolutely agree that CMOS chips don't need long exposures and for the reasons you state. I remain unconvinced that we can ignore pixel size but I'm open to persuasion...

Olly


11 minutes ago, ollypenrice said:

I absolutely agree that CMOS chips don't need long exposures and for the reasons you state. I remain unconvinced that we can ignore pixel size but I'm open to persuasion...

We can't ignore pixel size or hardware binning - but it turns out that software binning does not change things.

Let's take 2x2 software binning as an example.

Adding together 4 pixels (summing their values) will increase the background sky signal by a factor of 4 - thus increasing sky noise by a factor of 2 (since noise is the square root of signal).

Adding together 4 pixels via software binning will increase read noise by a factor of 2 (same as stacking 4 pixels - the square root of 4 is two).

Both sky noise and read noise increase by the same factor - so their ratio remains unchanged.

However, this is not the case with hardware binning, where read noise remains unchanged while sky noise changes, because the sky signal per pixel changes.

It is not the case with pixel size either: small pixels require longer exposures than large pixels because they accumulate less sky signal and less sky noise, while read noise has nothing to do with pixel size (it is fixed per camera model).
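The ratio argument above is easy to check numerically. A minimal sketch, assuming a Poisson sky background and Gaussian read noise with purely illustrative values:

```python
# Check that 2x2 software binning (summing 4 pixels) scales sky noise and
# read noise by the same factor of 2, leaving their ratio unchanged.
import numpy as np

rng = np.random.default_rng(0)
n_groups = 1_000_000
sky_level, read_noise = 100.0, 5.0    # e- per pixel (illustrative values)

sky = rng.poisson(sky_level, (n_groups, 4)).astype(float)  # 4 pixels per 2x2 group
read = rng.normal(0.0, read_noise, (n_groups, 4))

ratio_single = sky[:, 0].std() / read[:, 0].std()          # per-pixel noise ratio
ratio_binned = sky.sum(axis=1).std() / read.sum(axis=1).std()

print(ratio_single, ratio_binned)   # both close to sqrt(100)/5 = 2.0
```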


1 hour ago, vlaiv said:

We can't ignore pixel size or hardware binning - but it turns out that software binning does not change things.

Let's take 2x2 software binning as an example.

Adding together 4 pixels (summing their values) will increase the background sky signal by a factor of 4 - thus increasing sky noise by a factor of 2 (since noise is the square root of signal).

Adding together 4 pixels via software binning will increase read noise by a factor of 2 (same as stacking 4 pixels - the square root of 4 is two).

Both sky noise and read noise increase by the same factor - so their ratio remains unchanged.

However, this is not the case with hardware binning, where read noise remains unchanged while sky noise changes, because the sky signal per pixel changes.

It is not the case with pixel size either: small pixels require longer exposures than large pixels because they accumulate less sky signal and less sky noise, while read noise has nothing to do with pixel size (it is fixed per camera model).

Agreed. However, the OP's signature lists two CCDs so the CMOS situation isn't relevant here. That's why I'm still to be convinced that the formula can work while ignoring the sampling rate.

Olly


1 hour ago, ollypenrice said:

Agreed. However, the OP's signature lists two CCDs so the CMOS situation isn't relevant here. That's why I'm still to be convinced that the formula can work while ignoring the sampling rate.

Olly

Not sure what the formula is.

The one I usually use is a ~x5 ratio of LP noise to read noise.


3 hours ago, vlaiv said:

We can't ignore pixel size or hardware binning - but it turns out that software binning does not change things.

Let's take 2x2 software binning as an example.

Adding together 4 pixels (summing their values) will increase the background sky signal by a factor of 4 - thus increasing sky noise by a factor of 2 (since noise is the square root of signal).

Adding together 4 pixels via software binning will increase read noise by a factor of 2 (same as stacking 4 pixels - the square root of 4 is two).

Both sky noise and read noise increase by the same factor - so their ratio remains unchanged.

However, this is not the case with hardware binning, where read noise remains unchanged while sky noise changes, because the sky signal per pixel changes.

It is not the case with pixel size either: small pixels require longer exposures than large pixels because they accumulate less sky signal and less sky noise, while read noise has nothing to do with pixel size (it is fixed per camera model).

Sorry for maybe taking this thread off topic, but are you saying, Vlaiv, that software binning with CMOS is useless but a good idea for CCD? Maybe I should keep my Atik460Ex that I was about to sell.


1 hour ago, gorann said:

Sorry for maybe taking this thread off topic, but are you saying, Vlaiv, that software binning with CMOS is useless but a good idea for CCD? Maybe I should keep my Atik460Ex that I was about to sell.

For what it’s worth...


 https://www.atik-cameras.com/news/binnning-the-differences-between-cmos-and-ccd/
 

Atik's explanation of the binning differences, CMOS v CCD

There are some very intelligent and helpful people on this site who are clued up on the maths and the theory of capturing photons in the most efficient and effective way.

I treat most of these equations, theories and rules (the ones I understand or take the time to digest) as helpful guidelines that sometimes aren't as critical in real-life situations. I have mainly favoured the "try it out and see what happens" rule, which is what got me into this crazy hobby in the first place!

Having said that, I truly appreciate all the information I have learned today; it may well influence my imaging sessions moving forward and result in improved images.

I am grateful! 

Gorann, keep the 460! It’s a superb workhorse and will always be there as a 2nd camera should you need one! 😊

Cheers 

Bryan 


1 hour ago, gorann said:

Sorry for maybe taking this thread off topic, but are you saying, Vlaiv, that software binning with CMOS is useless but a good idea for CCD? Maybe I should keep my Atik460Ex that I was about to sell.

Far from it!

Software binning is as useful as hardware binning in terms of SNR. Well, maybe not 100% the same - the reason is read noise. With hardware binning, pixels are actually joined to form one super pixel, which is read once. With software binning, pixels are joined to create one super pixel but each is read out regardless, since the joining is done in software and readout happens before it.

Exposure length has to do with read noise only. If read noise were 0, there would be no difference between 3600 one-second subs and one sub an hour long.

This is because all signals in a sub grow linearly with time. Target signal, LP sky background, thermal/dark current - all are linear functions of time. All except read noise, the only one that happens per sub, with each sub getting the same amount of it.

We determine our sub duration based on one condition: once some other significant source of noise is bigger than read noise, we can stop our exposure, as the impact of read noise becomes very small in the resulting stack. We usually use LP noise as this other noise source because:

1. It grows with time - the longer the sub, the larger the LP signal and hence the LP noise - so at some point it becomes large enough

2. We have cooled cameras, and in most cases thermal noise is much lower than LP noise

3. LP levels are fairly consistent for a given imaging site - unlike target shot noise. Sometimes we choose bright targets and sometimes faint ones (in fact, the latter is far more likely). In any case, we need a consistent metric that works even when imaging extremely faint targets (where shot noise is very small)

All of the above means that we really need to compare read noise to LP noise to determine the limit of our exposure.

Olly rightly noted that this ratio depends on pixel size - or rather, sampling resolution - and not just the f-ratio of the scope used.

There are different ways to change sampling resolution, but if the scope is fixed there are only three:

1. change pixel size - use a camera with larger pixels

2. use hardware binning, if your camera supports it

3. use software binning - all cameras support this, because it is done in software and does not really depend on the camera used to record the image.
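Option 3 is simple enough to show directly; a minimal sketch of 2x2 software binning of an image array (a sum here, though an average only differs by a constant scale):

```python
# 2x2 software binning: sum each 2x2 block of the image after readout.
import numpy as np

def software_bin2x2(img):
    """Sum each 2x2 block, trimming any odd final row/column."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

a = np.arange(16, dtype=float).reshape(4, 4)
print(software_bin2x2(a))   # [[10. 18.] [42. 50.]]
```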

I just noted that exposure length depends on cases 1 and 2 - simply because the LP signal level depends on sampling resolution, hence LP noise depends on sampling resolution, and finally the ratio of LP noise to read noise depends on sampling resolution.

It does not, however, depend on point 3.

This is due to the way software binning works. Imagine an image consisting only of read noise - a bias frame where the bias/offset signal is a perfect 0 and only random read noise remains.

Each pixel has some read noise value N.

We add 4 adjacent pixels together - in this case we are just adding 4 lots of read noise N.

Noise adds in quadrature - so we will have sqrt(N^2 + N^2 + N^2 + N^2) = sqrt(4 * N^2) = N * sqrt(4) = 2 * N

The noise of the resulting binned pixel is twice the noise of the individual pixels. We have to take this into account when binning - the read noise of a software-binned pixel is twice as large (or 3 times if binning 3x3, 4 times if binning 4x4, etc.).

Now let's see what happens if we have 4 pixels that recorded sky background with some value M (M is essentially equal across a 2x2 group, since the sky background does not change that fast even when there is a gradient).

The resulting pixel will have 4*M as its signal. But what will the noise associated with that signal be? It will be sqrt(4*M) = 2*sqrt(M).

The original pixels had sky background M and hence sky background noise of sqrt(M).

The binned pixel has sky background 4*M and associated LP noise of 2*sqrt(M).

In both cases - read noise and sky background - bin x2 resulted in a noise increase by a factor of 2. The ratio of these noise magnitudes remains the same before and after software binning:

sqrt(M) / N = (2 * sqrt(M)) / (2 * N)

This does not mean that software binning does not work - it simply means the following:

If you have your setup and have calculated/measured that a 5-minute exposure is good for it:

1. If you can hardware bin and decide to, you need to recalculate the exposure length, as it will be different.

2. If you software bin your data, there is no need to change the exposure length - if 5 minutes was good for single pixels, 5 minutes will be good for software bin x2 pixels (and x3 and x4).

Hope I have explained it better now and not created additional confusion.


20 minutes ago, ollypenrice said:

Vlaiv, regarding CMOS and software binning, wouldn't this average out the residual noise of each pixel in the group of four?

Olly

I am sure Vlaiv will answer this better, but it's a tricky definition. Technically you have not changed the signal-to-noise ratio - after all, the same number of photons are collected with the same amount of noise, so that is fixed. You have done something along the lines of decreasing the perceived noise at the expense of resolution, which, depending on the balance, will or will not result in an increase in image quality.

Adam


23 minutes ago, ollypenrice said:

Vlaiv, regarding CMOS and software binning, wouldn't this average out the residual noise of each pixel in the group of four?

Olly

Not sure what you are asking?

Software binning can in most cases be viewed as stacking - you take 4 pixel values and average them, which improves SNR by a factor of 2 (the square root of the number of samples averaged). In most cases the signal in those 4 pixels is the same (or very close in value).


3 minutes ago, Adam J said:

I am sure Vlaiv will answer this better, but it's a tricky definition. Technically you have not changed the signal-to-noise ratio - after all, the same number of photons are collected with the same amount of noise, so that is fixed. You have done something along the lines of decreasing the perceived noise at the expense of resolution, which, depending on the balance, will or will not result in an increase in image quality.

Adam

No - it is an actual improvement in SNR - as legitimate as hardware binning.

Software binning differs from hardware binning in only one thing - the level of read noise. With hardware binning it stays the same, while with software binning it increases by the bin factor: for bin 2x2 read noise increases by a factor of 2, for bin 3x3 it is 3 times higher, etc.

With Poisson noise there is no difference between these two cases:

- One pixel captures 4 times as many photons

- 4 pixels capture regular number of photons and then are summed.

Let's do the maths. A single pixel captures N photons. Its SNR is N / sqrt(N) = sqrt(N) (noise is the square root of the signal).

Now we increase surface of pixel by factor of 4 and we capture 4*N photons.

What is SNR now? It is again sqrt(4*N) = 2*sqrt(N).

Capturing 4 times as many photons yields SNR that is twice as large.

Now we do second case.

We have 4 pixels, each with an SNR of N / sqrt(N) (numerator is the signal, denominator the noise associated with that signal).

Let's add those 4 together to see what SNR we are going to get.

Signal adds regularly, so we have N + N + N + N = 4*N

Noise adds in quadrature so we will have sqrt ( (sqrt(N))^2 + (sqrt(N))^2 + (sqrt(N))^2 + (sqrt(N))^2) = sqrt( N + N + N + N) = sqrt ( 4 * N) = 2 * sqrt(N)

New signal to noise ratio is therefore 4 * N / (2 * sqrt(N)) = 2 * N / sqrt(N)

Signal to noise ratio went from N / sqrt(N)  to 2 * N / sqrt(N) - it increased by factor of two.

It does not matter whether you see the binned pixel as having 4x the surface area and capturing 4x more photons, or as a simple sum of 4 regular pixel values. The SNR improvement (in all quantities that follow a Poisson distribution - target signal, sky signal and dark current) will be the same.

Read noise is the only exception to this, and the only difference between software and hardware binning.
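The arithmetic above is easy to confirm with a quick Monte Carlo; a sketch with an illustrative photon count:

```python
# Verify that summing 4 Poisson pixels doubles the SNR, matching a single
# pixel that captures 4x the photons.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100.0, 1_000_000

single = rng.poisson(N, trials)                    # one pixel, N photons
binned = rng.poisson(N, (trials, 4)).sum(axis=1)   # 4 pixels summed
big_pixel = rng.poisson(4 * N, trials)             # one pixel, 4N photons

def snr(x):
    return x.mean() / x.std()

print(snr(single), snr(binned), snr(big_pixel))    # ~10, ~20, ~20
```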

 

