
Trying something new! My results so far.


assouptro


2 minutes ago, vlaiv said:

No - it is an actual improvement in SNR - just as legitimate as hardware binning

 

I am aware of the maths and agree with it 100%; I just disagree with your definition of SNR within this context. In my view the signal and the noise contained within the image are identical, as they always have been. I would say that what is going on is more like some sort of processing gain / lateral integration.

Adam

 


3 minutes ago, Adam J said:

In my view the signal and the noise contained within the image are identical as they always have been.

Here then is one thing for you to try and see:

Take one of your datasets - align all the subs and normalize them, but leave one sub aside after that. Stack all the other subs - use average stacking (not addition - we want to preserve signal level).

Now subtract that sub from the stack. The result should be mostly the noise in that single sub (given the number of subs we regularly use in our stacks, the noise contribution of all the other subs will be minimal).

Measure the standard deviation of the result - this will give you the average noise level in that single sub.

Now bin both the single sub and the stack 2x2 in software. Subtract again and measure the standard deviation again.

Explain how we just measured 2x less noise with the binned versions than the originals if binning "inherently keeps both signal and noise the same" :D
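For anyone wanting to try this without digging out real data, here is a minimal numpy sketch of the experiment with synthetic subs (a flat "signal" plus Gaussian noise - all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0   # arbitrary flat "nebula" flux per pixel
sigma = 10.0          # per-sub noise level
n_subs = 60

# Synthetic, pre-aligned, normalized subs: constant signal plus Gaussian noise
subs = true_signal + rng.normal(0.0, sigma, size=(n_subs, 512, 512))

single = subs[0]               # the sub set aside
stack = subs[1:].mean(axis=0)  # average-stack the rest (preserves signal level)

# The residual is dominated by the noise of the single sub
noise_unbinned = (single - stack).std()

def bin2x2(img):
    """Software-bin by averaging each 2x2 block of pixels."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

noise_binned = (bin2x2(single) - bin2x2(stack)).std()

ratio = noise_unbinned / noise_binned
print(round(ratio, 2))  # close to 2.0: binning halved the measured noise
```

The ratio comes out near 2 because averaging four pixels cuts uncorrelated noise by a factor of sqrt(4).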

 


The thing is that, especially for those CMOS sensors optimized for planetary imaging, read noise is so low it hardly affects the image. Other noise sources are orders of magnitude larger, especially sky background in Bortle 4 or 5 skies (which is considerably larger than even the dark current of my non-cooled ASI183MC).

 

Back to the original image: really nice result, inspirational, even. I am now curious what my ASI183MC with Optolong L-eNhance filter might do combined with my Sigma 50-100mm F/1.8 zoom, which really surprised me with its performance at F/1.8 on comet NEOWISE. Even with the larger sensor of the EOS 80D, stars were pretty good in the corners. Might be worth having a go.


3 minutes ago, michael.h.f.wilkinson said:

Back to the original image: really nice result, inspirational, even. I am now curious what my ASI183MC with Optolong L-eNhance filter might do combined with my Sigma 50-100mm F/1.8 zoom, which really surprised me with its performance at F/1.8 on comet NEOWISE. Even with the larger sensor of the EOS 80D, stars were pretty good in the corners. Might be worth having a go.

Completely agree.

Can't wait to receive my Samyang 85mm F/1.4, hopefully tomorrow, to see what it can do


25 minutes ago, vlaiv said:

Here then is one thing for you to try and see:

Take one of your datasets - align all the subs and normalize them, but leave one sub aside after that. Stack all the other subs - use average stacking (not addition - we want to preserve signal level).

Now subtract that sub from the stack. The result should be mostly the noise in that single sub (given the number of subs we regularly use in our stacks, the noise contribution of all the other subs will be minimal).

Measure the standard deviation of the result - this will give you the average noise level in that single sub.

Now bin both the single sub and the stack 2x2 in software. Subtract again and measure the standard deviation again.

Explain how we just measured 2x less noise with the binned versions than the originals if binning "inherently keeps both signal and noise the same" :D

 

Like I say, I am not debating the maths.

The better way to think about it is that if you literally increase SNR, you increase the amount of extractable information within the image, and the best way to do that is, say, by stacking more images. If you bin a CCD 2x2 you will do the same, as the base noise per unit area of the chip is reduced. But if you bin a CMOS you are not gaining any information about the object you are imaging, as you are losing as much in reduced resolution as you gain in "SNR".

Now that might well produce a better-looking image, but it's a better-looking image that contains no additional information.

Adam


46 minutes ago, michael.h.f.wilkinson said:

I also have an older Carl Zeiss 85 mm F/1.4 (Contax mount), but to date have had better results with the Sigma

This Samyang lens that I purchased (at a very nice second-hand price, I might add) has a very interesting feature. It is a cinematic lens rather than a photographic one, which means it has a continuous aperture ring rather than one with "clicks". It is actually the T1.5 rather than the F/1.4 model.

In most cases fast lenses need to be stopped down somewhat, depending on whether a narrowband filter or an OSC camera is used. A continuous aperture lets me try F/1.6 and F/1.7, or perhaps F/2.2, and see which one is acceptable. With a regular lens one is limited to choices like F/2 and F/2.8 or similar, and nothing in between.

39 minutes ago, Adam J said:

Now that might well produce a better-looking image, but it's a better-looking image that contains no additional information.

What sort of additional information does regular stacking provide?

Do you think that the signal is somehow different in a single 1-minute sub compared to 60 stacked 1-minute subs?

Both images captured the same photon flux - if you measure the brightness of a patch of nebula you'll get the same answer, differing only in the amount of uncertainty / level of noise. It is not the signal that changed between the two - it is the level of noise polluting that signal.
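A toy numpy illustration of this point (a synthetic flat patch with photon shot noise; the flux value is invented): the measured brightness is the same for one sub and for the stack - only the scatter changes.

```python
import numpy as np

rng = np.random.default_rng(1)
flux = 50.0  # hypothetical mean photon count per pixel per 1-minute sub

# 60 one-minute subs of a flat patch, each with photon (shot) noise
subs = rng.poisson(flux, size=(60, 256, 256)).astype(float)

single = subs[0]
stack = subs.mean(axis=0)

# Both estimate the same patch brightness (~50 counts)...
brightness_single = single.mean()
brightness_stack = stack.mean()

# ...only the per-pixel uncertainty differs, by about sqrt(60)
noise_ratio = single.std() / stack.std()
print(brightness_single, brightness_stack, noise_ratio)
```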


39 minutes ago, vlaiv said:

This Samyang lens that I purchased (at a very nice second-hand price, I might add) has a very interesting feature. It is a cinematic lens rather than a photographic one, which means it has a continuous aperture ring rather than one with "clicks". It is actually the T1.5 rather than the F/1.4 model.

In most cases fast lenses need to be stopped down somewhat, depending on whether a narrowband filter or an OSC camera is used. A continuous aperture lets me try F/1.6 and F/1.7, or perhaps F/2.2, and see which one is acceptable. With a regular lens one is limited to choices like F/2 and F/2.8 or similar, and nothing in between.

What sort of additional information does regular stacking provide?

Do you think that the signal is somehow different in a single 1-minute sub compared to 60 stacked 1-minute subs?

Both images captured the same photon flux - if you measure the brightness of a patch of nebula you'll get the same answer, differing only in the amount of uncertainty / level of noise. It is not the signal that changed between the two - it is the level of noise polluting that signal.

From your response you are not understanding my argument on a conceptual level. I will take this as my failure to articulate it properly.

You will get more information with increased integration. But at any given integration you will not gain any information about the object you are imaging by binning the image in software. Detail that is being resolved at a pixel level is information. By software binning you lose that information - it's just gone - but you do gain other information as a result of the increase in "SNR" showing fainter objects in reduced detail; hence information within the image is literally conserved.

So increasing SNR by getting 4x the number of subs is not the same as increasing the SNR by binning 2x2 in terms of information within the image: in the first instance you will end up with more information and in the second you will not. That is because in the first instance you have increased the true SNR of the image, but in the second you have only increased the perceived image quality. Yet in your argument they end up with the same mathematically calculated SNR in both cases.

Taking this to an extreme: if you average every pixel in the image you will have a really very, very accurate idea of the average illumination of the sensor, but you will have no information at all on what it just took an image of. Hence you have exchanged one type of information for another type of information. But of course it's not a better image, so there is a balance between resolution and "SNR", and eventually lower resolution and better SNR will start to reduce image quality.

Have you increased the SNR of a noisy picture printed and hung on the wall of your living room by looking at it from twice the distance, so you can't resolve the noise anymore? Of course not, but that's the argument you are making. Now, does that image look better for standing further away? Yes.

Hence I will say again: your maths is, as always, impeccable, but it's the way SNR is being used in this context that I have an issue with. You are trying to counter a perceptual interpretation of how people use the term SNR within astro imaging with a mathematical argument.

Adam

 

 

Edited by Adam J

1 hour ago, Adam J said:

You will get more information with increased integration.

You will again have to explain this bit, since what you are saying does not add up.

First you talk about information as being SNR, then you use this definition:

1 hour ago, Adam J said:

Detail that is being resolved at a pixel level is information

Increased integration will not resolve finer detail at pixel level.

One 1s sub carries the same signal as one hour of integration, if we use the term signal to describe average photon flux. The only difference is the precision to which that photon flux is recorded - the level of noise.

Since longer integration does not change the photon flux being recorded, nor does it help resolve finer detail, I'm asking again - what sort of additional information does longer integration or stacking provide?

1 hour ago, Adam J said:

So increasing SNR by getting 4x the number of subs is not the same as increasing the SNR by binning 2x2 in terms of information within the image: in the first instance you will end up with more information and in the second you will not. That is because in the first instance you have increased the true SNR of the image, but in the second you have only increased the perceived image quality. Yet in your argument they end up with the same mathematically calculated SNR in both cases.

and

1 hour ago, Adam J said:

Hence I will say again: your maths is, as always, impeccable, but it's the way SNR is being used in this context that I have an issue with. You are trying to counter a perceptual interpretation of how people use the term SNR within astro imaging with a mathematical argument.

I now understand the issue. The issue is that SNR is an inherently mathematical thing, not defined by perceptual interpretation.

It is the ratio of two quantities - the above-mentioned flux intensity and the noise, where noise is the statistical deviation of measured values from the true flux intensity. This is the only definition of signal to noise ratio - and its name is given accordingly: signal to noise ratio.

There is only one way SNR can be used in this or any other context - the way it is defined and what it represents.

Does stacking improve SNR? Yes it does. Does binning improve SNR? Again, yes it does.

The same mathematical process is responsible for both.

Does this mean that one is a "true" SNR improvement and the other is somehow a fake SNR improvement? No, since they both conform to the same definition of what SNR is - signal to noise ratio.

Does binning lower detail in the image? Well, that depends. Was the original image oversampled? If so, then no - it can't reduce detail that was not there to begin with. But even if the image is properly sampled or undersampled to begin with, binning will still improve the signal to noise ratio, as the signal - the photon flux over a certain area, small or large pixel - is constant irrespective of the selected pixel size or the maths we do with the measured value. The signal is always the same, recorded on one sub or on 10 subs stacked. It is the noise that we manipulate by taking averages, and that is what improves SNR.
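That last paragraph is easy to check numerically. A small numpy sketch (a flat synthetic frame with made-up numbers): 2x2 software binning leaves the measured signal untouched and halves the measured noise.

```python
import numpy as np

rng = np.random.default_rng(2)
img = 100.0 + rng.normal(0.0, 10.0, size=(512, 512))  # signal 100, noise sigma 10

# 2x2 software bin: average each 2x2 block of pixels
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

signal_before, signal_after = img.mean(), binned.mean()  # identical by construction
noise_gain = img.std() / binned.std()                    # ~2: noise halved
print(signal_before, signal_after, noise_gain)
```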


2 hours ago, vlaiv said:

You will again have to explain this bit, since what you are saying does not add up.

First you talk about information as being SNR, then you use this definition:

1) Increased integration will not resolve finer detail at pixel level.

2) One 1s sub carries the same signal as one hour of integration, if we use the term signal to describe average photon flux. The only difference is the precision to which that photon flux is recorded - the level of noise.

3) Since longer integration does not change the photon flux being recorded, nor does it help resolve finer detail, I'm asking again - what sort of additional information does longer integration or stacking provide?

and

I now understand the issue. The issue is that SNR is an inherently mathematical thing, not defined by perceptual interpretation.

4) It is the ratio of two quantities - the above-mentioned flux intensity and the noise, where noise is the statistical deviation of measured values from the true flux intensity. This is the only definition of signal to noise ratio - and its name is given accordingly: signal to noise ratio.

5) There is only one way SNR can be used in this or any other context - the way it is defined and what it represents.

6) Does stacking improve SNR? Yes it does. Does binning improve SNR? Again, yes it does.

The same mathematical process is responsible for both.

Does this mean that one is a "true" SNR improvement and the other is somehow a fake SNR improvement? No, since they both conform to the same definition of what SNR is - signal to noise ratio.

Does binning lower detail in the image? Well, that depends. Was the original image oversampled? If so, then no - it can't reduce detail that was not there to begin with. But even if the image is properly sampled or undersampled to begin with, binning will still improve the signal to noise ratio, as the signal - the photon flux over a certain area, small or large pixel - is constant irrespective of the selected pixel size or the maths we do with the measured value. The signal is always the same, recorded on one sub or on 10 subs stacked. It is the noise that we manipulate by taking averages, and that is what improves SNR.

 

 

Forgive the numbering - I don't want to end up with multiple quotes.

1) No, actually I say information and SNR are not linked. Or at least that's what I wanted to say - I can't be bothered looking.

2) Increased integration will resolve finer detail, as it increases SNR, allowing faint signal (detail) to rise above the noise floor.

3) Longer integration improves your estimate of photon flux, but yes, it doesn't change the photon flux. But again, longer integration 100% does help resolve detail in the object being imaged, due to noise averaging out - or what are we all doing?

4) Yes, I know what it is and what it stands for lol

5) Yes

6) Yes, but the two processes can lead to the same SNR, yet one (increased subs) will always improve image quality and allow you to extract more information on the target being imaged. The other will result in the same SNR but no additional information being derived about the target; all you did was make a trade between two different measures of image quality - the underlying data didn't get any better.

So fundamentally they are two different processes in terms of the quality of the image produced. With one you are simply trading resolution for SNR; with the other you are improving SNR while keeping resolution constant. I would say stacking more images always produces a better quality image, but binning in my experience can be detrimental, especially when you have some areas with good signal and some with poor signal.

So yes, you can improve SNR by binning, but in the end the quality of the image you produce is still fundamentally limited by the SNR of the original stack; the SNR of a software-binned image is simply derivative of that, and not a primary measure of the quality of your data as it was prior to image processing. If all you really want to do is occlude the noise, you would do as well to maintain resolution and apply a Gaussian blur to the entire image - that really will increase measured SNR too, and so by that measure you will have done pretty much the same thing. But the problem is you have not done the same thing, have you? So what is the point of SNR as a measurement at this point? Hence the only meaningful place to measure SNR is directly after stacking, and talking about the SNR of a software-binned image is not necessarily helpful. The only place this falls over, as you say, is when very oversampled... but most people spend lots of time choosing camera and scope combinations for exactly this kind of reason.
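As a quick sanity check of that blur comparison, here is a sketch on synthetic data (using a crude 3x3 box blur as a stand-in for a Gaussian blur; all numbers invented): the blur does indeed raise the measured "SNR" even though no new data was captured.

```python
import numpy as np

rng = np.random.default_rng(3)
img = 100.0 + rng.normal(0.0, 10.0, size=(512, 512))  # flat signal + noise

# 3x3 box blur built from shifted copies (a crude stand-in for a Gaussian blur)
blurred = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

# Measured noise drops ~3x, so measured SNR rises - yet nothing new was captured
snr_gain = (blurred.mean() / blurred.std()) / (img.mean() / img.std())
print(snr_gain)
```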

In the end you are always limited by the SNR of the stacked image prior to processing; a high SNR means you have more headroom when processing the image. Anything you do to manipulate the image after that becomes a totally meaningless measure in terms of image quality.

So yes, there is only one definition and meaning of SNR, but you can increase it in a number of ways that result in different outcomes. So in a way the means by which you are increasing the SNR is more important than the raw measure itself. We don't talk about how noise-reduction algorithms increase SNR, do we? Or how a Gaussian blur increases SNR. Yet that is all binning is: blunt-force noise reduction that comes at a cost. Hence, I feel that the data you have to work with is the data you have to work with, and any increase in SNR derived from software manipulation of the image is purely academic and almost certainly meaningless as a measure of end-product quality.

I use SNR to inform me of things like how much more integration I need to see a significant improvement in my data, or to compare the performance of different cameras, sky conditions, filters, gain settings, sub lengths... etc. All of those are valid uses, but once you are into processing, software binning will not improve the quality of your data - that is now set. So I fundamentally do not consider SNR at all when choosing whether to software bin; it's not relevant. I just look at the image and decide if it made things better or worse, and often with long integrations the answer is worse. So I just do not see SNR as applied to software binning in the same way, and believe that you can't view SNR during processing in the same way as you view it during image capture.

I think you are looking at this in a very pure way - SNR is SNR is SNR - but I am also considering it in terms of how I make use of that measurement within my workflow, and it means less as a measure once capture is finished and you are into processing, as improvements in SNR do not always equal improvements in my image from that point onwards. So for me, SNR improvement after stacking is not the same thing as improving SNR before and during stacking. That's why I do not think of the SNR improvement from software binning as the same thing as an SNR improvement made during stacking or capture.

If you don't get my perspective on this then you don't get it; that's OK. But as I continue to point out, my point of view is not about the maths - I can do all those calculations myself; I have modelled it all in Excel etc. So, forgetting my inadequate attempts to explain myself above, I hope this better explains my thinking.

But perhaps we should move the discussion to another venue, as it's no longer relevant to the OP's thread. Hence my last post on this here.

Adam

 

Edited by Adam J

The science of this has gone over my head somewhat, guys. I was basing what I said on my own experience, allied to what Dr Glover says.

Back to the Samyang 135mm: I am considering using a step-down ring to lower the aperture to F2.8 (the sweet spot on my example) and leaving the lens's iris wide open, so as to minimise the slight star spikes. They are minimal on the 135mm anyway, but it would be good to remove them completely.


1 minute ago, kirkster501 said:

The science of this has gone over my head somewhat, guys. I was basing what I said on my own experience, allied to what Dr Glover says.

Back to the Samyang 135mm: I am considering using a step-down ring to lower the aperture to F2.8 (the sweet spot on my example) and leaving the lens's iris wide open, so as to minimise the slight star spikes. They are minimal on the 135mm anyway, but it would be good to remove them completely.

Now that’s piqued my interest!

Would you make one? Or can you buy one for the job?


You can buy them from Amazon for about £12 for a pack of different sizes. We need a step-down ring (or rings) to lower the aperture slightly, to what would be the diameter of the open lens aperture at F2.8 (or whatever you want to use). Look at the Samyang's internal thread diameter on the lens and make sure the kit you buy has a ring matching that diameter. Of course, if your example works fine at F2.0 - i.e. fully open - then there is no need to do this. However, with mine I find my corners are off at F2 and I need to stop it down to F2.8.
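For anyone picking a ring, the arithmetic is just diameter = focal length / f-ratio. A tiny worked example for the Samyang 135mm (values assumed; check your own lens):

```python
# Step-down ring sizing: the opening should match the clear-aperture
# diameter the lens would have at the target f-ratio
focal_length_mm = 135.0
target_f_ratio = 2.8

opening_mm = focal_length_mm / target_f_ratio
print(round(opening_mm, 1))  # ~48.2 mm - pick the ring in the kit closest to this
```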

 


^^^^ and the reason for this is to prevent the blades of the iris causing diffraction spikes. The Samyang iris design is pretty good (though not perfect) anyway. Some other lenses - like the Canon 50mm nifty-fifty - can be quite bad. So we leave them wide open and use a step-down ring to lower the aperture.


1 minute ago, kirkster501 said:

You can buy them from Amazon for about £12 for a pack of different sizes. We need a step-down ring (or rings) to lower the aperture slightly, to what would be the diameter of the open lens aperture at F2.8 (or whatever you want to use). Look at the Samyang's internal thread diameter on the lens and make sure the kit you buy has a ring matching that diameter. Of course, if your example works fine at F2.0 - i.e. fully open - then there is no need to do this. However, with mine I find my corners are off at F2 and I need to stop it down to F2.8.

 

Cheers for that.

I only used it once or twice at F2; I seem to remember having some strangely shaped stars when I did, so I favoured stepping down.

I’ll take a look at Amazon 😊

Bryan


3 minutes ago, kirkster501 said:

... and finally, just to add, I love this widefield AP. If, god forbid, I had to divest myself of my telescopes and obs, I'd just keep the Samyang and mount to continue with this type of AP, and a pair of large bins for visual.

I have fallen for it too! 
I keep thinking about when I should change my rig and try a different perspective, but there's so much to image atm that fills the FOV of this neat little lens, and it copes with wind so much better than some of my other options! 
😊


11 minutes ago, assouptro said:

Now that’s piqued my interest!

Would you make one? Or can you buy one for the job?

As Steve says, you can buy them, but they end up further and further from the lens as you screw them together. One of my robotic clients asked me to make a flat one to try on his camera lens, and he finds it works better. Making them is dead easy: the flat lid of a freezer ice-cream tub is ideal! Use stove or BBQ paint, because it uses pigments rather than dyes and is less reflective. The compass-cutter comes from a graphics outlet.

[photo: the finished home-made aperture mask]

Oh, and cut out the outer circle first... 🤣

Olly

 


20 minutes ago, ollypenrice said:

As Steve says, you can buy them, but they end up further and further from the lens as you screw them together. One of my robotic clients asked me to make a flat one to try on his camera lens, and he finds it works better. Making them is dead easy: the flat lid of a freezer ice-cream tub is ideal! Use stove or BBQ paint, because it uses pigments rather than dyes and is less reflective. The compass-cutter comes from a graphics outlet.


Oh, and cut out the outer circle first... 🤣

Olly

 

Thanks Olly 

It’s been a while since I made something for astrophotography - I’ll try this!
 

Bryan


47 minutes ago, assouptro said:

I have fallen for it too! 
I keep thinking about when I should change my rig and try a different perspective, but there's so much to image atm that fills the FOV of this neat little lens, and it copes with wind so much better than some of my other options! 
😊

There is indeed. There is enough in the sky to do all the AP you could ever want with the Samyang 135mm. I have also done the Virgo cluster and M81/M82 with mine, plus tons more stuff - also constellations and asterisms. I did one of the Coathanger last week that I shared on here, as well as the Veil nebula. To do the full Veil you'd need a 4523412341-panel mosaic with a telescope! With the Samyang you can do it in one FoV.

