
ZWO120MC Camera


Barv


Interesting reading, though the wiki entry on Nyquist went rather over my head and will have to be studied.     I have seen/read quite a lot of posts where this F16/17 is mentioned, but also a number of excellent images where it has been exceeded, in good seeing and usually with bigger apertures (10"+).   Presumably higher QE and sensitivity from mono cameras can extend this?

I agree with Michael's calculation and probably approached things the same way myself in terms of the maths.  Where I think we differ is that I have assumed that the longest dimension of the pixel should be used.  That is not the edge length, but the diagonal.  Using the diagonal dimension changes the figures by a factor of the square root of 2, or roughly 1.4.

So, where Michael has f/25 to f/30 I would have f/35 to f/42, and where he has f/16.7 I would have f/23.4.  It's quite pleasing to see that our figures are consistent in this way :)

I'm open to being persuaded that using the diagonal is wrong, but here's my thinking...

The starting point for these calculations is matching the pixel size of the camera to the resolution of the telescope.  That boils down to achieving a given focal ratio.  Let's assume we use the edge length of the pixel for this calculation, giving us f/x as the focal ratio required.  We only really address the simplest bit of Nyquist's theory which is that you actually need no fewer than two "samples" of the input to be able to recreate it, or in our case we need two pixels per smallest resolvable detail.  That means we need a focal ratio of f/2x.  But that still doesn't give us two pixels per smallest resolvable detail diagonally.  To do that we need to apply the square root of 2 factor as above, giving f/2.8x.

James
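James's two-pixels-per-detail reasoning can be sketched numerically. The sketch below is illustrative, not from the post: it assumes the finest diffraction-limited detail in the image plane has a period of roughly wavelength x focal ratio (a common rule of thumb), and uses 3.75 um pixels (ASI120) with blue light at 0.45 um. With those choices it reproduces the f/16.7 figure and the roughly 1.4x larger diagonal variant.

```python
import math

def required_focal_ratio(pixel_um, wavelength_um=0.45, use_diagonal=False):
    """Minimum focal ratio so the finest diffraction-limited detail
    (period ~ wavelength * F) spans two pixels, per Nyquist."""
    p = pixel_um * (math.sqrt(2) if use_diagonal else 1.0)
    return 2.0 * p / wavelength_um

# ASI120 pixels are 3.75 um square
print(round(required_focal_ratio(3.75), 1))                     # edge length: 16.7
print(round(required_focal_ratio(3.75, use_diagonal=True), 1))  # diagonal: 23.6
```

Using a longer wavelength (say 0.55 um green) lowers both figures, which is one reason quoted "critical" focal ratios vary from post to post.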


Good discussion on this. I usually try to search for past (SGL forum) references, but this time I was (also) a bit "lazy" regarding a question (elsewhere) on this camera. But evidently the subject is topical? FWIW, I am happy I now have a starting point for experiment. My only regret is selling a perfectly good Barlow (2" GSO ED with detachable element etc.) which (thanks to filter-thread extensions) could have explored the range 1.5x to 2.5x. Hey ho.  :o

Aside: The whole thing fitted INSIDE the (rather huge) SCT Monorail focusser... This was, of course, always my intent etc. :p

http://www.teleskop-express.de/shop/product_info.php/info/p3947_TS-2--MONORAIL-Dual-Speed-Focuser-for-SC-Telescopes.html


My feeling is that the desirable focal ratio for the ASI120 should be higher than f/15.  I think f/25 is nearer the mark and f/20 to f/25 is what I aim for.  I believe that Stuart and Neil are both working at around the f/20 to f/25 mark, too.

James

Yes, Stuart images at around f/20 with his QHY5L (same sensor) and gave Harvey (Barv) extensive advice on this very subject a few days ago.



I also considered using the diagonal of a pixel, but that is not correct. Consider the pixels labelled in chessboard fashion, with alternating white and black squares, and assume the pixel width (the length of a side of the square) is chosen to be optimal according to Nyquist. Two consecutive dark diagonals may be 1.4x the pixel width apart, but there is a white diagonal in between, so this pattern corresponds to a diagonal sine wave with a wavelength of only 1.4x the pixel width, i.e. 0.7x the axis-aligned Nyquist wavelength of two pixel widths. In other words, the grid can represent diagonal waves of shorter wavelength than the horizontal/vertical Nyquist limit, so no extra factor of 1.4 is needed.

This does not mean that using slower focal ratios is wrong, as Freddie's example clearly shows. If S/N is good enough (and the ASI 120MC is impressive in this respect, as are the ICX 618-chip cameras like the new DMKs), it should work. What Freddie's image does not show (by itself) is whether more detail is captured. What you should not do is go below F/16 for the ASI 120, or about F/25 with the DMK21, or you will lose detail. In my experience in microscopic imaging, going slightly over the Nyquist rate (i.e. using higher magnification) was often better than precisely at the Nyquist rate. For low-surface brightness objects (like Saturn) I prefer to stick to lower magnifications (closer to Nyquist optimum) to keep S/N acceptable.
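Michael's chessboard argument can be restated numerically. The sketch below (an illustration, not from the post) computes the effective sampling pitch seen by a wave travelling along the diagonal of the pixel grid, and shows that the diagonal Nyquist wavelength is shorter, not longer, than the axis-aligned one.

```python
import math

p = 3.75  # pixel pitch in microns (ASI120)

nyquist_axis = 2 * p            # shortest representable wavelength along a row/column
diag_pitch = p / math.sqrt(2)   # spacing of sample planes seen by a diagonal wave
nyquist_diag = 2 * diag_pitch   # = sqrt(2) * p: the chessboard pattern's wavelength

print(nyquist_axis)             # 7.5
print(round(nyquist_diag, 2))   # 5.3
```

Since 5.3 um is finer than the 7.5 um axis-aligned limit, the diagonal direction is the better-sampled one, which is why the extra sqrt(2) factor on the focal ratio is unnecessary.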


I need to have a better think about what Michael has said so I understand it properly, but on the assumption he is correct it would be interesting to know why the likes of Damian Peach were preferring to work at around f/40 when using 5.6um pixel cameras.  If it wasn't getting any more detail, there must surely have been some good reason.  Perhaps because they just felt the oversampling was helpful in processing?  I don't know enough about signal processing really.  I hardly know enough even to be dangerous.

James


Signals near the Nyquist frequency are attenuated by the sampling, so if you sample at a higher frequency you probably capture borderline detail better. Furthermore, the square shape of the pixels is convolved with the image, leading to slight blurring; sampling beyond the Nyquist rate can reduce its effect on the final image. My old SPC900 is the only camera I have used on Saturn, and f/30 was a bridge too far in that case. The much higher QE of the newer DMK should allow me to use f/40 without degrading S/N compared to an SPC900 at f/25. Due to rotten weather I have not yet been able to test the DMK on anything besides the Sun and a single outing on Jupiter (under HORRENDOUS seeing). I hope I can put it through its paces under good conditions some time soon. I might try the 3x TeleXtender as well as the 2.5x Powermate to compare the results.
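The attenuation near Nyquist mentioned here comes from each square pixel averaging light over its own width, which acts as a box filter. A short sketch using the standard |sinc| form of the pixel-aperture MTF (an illustration, not taken from the post):

```python
import math

def pixel_mtf(freq_cycles_per_pixel):
    """MTF of an ideal square pixel (a box average over its width)."""
    x = math.pi * freq_cycles_per_pixel
    return 1.0 if x == 0 else abs(math.sin(x) / x)

print(round(pixel_mtf(0.5), 3))   # at Nyquist (0.5 cyc/px): 0.637, i.e. ~36% loss
print(round(pixel_mtf(0.25), 3))  # oversampled 2x: 0.9
```

This is one quantitative argument for working somewhat above the bare Nyquist focal ratio: detail that lands at 0.25 cycles per pixel instead of 0.5 keeps about 90% of its contrast rather than 64%.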



Hi again Stuart.

I am grateful for the advice and help that you have given me, I put the post up on the forum because you suggested that I do so!

Kind regards

Harvey (Barv)


  • 3 weeks later...

Hi all,

Looking in depth at all the comments in this ongoing thread, am I to assume on the whole that if I were to purchase an ASI120MC, then coupling it to a C9.25 with a 2.5x Powermate would be pushing the limits with regard to imaging resolution, going somewhere close to f/25?
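For reference, the focal-ratio arithmetic behind this question is just multiplication (the f/10 native ratio of the C9.25 is the published figure; this is only a sanity check of the f/25 estimate):

```python
native_f_ratio = 10.0  # the C9.25 is natively f/10
powermate = 2.5        # amplification of the 2.5x Powermate
print(native_f_ratio * powermate)  # 25.0
```

That lands right at the top of the f/16 to f/25 range discussed earlier in the thread for the 3.75 um ASI120 pixels.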

Would the use of an ASI120MM with an appropriate filter wheel then be better suited?

I just can't decide between the two. I am reading other threads that tell me an OSC is around 95% as good as a mono, given the extra exposure time you can give an OSC by not having to split timings across filters, so the two types almost level out over a given session?

I have also read other people's conclusions that unless the seeing is absolutely spot on, the benefits of a mono with filters are lost against the simplicity and ease of use of an OSC.

I want to upgrade from my current Toucam and I like what I'm hearing about the ASI120 cameras but am stuck as to which one to go for.

If it were to be mono, then the obvious learning curve doesn't worry me, and I'm not a big fan of convenience over flexibility, but I am now also concerned about what's really being said in this thread regarding detail capture versus image size and f-ratios.

Thanks

Nick

:confused:


The ASI120MM and MC have essentially the same resolution, so the barlow/Powermate choices would be similar. I am thinking of getting an ASI120MC, or a DFK21 to have the option of one-shot colour on planets as well as my current monochrome plus filter-wheel approach. The latter does have the edge in sharpness, and allows you to use IR as well, but under conditions of intermittent cloud or other weather related problems you sometimes just want the hassle-free option of a colour camera.



I have "reserved funds" for the colour version next month - If there are any left? :p

The mono version commits you to some sort of (USB) *automated* filter wheel - At least for planetary imaging? A TOTAL exposure (LRGB) of "about a minute" for Jupiter... avoiding rotation blurring etc. ;)
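The "about a minute" LRGB budget can be sanity-checked with a rough rotation-blur estimate. This is my sketch, not from the post: the 0.25" blur tolerance and the 4.2 AU opposition distance are assumptions; the rotation period and radius are the standard published values for Jupiter.

```python
import math

period_s = 9.93 * 3600          # Jupiter rotation period (~9h 56m)
radius_km = 71_492              # Jupiter equatorial radius
distance_km = 4.2 * 1.496e8     # rough opposition distance (4.2 AU)

v_km_s = 2 * math.pi * radius_km / period_s          # equatorial surface speed
drift_arcsec_s = v_km_s / distance_km * 206_265      # apparent drift at disk centre

max_capture_s = 0.25 / drift_arcsec_s                # time to smear one 0.25" element
print(round(max_capture_s))   # ~61 seconds
```

So keeping the whole LRGB run to roughly a minute does indeed keep rotational smearing at or below a typical resolution element for a mid-sized scope.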

Far beyond my [single] scope (sic!) - But combining "twin" exposures from mono and color versions?

http://www.zwoptical.com/Eng/Galleries/ASI035.asp - Certainly some nice "combination" images?  :cool:

Also an interesting comparison? 


I got the mono cam a couple of weeks ago, with the ZWO filter wheel. Using the manual wheel isn't a problem, at least not for me, shooting about 30 seconds for each of the LRGB filters. I wanted the mono for the additional flexibility, such as an IR pass filter for Lum in poorer seeing or just a mono image, but also for the option to use a UV pass for the clouds of Venus.

 

These are some of my first images, LRGB, RRGB, RGB

jupiter2014_9.jpg

 

jupiter2014_10.jpg

 

and this was with a borrowed colour version

jupiter2014_3a.jpg


Hi

There are some lovely images and fine detail there. A testament to your patience and good seeing.

You state that you took 30-second AVIs through each filter. What frame rates were you running at? They must have been quite high, and you must be pretty quick with that manual filter wheel, to capture all of that without visible rotation.

Cheers

Nick


I bow to your talent. Thanks for the information. I think my mind is as good as made up now. It's the mono and the filter wheel for me going forward. Shame there won't be anyone selling them at Astrofest, so it looks like mail order.

Thanks again and well done.

Cheers

Nick


I think it may be fair to say that even if the pixels in the colour and mono cameras are the same size, the mono does give slightly better definition than the colour.  Because of the way the colour data is interpolated there is always going to be a bit of a "smoothing" effect with a colour camera.

I posted this example in another related thread recently and it's a little extreme because it made it easy for me to work out, but serves to illustrate the effect.  Let's say you have a colour camera and part of your image ought to look like this:

mono-raw.png

What the colour camera actually receives in terms of signal is affected by the colour mask and might look more like this:

mosaic-raw.png

There are lots of ways to attempt to regenerate the colour at this point, but bilinear interpolation is quite common because it's reasonably quick.  That would leave you with a final result looking something like this:

demosaic.png

So the colour and shape of the original section of the image (which would have remained the same had a mono camera been used with RGB filters) have been smoothed out and the colours distorted.

As I said, this is an extreme example and there are better algorithms for handling the colour interpolation (though they can take significantly more time), but it serves to illustrate what happens to small details in a colour camera.
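The smoothing James describes can be demonstrated with a toy green-channel example. This is my illustration, not one of the images above: in a Bayer mosaic only half the pixels sample green (here marked `None` where green was not sampled), and bilinear interpolation fills each gap with the mean of its sampled 4-connected neighbours, so a one-pixel green detail leaks into the pixels around it.

```python
def bilinear_green(mosaic):
    """Fill missing green samples (None) with the mean of the
    sampled 4-connected neighbours, as bilinear demosaicing does."""
    h, w = len(mosaic), len(mosaic[0])
    out = [row[:] for row in mosaic]
    for y in range(h):
        for x in range(w):
            if mosaic[y][x] is None:
                nbrs = [mosaic[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and mosaic[ny][nx] is not None]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

# Green samples lie on a checkerboard; one bright single-pixel detail in the middle.
g = [[0,    None, 0],
     [None, 100,  None],
     [0,    None, 0]]
print([[round(v, 1) for v in row] for row in bilinear_green(g)])
# The single 100-value pixel spills roughly 33 into each of its 4 neighbours.
```

A mono camera shooting through a green filter would have recorded the isolated bright pixel as-is, which is the sharpness edge being discussed.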

I wouldn't suggest that this should put someone off buying a colour model.  It just helps to be aware of their limitations.

James


Oh, and regarding the ASI120MM and MC specifically, from memory I believe that the difference in sensitivity is more like a 14% to 15% drop when comparing colour to mono, even after allowing for the filters (of reasonable quality) required for mono imaging.

James


While we are on the subject of the ASI120 camera, which is best, the colour or the mono version? I'm going to go for this, and I've already got a colour filter wheel and filters... is the sensitivity much better? There's very little difference in price, and I want whichever gives me the best images. Also, is tri-colour imaging a much harder process?


Archived

This topic is now archived and is closed to further replies.
