
Mono cameras for EAA?


Look left


I've been reading about mono cameras for EAA, as I have a couple of DMK monos.

However, a lot of people say that one of the benefits of EAA is the ability to see colour images.

I realise it's up to the individual to make their choice, but are there any benefits of one over the other?


If I am doing EAA with my mono camera I like to go for difficult targets such as distant galaxies and galaxy clusters, which would not normally show much colour, and I know that I am getting as much detail as possible from a live image. So I am normally happy with the mono image I get. I have not tried EAA with a colour camera, but I suspect it would be suited to the showpiece objects such as the Orion Nebula, which would possibly bring out some of the colour it contains. I am thinking that your choice of colour or mono camera might depend on what you intend to view.


I love my mono ASI290MM for EAA more than my colour camera (the 294 Pro, which is about 3x the price...). It was a surprise to me, but the main reason is that it shows better detail and can see in the dark better. The lack of colour wasn't an issue after I started noticing the lower resolution of the larger-pixel camera. I'm a firm believer now in oversampling, but that's a bit off topic. The main thing for me is that the mono 290MM is a bit more plug and play too, because it's a smaller sensor, needs less light, and you don't have to fiddle with the colour sliders to balance your image.
 

Most mono cameras will see more, with better resolution, than a similar colour sensor - for the reason that barkingsteve said. I do tend to look for galaxies and other small objects, so the small sensor size isn't an issue for me most of the time.


Sure, you can use a mono camera for EEVA - if that suits your "observing" style.

Some people like to see the colour of the objects and are happy to sacrifice some sensitivity in order to get that.

Another important point is that you can get a very large OSC sensor for reasonable money, as mono sensors tend to be more expensive. Sensor size = speed.

12 minutes ago, London_David said:

Most mono cameras will see more with better resolution than a similar colour sensor - for the reason than barkingsteve said. I do tend to look for galaxies and other small objects so the small sensor size isn’t an issue for me most of the time. 

This is because of the way colour sensors are usually used. Used properly, they can outperform mono sensors (if they are large enough).

Let me give you an example. First, on resolution.

In order to fully exploit resolution, you must understand that an OSC sensor is, in effect, a sparse sensor. Every other pixel is capturing the information that you want to put together. Take the red channel, for example, and look at the Bayer matrix:

[Figure: Bayer pattern filter kernels for the R, G and B channels]

The R kernel is what is captured for the R channel (similarly for the other channels) - there is a pixel, then an "empty" one, then another pixel, then empty. You are effectively sampling every other pixel, so your sampling resolution is halved.

This is not an issue, as you can match your sensor and focal length to get the resolution you want. If your mono camera samples at, for example, 2"/px, then choose a focal length that puts the OSC camera's pixel size at 1"/px (using the same sampling formula). This ensures that each colour channel is effectively sampled at 2"/px (1" red, 1" none, 1" red, 1" none - a red pixel every 2", so 2"/px is the red resolution).

The second important thing is that your EEVA stacking software knows how to debayer in super-pixel mode to exploit this (or, better yet, to split your Bayer matrix into 4 subs, each behaving as mono - one red, one blue and two green subs - at half the resolution of the chip).
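Something like this minimal numpy sketch (a hypothetical helper, assuming an RGGB pattern - other CFA orders just shift the offsets):

```python
import numpy as np

# Split a raw Bayer frame into four half-resolution mono planes.
# No interpolation: every value is a real, measured pixel.
def split_cfa(raw):
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

raw = np.arange(64, dtype=np.uint16).reshape(8, 8)  # fake 8x8 readout
for name, plane in split_cfa(raw).items():
    print(name, plane.shape)  # each plane is 4x4 - half resolution per axis
```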

Now I'll show how the ASI294 can be more sensitive than the ASI290, even though it has a lower peak QE (75% vs 80% for the ASI290) and, being OSC, each pixel effectively uses only 1/3 of the spectrum.

Let's say you are using the ASI290 with an F/5 Newtonian like the SW 130PDS. Take your ASI294 and pair it with a 10" F/8 RC scope.


With the ASI290 and SW 130PDS, your resolution will be 0.92"/px.

With the ASI294 and the 10" RC, the per-pixel resolution will be 0.47"/px, but since we said colour is sampled at half that rate, actual colour pixels will be sampled at 0.94"/px - almost the same as the ASI290.
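Those figures follow from the usual sampling formula: pixel scale ("/px) = 206.265 × pixel size (µm) / focal length (mm). A quick sketch to check them (2.9µm and 4.63µm pixel sizes, 650mm and 2032mm focal lengths):

```python
# Pixel scale ("/px) = 206.265 * pixel_size_um / focal_length_mm
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(2.9, 650))        # ASI290 on SW 130PDS  -> ~0.92 "/px
print(pixel_scale(4.63, 2032))      # ASI294 on 10" F/8 RC -> ~0.47 "/px
print(2 * pixel_scale(4.63, 2032))  # per colour channel   -> ~0.94 "/px
```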

How about light gathering?

One pixel of the ASI290 captures the full spectrum from 130mm of aperture at 0.92"/px with 80% QE.

Four pixels of the ASI294 capture parts of the spectrum from 250mm of aperture over the same area of sky (same effective pixel size), but at 75% QE.

To sum the photons of the ASI294 we add 1/3 * 1/4 + 1/3 * 1/4 + 1/3 * 1/4 + 1/3 * 1/4 (each quarter of the effective pixel captures only 1/3 of the spectrum). That equals 1/3 of the whole surface.

So we have:

130^2 * 0.8 = 13520

vs

250^2 * 0.75 / 3 = 15625

So if you sum R, G, G and B, you will get more photons, better SNR, and RGB information as well.
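The comparison as a couple of lines of arithmetic (aperture area × QE × fraction of spectrum, arbitrary units):

```python
asi290_mono = 130**2 * 0.80       # mono: full spectrum on every pixel
asi294_osc  = 250**2 * 0.75 / 3   # OSC: each pixel sees ~1/3 of the spectrum
print(asi290_mono, asi294_osc)    # 13520.0 15625.0 -> roughly 16% more photons
```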

As far as the cameras go, the ASI294, if paired with a suitable telescope, will perform better than the ASI290 paired with the telescope of your choice.

But again, you really need a different telescope to pair the ASI294 with, and you need EEVA software that can properly debayer the image: extract luminance from the colour pixels by adding them together, and then apply the RGB information on top of that. The resolution will be the same, of course, as both will sample at the same rate.

 


1 hour ago, vlaiv said:

Let's say you are using the ASI290 with an F/5 Newtonian like the SW 130PDS. Take your ASI294 and pair it with a 10" F/8 RC scope.

While I largely don't disagree with the maths or the description, as a practical matter I'm a bit confused. You're essentially saying that, with extra aperture, the £1k 294 Pro on a £3k 10" scope will work similarly (about 15% better), but in colour, to what the £300 ASI290 can do on a £179 5" telescope. The colour sensor and the Bayer matrix don't play a large part in that performance change.

If you put the ASI290 on the 10" RC it will still detect fainter objects, with more detail, than the ASI294 Pro. Or, if you are limited by the atmosphere and are oversampling on the 10" f/8 with 2.9µm pixels, you could bin to 5.8µm square pixels (bringing the pixel area closer to the 294's anyway). That will detect much fainter things than the 4.63µm colour Bayer ASI294 no matter what you do, since the ASI290 still has the higher QE.
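For illustration, 2x2 software binning is straightforward in numpy; a minimal sketch (the frame size here is just illustrative):

```python
import numpy as np

def bin2x2(frame):
    """Sum 2x2 blocks: 2.9um pixels behave like 5.8um pixels with 4x the signal."""
    h, w = (frame.shape[0] // 2) * 2, (frame.shape[1] // 2) * 2  # crop to even
    f = frame[:h, :w].astype(np.float64)
    return f.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.poisson(50.0, size=(1096, 1936))  # synthetic mono frame
print(bin2x2(frame).shape)  # (548, 968)
```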

To be specific on the IMX294 too - we may be talking at cross purposes here, but it's a quad Bayer design, which isn't quite the same layout as the diagram you show. The quad Bayer is physically laid out with 2x2 grouped colour pixels that are binned in normal mode and used as a double exposure in HDR mode. So each pixel in your Bayer diagram should actually show 4 pixels in it, and in HDR mode the sensor does double exposures, two short and two long on opposite cell sites, then combines them for read-out. Is that the proper debayering you are referring to? While I have read talk about reprogramming the SoC to get access to the underlying quad-pixel structure, as I understand it that requires more than just post-processing and software debayering.

Generally the best-regarded debayering algorithm for the back-illuminated Sony chips like the IMX294 in RGGB order is the Adaptive Airy Disc in APP, however I've not seen it implemented in any real-time program. Do you have a recommendation? I am always looking to test new options.

Going back to the sensors, I'll clarify what I meant: if two sensors are the same chip design and one is mono and one is colour, all else being equal and with existing technology, the mono sensor will always outperform the colour one in terms of detecting photons and resolution. This is because the matrix colour filters block light, reducing the photons hitting the actual photosite. When manufacturers make a sensor mono they remove the filters, so each pixel can detect all photons, not just those of the right wavelength.

If you bring cost into the equation, mono sensors are indeed generally more expensive than colour sensors of the same design for this reason. Mono sensors are a niche market and don't have the same economies of scale. However, mono sensors of a different design - especially if the sensor size is smaller - can be cheaper and have a better ability to detect photons. Generally, other things being similar, you always pay more for a larger sensor.


So to bring it all back to the OP's question: it is individual choice. Some people like colour, some don't care one way or the other, and others prefer clarity of shape and detail over colour.

I surprised myself by discovering I am in the “don’t care” category. I thought I needed colour, but I found speed and detail more fun about 60% of the time. On any given scope the 290 gives me a more detailed picture faster, so it’s the one I tend to grab unless there’s a specific reason for colour.

The main point for me is that I do this for fun, so if the camera is more fun to use, the observing is more fun to do. But what you find fun may change, and you just have to figure it out for yourself!

 


6 hours ago, London_David said:

If you put the ASI290 on the 10" RC it will still detect fainter objects, with more detail, than the ASI294 Pro. Or, if you are limited by the atmosphere and are oversampling on the 10" f/8 with 2.9µm pixels, you could bin to 5.8µm square pixels (bringing the pixel area closer to the 294's anyway). That will detect much fainter things than the 4.63µm colour Bayer ASI294 no matter what you do, since the ASI290 still has the higher QE.

What I was saying is: for any given telescope that you pair with the ASI290, there is an alternative telescope you can pair with the ASI294 that will provide better SNR, if the correct processing is applied.

This is true simply because the surface area of the ASI294 is more than 3 times larger than that of the ASI290 (even with the difference in QE accounted for). All that surface captures light simultaneously - as if you had the same sensor but a larger number of them (a multi-scope rig).

6 hours ago, London_David said:

To be specific on the IMX294 too - we may be talking at cross purposes here, but it's a quad Bayer design, which isn't quite the same layout as the diagram you show. The quad Bayer is physically laid out with 2x2 grouped colour pixels that are binned in normal mode and used as a double exposure in HDR mode. So each pixel in your Bayer diagram should actually show 4 pixels in it, and in HDR mode the sensor does double exposures, two short and two long on opposite cell sites, then combines them for read-out. Is that the proper debayering you are referring to? While I have read talk about reprogramming the SoC to get access to the underlying quad-pixel structure, as I understand it that requires more than just post-processing and software debayering.

No, the Bayer pattern is exactly the same as shown in the diagram - every Bayer pattern looks like that, and the only difference is the order of the pixels. You don't necessarily need to look at it as groups of 2x2 pixels - you can look at it as interspersed sparse matrices of red, blue and green pixels.

I do all my OSC processing like that - I don't debayer into an RGB image, but rather split the Bayer matrix into mono R subs, mono B subs and twice the number of mono G subs, stack those as mono subs, and do the colour combine later.

6 hours ago, London_David said:

Generally the best-regarded debayering algorithm for the back-illuminated Sony chips like the IMX294 in RGGB order is the Adaptive Airy Disc in APP, however I've not seen it implemented in any real-time program. Do you have a recommendation? I am always looking to test new options.

Yes, the problem is EEVA software support. It is best not to debayer by interpolation at all - that just blurs the data, whichever way you choose to interpolate the missing values. Better to simply say: we don't have the missing data, so we are not sampling at the resolution given by the pixel pitch - but we are perfectly sampling at twice that pitch. There is no need to chase higher resolution by interpolation - if we need higher resolution we can always increase the focal length (even by using a Barlow, though it is best to choose a suitable scope in the first place).
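A minimal sketch of that idea - super-pixel style debayering with no interpolation, again assuming an RGGB pattern:

```python
import numpy as np

def superpixel_debayer(raw):
    # Each 2x2 CFA cell becomes one output pixel - no interpolated values.
    r  = raw[0::2, 0::2].astype(np.float64)
    g1 = raw[0::2, 1::2].astype(np.float64)
    g2 = raw[1::2, 0::2].astype(np.float64)
    b  = raw[1::2, 1::2].astype(np.float64)
    rgb = np.dstack([r, (g1 + g2) / 2.0, b])  # half-resolution colour image
    lum = r + g1 + g2 + b                     # summed R+G+G+B luminance
    return rgb, lum
```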

There is an alternative algorithm that is a sort of "have your cake and eat it too" approach - Bayer drizzle. However, that is best applied in planetary / lucky imaging, working at extremely high resolutions, where atmospheric movement and mount imprecision guarantee good dither between subs. Bayer drizzle works much better than regular drizzle, since the pixels are already "shrunken" (pixel size vs sampling rate) and you don't need to be undersampling to use it.

6 hours ago, London_David said:

Going back to the sensors, I'll clarify what I meant: if two sensors are the same chip design and one is mono and one is colour, all else being equal and with existing technology, the mono sensor will always outperform the colour one in terms of detecting photons and resolution. This is because the matrix colour filters block light, reducing the photons hitting the actual photosite. When manufacturers make a sensor mono they remove the filters, so each pixel can detect all photons, not just those of the right wavelength.

Completely agree with that.

6 hours ago, London_David said:

If you bring cost into the equation, mono sensors are indeed generally more expensive than colour sensors of the same design for this reason. Mono sensors are a niche market and don't have the same economies of scale. However, mono sensors of a different design - especially if the sensor size is smaller - can be cheaper and have a better ability to detect photons. Generally, other things being similar, you always pay more for a larger sensor.

I believe it is economies of scale here. If one wants to do EEVA on a budget, then I would recommend a used mirrorless camera with an APS-C sized sensor. That will be both cheap and more sensitive (when paired with a suitable scope) than any mono offering out there.

Again, the software side of things is lacking - but there is an ASCOM driver for Canon, and also for Nikon if I'm not mistaken (maybe there is even one for Sony cameras, but I'm really not sure about that one).

6 hours ago, London_David said:

So to bring it all back to the OP's question: it is individual choice. Some people like colour, some don't care one way or the other, and others prefer clarity of shape and detail over colour.

I surprised myself by discovering I am in the “don’t care” category. I thought I needed colour, but I found speed and detail more fun about 60% of the time. On any given scope the 290 gives me a more detailed picture faster, so it’s the one I tend to grab unless there’s a specific reason for colour.

The main point for me is that I do this for fun, so if the camera is more fun to use, the observing is more fun to do. But what you find fun may change, and you just have to figure it out for yourself!

Completely agree with you on that.

I did not want to turn this into something too serious and take the fun out of it - I just wanted to point out that things can be looked at from a different perspective, which gives you more possibilities.


With a filter wheel you can have your cake and eat it. I often take the decision on whether to use colour literally halfway through an observation of an object. Sure, this approach adds to the initial expense, but unlike the camera it is never going to need upgrading (or at least, there is no temptation to upgrade).

Other benefits of mono include being able to use it (more efficiently, at least) with narrowband filters and spectral gratings, should your interests move in that direction.

Martin

 


You've mentioned filters. I'm mono with a DMK41. I haven't done it yet due to the weather, but I have an Astronomik ProPlanet 642 filter. Yes, it's more for planets naturally, but it does cut out unwanted light and goes into some narrowband areas.

Would I be cutting out too much of the imaging info from the source to get any other benefit?

I'm using a Celestron 6SE SCT with an f/6.3 reducer, bringing it to f/6.3.


Hi 'look left',

May I suggest you are making life complicated for yourself. Enjoy mono first and get into the buzz of EEVA. No doubt you will have looked at the many posts in this EEVA section - we achieve very good mono EEVA images with decent detail. EEVA is about 'observing' using electronically enhanced aids. I have now observed several hundred DSOs, with several thousand still on my target lists. Colour rarely adds any extra detail unless you wish to get into Ha work or star colours (both very interesting). Colour makes it look pretty, but I like to 'see' the DSOs as if I were using a really big scope. Filters will mean an even longer time to get a result, and noting the aperture of your scope, you need as many photons as possible unless you are happy to sit there waiting patiently.

I have played now and again with filters, but it slows the observing down, adds another layer of complication and, for me, gets in the way of 'observing'. Having said all of this, I am currently thinking about colour and other filters to aid the observational detail to be gained on certain objects. Colour per se is no big deal for me unless it adds detail, but I do enjoy seeing coloured EEVA-style images.

Have fun

Mike

