
Mono or Colour?


Jez

Recommended Posts

10 minutes ago, toxic said:

You can't really compare a DSLR to an astro CCD/CMOS camera, but here is a comparison anyway :tongue:.

Certainly not fair to compare an unmodified 1100D on a largely Ha target!

OK, this is stacked and processed, but made from 60-second exposures with a modified 450D and under light-polluted skies.

[Attached image: Veil nebula, modified 450D]

Edited by Stub Mandrel

This might be of interest, it's certainly thought-provoking.

Especially that we are working in the dark(!) when debayering in ignorance of the true sensor response curves.

A CGMY Bayer pattern could be very good for astronomy. The CMY pixels collect twice the photons of RGB ones.

 


I think that with a new generation of CMOS cameras inbound, and the advances in sensitivity evident in the CMOS chips in phones (my S8+ is great in low light), the "rules" may change again in the not-too-distant future.

At the moment, OSC cameras and DSLRs, even modified ones, are pretty sucky at capturing Ha emission nebulae. When you split the result of an OSC image into R, G and B, you see that very, very little of the Ha signal makes it onto three quarters of the pixels (see the quick sketch below). OIII imaging is different, with good signal collected by both the green and blue channels, but still nowhere near the result you get with a mono camera. Personally I'd like to try a colour astro camera with an RGBR matrix.
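Here's a rough way to check that for yourself in Python (just a sketch: the file name is made up, and I'm assuming Pillow and NumPy to load and split an already-debayered stack):

    # Split a debayered OSC stack into R, G, B and tally where the signal landed.
    # The file name is hypothetical; any loader giving an (H, W, 3) array will do.
    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("veil_osc_stack.tif"), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    total = r.sum() + g.sum() + b.sum()
    for name, chan in (("R", r), ("G", g), ("B", b)):
        print(f"{name}: {chan.sum() / total:.1%} of the summed signal")
    # On a mostly-Ha target, expect R to dominate and G/B to carry very little.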

How are you getting on, Jez? Are we helping or hindering? :)



@DaveS & @Olly,

Well, I could be biased, since I've only had a DSLR since my first day learning AP. If I had both a DSLR and a mono CCD/CMOS, would I find mono easier? I don't know :D

I'm still waiting for my ASI1600-MM; it's still on the way, like I said. I may find it easier than the DSLR when it comes, but that could also be bias, since I've already been doing AP with my DSLR for two years :D

At least the OP is headed in the right direction: he's putting a small refractor on his AVX, not an EdgeHD 8" :D


Ketut


12 hours ago, Stub Mandrel said:

Here's an interesting graph for the Canon 350D; the 450D appears to be close, and other cameras are similar:

[Graph: Canon 350D spectral response curves]

We forget that OSC Bayer filters are NOT narrowband filters; instead they mimic our eyes, and with the IR cut filter removed they are even wider band.

Imagine you are imaging a large Ha nebula with an astro-modded DSLR. Its maximum red sensitivity pretty much lines up with Ha. Green registers about 1/6 of the Ha signal and blue approaching 1/10.

So sensitivity at Ha is about 1 + 1/6 + 1/6 + 1/10 = 1.4 times what it would be if it came from the red pixels alone.

If you look across the graph, adding up B + 2×G + R for each wavelength, you will see the sensitivity is rather more than the 1/4 you would expect.
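To put rough numbers on that (a back-of-envelope sketch only; the channel responses at Ha are just approximate values read off the graph above, not measurements):

    # One RGGB Bayer group: one red, two green and one blue pixel.
    r_at_ha, g_at_ha, b_at_ha = 1.0, 1/6, 1/10   # rough relative response at 656 nm

    group = r_at_ha + 2 * g_at_ha + b_at_ha
    print(group)        # ~1.43x the signal of the red pixel alone
    print(group / 4)    # ~0.36 of an ideal, unfiltered mono sensor's four pixels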

Add in the recovered set from a decent debayering algorithm and I think the valid signal might be closer to the sensitivity of mono+filters than people expect.

I look at that graph and see a mess, with colour information mixed up.

Compare a typical LRGB set

http://www.firstlightoptics.com/rgb-filters-filter-sets/baader-lrgbc-ccd-filter-set.html

Also, the example you've given is an NB target; the red channel mixes up HII, [NII], and [SII] information into one undifferentiated "red".

Just my opinion of course.

Edited by DaveS

23 minutes ago, DaveS said:

I look at that graph and see a mess, with colour information mixed up.

It's interesting to compare it to the eye's spectral response:

[Graph: normalised spectral response of the human eye's cone cells]

This is one reason why we are so sensitive to 'green' light - we see it almost as well with our red receptors!

OSC is a much closer analogue to how we 'see' than RGB imaging with 'straight-edged' filters. There is nothing intrinsically wrong with using debayering, saturation, curves and other techniques to exaggerate and extract narrowband colours to simulate LRGB results.

This doesn't say anything about which is 'better', but Toxic's DSLR image is probably a lot closer to how we would perceive a nebula, if we could see it, than the RGB one.

It explains why LRGB images have much greater saturation, which, no doubt, is why they are 'easier to process', as there isn't bleed between different colour channels. On the other hand, it also explains why the colour palette tends to be a bit 'flat'.

Ha or any other narrowband 'red' light will just come out as the same 'red' with LRGB filters.

With the eye or an OSC, each different 'red' will generate a different balance of R, G and B.

This isn't a problem for narrowband sources like nebulae, except that you need to choose WHICH red, green or blue you will use to represent each colour.

For things like stars, which have a black-body spectrum, you will get a balance between R, G and B with straight-edged filters, but it will tend to exaggerate the saturation, which I think is why LRGB images have stars that tend towards just three shades: orange, yellow or blue.

I think it would be very interesting to compare two images of a colourful 'everyday scene' such as a funfair, one taken with a mono camera using Baader type 'square' filters, and one taken using a DSLR under the same conditions.

A good example might be someone wearing an orange lifejacket sitting in a red canoe. With the Baader filters I bet the canoe and lifejacket would appear virtually the same colour.

 

Summary

LRGB makes more efficient use of photons and produces more accurate luminance data.

OSC collects vastly more information about actual colour, but it is debatable whether or not this matters for most astronomical subjects.

 

 


This discussion always focuses on OSC versus mono. But is this the only, let alone the right, question/comparison? Many of us use DSLRs because of convenience, ease of use, reliability and cost, and because they provide surprisingly rewarding results in the time available to us, especially considering astrophotography is not what they're designed for.

However, DSLRs fall down because of noise.  A cooled OSC performs better in this regard. So the question might be: should the DSLR user move up to an OSC as a better way to do the sort of astrophotography they are already doing? 


4 hours ago, Ouroboros said:

So the question might be: should the DSLR user move up to an OSC as a better way to do the sort of astrophotography they are already doing? 

 

Good question indeed!

I'd be intrigued to see how a CMYL sensor would perform (bear in mind that 'L' is essentially the inverse of 'K' - black).

It would have a sensitivity of (2+2+2+3)/12 per unit exposure, compared with (3+3+3+3)/12 for a mono sensor - that's 3/4 the performance of mono! But then you can collect much more data, as you don't need to do the RGB runs, plus you are collecting twice as much R and B data as with an RGGB sensor.
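Here's that sum as a quick sketch (idealised square passbands, each primary counted as one third of the band - an assumption for illustration, not measured data):

    # Thirds of the visible band passed by each pixel in a 2x2 group.
    patterns = {
        "RGGB": [1, 1, 1, 1],   # each dyed pixel passes roughly one third
        "CMYL": [2, 2, 2, 3],   # C, M and Y pass about two thirds each; L passes the lot
        "Mono": [3, 3, 3, 3],   # no filter at all
    }
    for name, thirds in patterns.items():
        print(f"{name}: {sum(thirds)}/12 = {sum(thirds) / 12:.2f} of the light per group")
    # RGGB 0.33, CMYL 0.75, Mono 1.00 - the 3/4 figure above.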

I think it would knock both RGGB-type OSC and LRGB/mono into a cocked hat.


On 17/06/2017 at 11:31, Stub Mandrel said:

 

OSC collects vastly more information about actual colour, but it is debatable whether or not this matters for most astronomical subjects.

 

 

By 'actual colour' you mean 'perceived colour,' I guess. :icon_mrgreen::evil4: If, by 'actual colour' you meant the light sent out by the object, then the huge gaps in the RGGB passbands mean that you would really get vastly more information out of a filter set without gaps. No? 

I think that, since the OP asked about mono or colour, we should not turn it into DSLR or CCD. That's for another thread.

Olly

 


Jez, I am lucky enough to have both a mono and an OSC astro camera. I have used the OSC a few times, on carefully chosen broadband targets, and have been pleased with the results. However, despite having a smaller sensor, my mono camera has had far more use. It is my go-to camera. I now only tend to use the OSC if I need the benefit of the larger sensor. I find mono quicker, more flexible and a world ahead when it comes to narrowband. I could go into great detail as to why this is the case, but there is plenty of discussion on this already.


1 hour ago, ollypenrice said:

By 'actual colour' you mean 'perceived colour,' I guess. :icon_mrgreen::evil4: If, by 'actual colour' you meant the light sent out by the object, then the huge gaps in the RGGB passbands mean that you would really get vastly more information out of a filter set without gaps. No? 

No :icon_albino:

The gamut of a 'square curve' sensor is vastly smaller than that of a 'curved curve' one, even if its sensitivity to absolute luminosity is lower.


3 hours ago, DaveS said:

Well, if you really must have curved curves, then Astronomik will do you a set:

[Graph: Astronomik LRGB Type 2c filter transmission curves]

No need for an OSC :D:evil4:, and no doubled green from the Bayer matrix.

Potentially much better than the Baader ones, but no doubt you need a second mortgage to buy a set!


  • 3 weeks later...
On 16/06/2017 at 21:21, toxic said:

You can't really compare a DSLR to an astro CCD/CMOS camera, but here is a comparison anyway :tongue:.

 

 

 

[Image: NGC-6960.jpg - 1100D vs Atik comparison]

Here is my effort with a 1000D; it did not seem fair to compare to an unmodified DSLR.

[Image: faulse green 08072017.jpg - modified 1000D result]

In terms of comparison, you got more detail but less depth, though I have been way more aggressive in processing. I would expect more detail, though, given the longer focal length (mine is cropped a lot), and the mono should improve resolution also.

Edited by Adam J

18 hours ago, Adam J said:

Here is my effort with a 1000D; it did not seem fair to compare to an unmodified DSLR.

[Image: faulse green 08072017.jpg - modified 1000D result]

In terms of comparison, you got more detail but less depth, though I have been way more aggressive in processing. I would expect more detail, though, given the longer focal length (mine is cropped a lot), and the mono should improve resolution also.

Hi Adam, nice picture :)

Just thought I would let you know that the 1100D is a single sub and the Atik is 3 single subs combined R-G-B; no processing was done to either.


Having used both DSLR and Mono CCD cameras, my comments are as follows:

A DSLR is certainly a lot cheaper and probably a lot less to learn for a beginner. In particular, processing is probably more straightforward.

BUT, apart from the fact that a mono CCD camera shows much more detail, it is far easier to see whether you have the target framed in the FOV and to get focus.

I have been helping a newbie recently with a DSLR, and I had forgotten how difficult it was to find the point of focus with a DSLR; if you're not near focus you won't even see a star on live view, and you certainly can't see the nebulosity. So from that point of view mono cameras are easier for a beginner.

Therefore, in summary, I recommend that a beginner (if they can afford it) should start with a mono camera and just do mono imaging for a short while until they get the hang of it all. A filter wheel and filters can be added afterwards.

Carole 

 


I have a low-end cooled CCD color camera. I long for a mono camera because I think it would be nice to be able to choose each color, no color, or distinct filtering like Ha without the matrix.

But my draw to the color camera originally was ease of imaging, much like what I was used to with my DSLR and its R-G-G-B matrix.

However, having a mono camera and a filter wheel, or even individually applied color filters, means maintaining the filters. I'd still like the ability to choose.

My next camera (if such a blessed day ever arrives) will be a mono with an electronic filter wheel, as a package. That way I could control the filtering remotely, as I do my other functions at the moment.

I may be wrong, but I think I could reduce my exposure times, since every pixel would be bathed in one of the colors at a time, intensifying the effect for stacking... thus shorter exposure times. (Possibly?)

Anyway, my dream camera would be mono with an EFW.

Too soon oldt, too late schmart. :wink2:


3 minutes ago, carastro said:

I have been helping a newbie recently with a DSLR, and I had forgotten how difficult it was to find the point of focus with a DSLR; if you're not near focus you won't even see a star on live view, and you certainly can't see the nebulosity. So from that point of view mono cameras are easier for a beginner.

I am not sure about this. I can see nebulosity in live view with a DSLR, and it takes no more than 20 seconds to hit perfect focus just using live view, first at normal magnification, then ×5, then ×10, and certainly without any tools or Bahtinov mask. I expect the fast optics I am using help, though.

Alan


Quote

I expect the fast optics I am using help, though.

Well, maybe fast optics make a difference, but I could never see nebulosity with my DSLR; even after doing a long exposure I might only see a faint hint of it. Only processing would bring it out.

I coped with getting focus by marking the drawtube so I knew roughly where the point of focus was each time, but doing it from scratch I used to use the Moon to get rough focus. Once I was close to focus I could actually see a star on live view and could then complete focusing. Well, that was my experience.

Carole 

 

 

 

 

Edited by carastro

And this is my very next plan: a mono camera. I have many DSLRs, but none are modified, and I can get nothing out of them under my urban skies. Someone just told me I shouldn't waste time with them as it won't help me with anything; either I modify one, which is not in my plan, or I go directly to a mono camera, or even a colour OSC astro camera, because it is designed for astro. Otherwise I will waste maybe a year with an unmodded camera and won't get good results - only many bad ones and a few good ones that aren't winners after a year. So I am now definitely saving to buy that mono camera, I hope very soon (this month if the budget arrives on time).


8 hours ago, toxic said:

Hi Adam, nice picture :)

Just thought I would let you know that the 1100D is a single sub and the Atik is 3 single subs combined R-G-B; no processing was done to either.

The only thing that I am not understanding about that statement is that your image looks very blue for an RGB image; when I have seen this before, OIII has always been green/teal, not electric blue. Mine is of course a bi-colour narrowband image, and hence the OIII is assigned to blue. But then you do have star colour too, so I was assuming it was OIII + Ha + RGB.

Edited by Adam J

18 hours ago, Adam J said:

The only thing that I am not understanding about that statement is that your image looks very blue for an RGB image; when I have seen this before, OIII has always been green/teal, not electric blue. Mine is of course a bi-colour narrowband image, and hence the OIII is assigned to blue. But then you do have star colour too, so I was assuming it was OIII + Ha + RGB.

Sorry Adam, it is just 1 red at 300 seconds, 1 green at 300 seconds and 1 blue at 300 seconds, then combined in Photoshop; there is no Ha or OIII in it. The other is just a single 900-second exposure from the EOS 1100D (the top image is the EOS).


  • 3 years later...
On 15/06/2017 at 20:25, ollypenrice said:

2) The increase in resolution by using all the pixels in a mono is actually trivial. The debayering routines are very sophisticated and interpolate (make an educated guess) about the 'missing' information remarkably well. I advocate mono but not for reasons of resolution because I have found very little or no gain in resolution when using mono over OSC on the same make of chip in the same telescope. 

3) You do not need more time with a mono camera, you need less. An OSC camera shoots through colour filters all of the time so it can never capture more than a third of the incident light, ever, under any circumstances. However, when a mono camera is working in its luminance mode it is capturing all of the incident light  and obtaining a massive speed advantage over colour. This cannot be less than a 6 to 4 time advantage and can easily rise to being twice as fast. The LRGB system was invented to save time.

6) Mono cameras can capture narrowband efficiently, Ha opening up many nights of moonlight to the imager.

 

Hi

Sorry to dig this up from the past. However, I am trying to decide between the QHY268C and QHY268M (my move from a D7000, which uses the Sony 071 chip), so I was doing my research when I noticed your post. Firstly, thank you for articulating your view so nicely.

I was trying to connect two of the points that you make, and to me they seem to contradict each other. Maybe you would be kind enough to clarify.

I follow that one pixel in every four is attributed to red, and I agree that when doing narrowband we are only using 1/4 of the sensor and the rest is just dead space. However, if you also claim, rightly so, that the debayering routines are able to interpolate the red values for the "dead" pixels well enough, how can you say that mono is 4 times faster, i.e. that a 10-second exposure on mono is equivalent to 40 seconds on an OSC, when the debayering algorithm has filled in the missing gaps? Yes, photons are wasted and we are making approximations, but given that debayering routines are so good, a 26-megapixel OSC will probably behave comparably to a 16+ megapixel mono in terms of resolution (maybe a resolution comparison between the QHY268C and ASI1600M would confirm this).
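To make my confusion concrete, here is a toy sketch of what I understand debayering to do to the red channel (plain bilinear interpolation on made-up data - not any real camera's routine, just an illustration):

    # Toy RGGB red channel: only one pixel in four ever saw red light.
    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(0)
    frame = rng.poisson(100, size=(8, 8)).astype(float)   # fake raw frame

    red_sites = np.zeros((8, 8))
    red_sites[0::2, 0::2] = 1                              # R positions in an RGGB pattern
    red_sparse = frame * red_sites

    # Bilinear fill of the missing red values (tent-kernel trick).
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    red_full = convolve(red_sparse, kernel, mode="mirror")

    print(red_sites.mean())   # 0.25 - fraction of pixels that actually captured red photons
    print(red_full.shape)     # every pixel now has an estimated red value
    # Interpolation restores pixel values, not the photons the other pixels never caught.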

However, I do not follow why mono needs less time. In my view, the intensity registered in the R channel will be quasi-similar between an OSC and a mono camera with a red filter. Is that not so?

Don't get me wrong, I want mono to be faster, as then I would get the mono camera and the benefit of higher resolution (I can get an EFW, I have already made myself an autofocuser, and processing seems similar except for the one extra step of the LRGB combination module in PI). I am hoping you could clarify your statement. Ideally, someone with access to both the QHY268M and QHY268C could do a shootout and put this question to rest.

Thanks

Rashed

 

 

Edited by rsarwar

1 hour ago, rsarwar said:

I was trying to connect two of the points that you make, and to me they seem to contradict each other. [...] How can you say that mono is 4 times faster, i.e. that a 10-second exposure on mono is equivalent to 40 seconds on an OSC, when the debayering algorithm has filled in the missing gaps? [...] However, I do not follow why mono needs less time.

Hi Rashed, I'm not sure what the two contradictory points are...? (I'm not being evasive; after re-reading the thread I just don't know.)

I never said that mono is four times faster, however. (At least I hope I didn't! Because it isn't.) The broadband equation is roughly this: luminance captures all object photons; each colour filter, about 1/3 of them. So in 4 hours of LRGB you have 3+1+1+1 = 6. In 4 hours of RGB/OSC you have 1+1+1+1 = 4. That makes the mono advantage 6 to 4, not 4 to 1. It is not that simple, though: the OSC filters don't, in fact, cut off sharply between colours, so each one passes rather more than 1/3 of the signal. What adds complexity is that some targets, with extremely faint parts, are not going to yield any colour with present technology but may yield a bit of signal - which they will do best in luminance, not through colour filters. The LRGB speed advantage is not going to go away, but it is target-variable.
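If it helps, here is that arithmetic as a throwaway sketch (idealised units only: one 'unit' per hour through a colour filter, three through luminance - the sharp-cutoff simplification noted above):

    # Four hours of LRGB versus four hours of OSC, in idealised signal units.
    lrgb = 3 + 1 + 1 + 1     # 1 h luminance (all photons) + 1 h each of R, G, B
    osc = 1 + 1 + 1 + 1      # every OSC hour collects roughly 1/3 of the photons
    print(lrgb, osc, lrgb / osc)   # 6 4 1.5 - a 6 to 4 advantage, not 4 to 1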

Meanwhile, back in the real world, there are stunning new OSC CMOS cameras and ingenious dual band filters for OSC which have changed the OSC/mono debate since I made the earlier points in this thread. Would I buy a modern OSC CMOS?  You bet I would.  I had the good fortune to be invited to post-process Yves Van den Broek's mega-mosaic of the galactic equator. Even without a dual band filter his CMOS/OSC data went deeper than my CCD in HaLRGB, though he did not find any significant OIII. (The Squid in what is otherwise his image came from my CCD data.) With a dual band filter he would have found the Squid for sure. See Gorann's RASA images on here.

Yves' image with my scrap of OIII thrown in:  https://www.astrobin.com/g82xf7/B/?nc=user

I'm happy to clarify further any earlier points I made if I can.

Olly

