
Mono or OSC, your suggestions


smr

Recommended Posts

I'm fairly sure I'm in the mono camp, but I think the question maybe comes down to "How do you want to spend your time?" or, perhaps, how do you define "better"?  Perhaps if you don't enjoy the processing side, then having a set of full colour images off the camera that can be stacked, fiddled about with relatively minimally and presented as a final image is what's best for you.  If processing is something you enjoy working at, then perhaps mono makes more sense.  If imaging/processing time is limited for whatever reason and you're happy to accept lower quality images knowing that you will have some sort of contribution across the full spectrum then OSC may make more sense.

If you understand the limitations and requirements of both then perhaps it's easier to say "This is what suits me best", which may not be the same as what suits the next person.  We're not engaging in a competition here.  Who can produce the highest quality (however you choose to measure that) images is not a big issue.  What is important is what makes you happy about what you're doing.  That may not be something you can get right first time, regardless of how much advice you receive.

James


I started with a DSLR (a brand new modded Canon 750D) about 3 years ago with a SW120 and an EQ6-R, and got a good grounding. Then last year I got a ZWO ASI183MM Pro with an EFW and LRGB filters, and have used it since on a dual rig with an Altair Astro ED66-R, and have been amazed with the results. I have since gone NB with Ha and OIII and jumped up to another level, although processing is a huge learning curve, and I've still got to crack NB star colour.

Roger


7 hours ago, JamesF said:

I'm fairly sure I'm in the mono camp, but I think the question maybe comes down to "How do you want to spend your time?" or, perhaps, how do you define "better"?  Perhaps if you don't enjoy the processing side, then having a set of full colour images off the camera that can be stacked, fiddled about with relatively minimally and presented as a final image is what's best for you.  If processing is something you enjoy working at, then perhaps mono makes more sense.  If imaging/processing time is limited for whatever reason and you're happy to accept lower quality images knowing that you will have some sort of contribution across the full spectrum then OSC may make more sense.

If you understand the limitations and requirements of both then perhaps it's easier to say "This is what suits me best", which may not be the same as what suits the next person.  We're not engaging in a competition here.  Who can produce the highest quality (however you choose to measure that) images is not a big issue.  What is important is what makes you happy about what you're doing.  That may not be something you can get right first time, regardless of how much advice you receive.

James

The time difference between stacking LRGB and OSC is less than you might suspect. I doubt that it adds 5 minutes, especially if you use one flat for everything, as I usually do (because it hardly ever makes any difference).

After this, the stronger signal from the luminance tends to make mono post processing easier. My own experience of OSC was that it was harder to draw out the faint features and colour gradients were, if anything, worse than they were with RGB. This may be because the mono camera's colour filters were of higher quality than those fixed over OSC pixels and/or because you collect RGB and not RGGB with a mono.

Overall is OSC easier to do than mono? I didn't find that it was. In both methods colour balance and colour calibration need attention, gradients need sorting, etc.

Olly

 


I think it is important to also stress the evolving nature of people's experiences and motivations for imaging. Extrapolating from a very scientific sample of one (me), we want to get better - much better. James is right in the sense that this hobby is not a competition against others, but I am certainly trying to compete against my own previous images and efforts. Even my goals have evolved as I have gained that experience and honed some skills, and for this reason I think mono imaging is far superior, as the opportunities for that growth are far greater than with OSC in my view.


I use both a ZWO ASI183MM Pro mono and an Atik 4120EX OSC camera, and I must say that I find the mono much more versatile. I also agree with Olly that it is harder to pull out detail and features with the OSC camera, as I discovered yesterday when processing images of IC 1318, taken on a dual rig at the same time. The OSC could not bring out the faint dust lanes that the mono camera with specific filters could.

I do like both cameras and would not part with either. If I just want a basic RGB image then the Atik would be OK. When it's NB then it has to be the ZWO. I have also merged RGB from the Atik with Lum and/or Ha from the ZWO, with surprisingly good results.


I would like to make an assumption based on a previous comment about mobile imaging. For deep sky imaging with a DSLR you would have a laptop for guiding, collecting data and running camera software. You would have a power source for the camera, guide cam and mount. So in terms of setup, moving from a DSLR to a dedicated camera is no different, as the imaging train would not be stripped down every time (would it?)


One possible start for you is to go for a mono camera like the ASI1600MM and then initially use it to shoot Lum and/or Ha to improve the RGB images that you already have from your DSLR days. Then you have many possibilities for the future, and could either get RGB and NB filters for your mono camera and trust Olly that it will be faster than OSC, or still use your DSLR to collect RGB. Currently I have a dual rig where I often collect RGB with an ASI071 on one scope and Lum or Ha with an ASI1600MM on the other, and then combine the data.


I went from DSLR to mono CCD.  I haven’t really managed a great RGB image since I switched due to focusing issues between filters. There are more things to go wrong IMHO.  

I have combined DSLR data with mono luminance, with reasonable results.

I don’t think I would buy a colour CCD. Narrowband is great and I don’t want to compromise. 


2 hours ago, spillage said:

I would like to make an assumption based on a previous comment about mobile imaging. For deep sky imaging with a DSLR you would have a laptop for guiding, collecting data and running camera software. You would have a power source for the camera, guide cam and mount. So in terms of setup, moving from a DSLR to a dedicated camera is no different, as the imaging train would not be stripped down every time (would it?)

Not necessarily. For mobile imaging I have a DSLR with an intervalometer and a standalone guider. Therefore no computer.


This is a nice topic to read through whilst on my hols. I'm currently a DSLR imager with plenty of headroom to get better with my current setup, but I can see myself trying the mono route at some point.

The point I really wanted to make, though, was how rare it is for a specialist forum with lots of passionately held views to debate an old chestnut like this without at some point descending into bickering. It really is refreshing.

As you were...


I'll just pop in to make things more difficult. Whilst I am aware that mono is more efficient than OSC when the sensors are about the same size, how about a full frame DSLR compared to a camera with a small sized mono sensor? The IMX183 sensor is perhaps the cheapest decent sized mono sensor available and the Panasonic in the ASI1600 is a tad more expensive, but larger. If the optics allow using a large sensor, would it still be more efficient to use a small sensor with a smaller scope or a larger scope with the larger sensor, even with a Bayer matrix?

A ZWO183 colour/mono behind a RedCat records a smaller FOV than a Canon 6D behind an Esprit 100. If I were to image RGB only, I'd rather get the much larger scope + a full frame DSLR than a ZWO183 mono + a small scope.

A used full frame DSLR can be found rather cheap.

Helpful, eh? :D


17 hours ago, StarDodger said:

A DSLR is one shot colour imaging.... :)

They are, but as mentioned OSC (usually) refers to a dedicated astro-imaging  camera with cooling. I like the self-containedness of my DSLR. I can use it with or without a computer. I can focus it at the telescope.  I don't have to provide it with external power. It's well behaved. I'm very familiar with it. It's got a nice big sensor. I can use it for general photography, and I already owned it before getting into this mad game. 🙂 I'm sure I don't need to spell out the various advantages of DSLRs. Pity they're noisier than OSC cameras. 

PS Oh, and because there's always this ongoing debate about OSC or Mono I am permanently in an indeterminate quantum state of indecisiveness between the two cameras ..... so I do nothing. 🙂 


1 hour ago, alexbb said:

I'll just pop in to make things more difficult. Whilst I am aware that mono is more efficient than OSC when the sensors are about the same size, how about a full frame DSLR compared to a camera with a small sized mono sensor? The IMX183 sensor is perhaps the cheapest decent sized mono sensor available and the Panasonic in the ASI1600 is a tad more expensive, but larger. If the optics allow using a large sensor, would it still be more efficient to use a small sensor with a smaller scope or a larger scope with the larger sensor, even with a Bayer matrix?

A ZWO183 colour/mono behind a RedCat records a smaller FOV than a Canon 6D behind an Esprit 100. If I were to image RGB only, I'd rather get the much larger scope + a full frame DSLR than a ZWO183 mono + a small scope.

A used full frame DSLR can be found rather cheap.

Helpful, eh? :D

That one is indeed difficult.

I prefer set point cooling because it allows for proper calibration. That sort of rules out DSLR type cameras, but you have a very good point.

A larger sensor with a larger scope will have a "speed" advantage over a small sensor with a small scope providing the same FOV. Even if sampling rates are mismatched, in principle, after using fractional binning to bring things to equal sampling rates, aperture will win. How much, and whether it will overcome the advantage that mono offers in terms of SNR per unit time - I guess that depends on the actual numbers (QE of the sensors involved, their total surface, the scopes involved, etc.).

But if we go there, we will start discussing proper mounts needed for handling large scope+large sensor vs smaller combo, price/performance ratios and all the nice things we tend to discuss anyway in separate topics :D


29 minutes ago, vlaiv said:

That one is indeed difficult.

I prefer set point cooling because it allows for proper calibration. That sort of rules out DSLR type cameras, but you have a very good point.

A larger sensor with a larger scope will have a "speed" advantage over a small sensor with a small scope providing the same FOV. Even if sampling rates are mismatched, in principle, after using fractional binning to bring things to equal sampling rates, aperture will win. How much, and whether it will overcome the advantage that mono offers in terms of SNR per unit time - I guess that depends on the actual numbers (QE of the sensors involved, their total surface, the scopes involved, etc.).

But if we go there, we will start discussing proper mounts needed for handling large scope+large sensor vs smaller combo, price/performance ratios and all the nice things we tend to discuss anyway in separate topics :D

For the particular example of an ASI183 + a RedCat, besides going past the diffraction limit, the aimed resolution is less than 2 arcsec/px. You'd probably want at least an HEQ5-class mount for that resolution, a mount which should also be able to carry an Esprit 100, aiming for a resolution of ~2.4 arcsec/px with a Canon 6D.

In order to match the Esprit 100 + Canon 6D in resolution and FOV, you'd need a 200mm F/4 scope/lens + an ASI183. That relaxes the mount requirements a bit, though. However, the 100mm aperture captures 4 times the amount of light a 50mm lens captures.

The ASI183 mono pro + electronic filter wheel + ZWO LRGB filters + RedCat combo costs a tad more than 2200€ at FLO. An Esprit 100 + flattener + a second-hand Canon 6D estimated at ~400€ would come to a bit less than 2400€. Not such a big difference. But narrowband imaging with the DSLR is less efficient.

Of course, you're right about the QE, and the above are valid only with my example and a few other combinations. Depending on the purpose, a smaller more sensitive mono sensor with smaller pixels might be much more desirable.

I wouldn't worry so much about set point cooling on a DSLR with big pixels. The sensor doesn't heat that much and I found that shooting the calibration frames at a couple degrees warmer or colder doesn't make any difference.


6 hours ago, Whistlin Bob said:

This is a nice topic to read through whilst on my hols. I'm currently a DSLR imager with plenty of headroom to get better with my current setup, but I can see myself trying the mono route at some point.

The point I really wanted to make, though, was how rare it is for a specialist forum with lots of passionately held views to debate an old chestnut like this without at some point descending into bickering. It really is refreshing.

As you were...

Excellent point and, for this, we must thank those who run, maintain and moderate SGL. It makes a refreshing break from the rest of the internet in which things are often rather different.

5 hours ago, alexbb said:

I'll just pop in to make things more difficult. Whilst I am aware that mono is more efficient than OSC when the sensors are about the same size, how about a full frame DSLR compared to a camera with a small sized mono sensor? The IMX183 sensor is perhaps the cheapest decent sized mono sensor available and the Panasonic in the ASI1600 is a tad more expensive, but larger. If the optics allow using a large sensor, would it still be more efficient to use a small sensor with a smaller scope or a larger scope with the larger sensor, even with a Bayer matrix?

A ZWO183 colour/mono behind a RedCat records a smaller FOV than a Canon 6D behind an Esprit 100. If I were to image RGB only, I'd rather get the much larger scope + a full frame DSLR than a ZWO183 mono + a small scope.

A used full frame DSLR can be found rather cheap.

Helpful, eh? :D

You're right that the DSLR often gives a cheap way into a large chip, but the abiding problem is that not many instruments will cover these chips correctly. Let's be honest: some manufacturers, including Takahashi, tend to be a bit optimistic on this matter.

Olly


On 21/08/2019 at 08:15, ollypenrice said:

That is why the professionals invented the LRGB system. It saves time.

Please sir,

Why don't we image LRG and compute B as L-(R+G) ?

Or even LMY?

Then G = L - M, B = L - Y, and R = L - (G + B) or R = (M + Y) - L

Much quicker, but would need filters with sharp cutoffs... I'm sure someone will be along soon to say why it won't work!
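In case the arithmetic isn't clear, here's roughly what I mean - a rough NumPy sketch, assuming the stacks are already calibrated, registered and on the same intensity scale, and that the filters tile the luminance band with no gaps or overlaps (the array names are just illustrative):

```python
import numpy as np

def b_from_lrg(L, R, G):
    """Synthesise the blue channel from luminance, red and green stacks."""
    return np.clip(L - (R + G), 0, None)  # clip: negative pixels are only noise

def rgb_from_lmy(L, M, Y):
    """Recover R, G, B from luminance plus magenta (R+B) and yellow (R+G) stacks."""
    G = L - M
    B = L - Y
    R = (M + Y) - L  # equivalently L - (G + B)
    return R, G, B
```

The obvious catch is that every subtraction also adds the noise of both terms, which is presumably what someone is about to point out.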

 

 


3 hours ago, alexbb said:

For the particular example of an ASI183 + a RedCat, besides going past the diffraction limit, the aimed resolution is less than 2 arcsec/px. You'd probably want at least an HEQ5-class mount for that resolution, a mount which should also be able to carry an Esprit 100, aiming for a resolution of ~2.4 arcsec/px with a Canon 6D.

In order to match the Esprit 100 + Canon 6D in resolution and FOV, you'd need a 200mm F/4 scope/lens + an ASI183. That relaxes the mount requirements a bit, though. However, the 100mm aperture captures 4 times the amount of light a 50mm lens captures.

The ASI183 mono pro + electronic filter wheel + ZWO LRGB filters + RedCat combo costs a tad more than 2200€ at FLO. An Esprit 100 + flattener + a second-hand Canon 6D estimated at ~400€ would come to a bit less than 2400€. Not such a big difference. But narrowband imaging with the DSLR is less efficient.

Of course, you're right about the QE, and the above are valid only with my example and a few other combinations. Depending on the purpose, a smaller more sensitive mono sensor with smaller pixels might be much more desirable.

I wouldn't worry so much about set point cooling on a DSLR with big pixels. The sensor doesn't heat that much and I found that shooting the calibration frames at a couple degrees warmer or colder doesn't make any difference.

OK, let's do a comparison of "features": Canon 6D vs ASI183mmc (mono cooled version), with appropriate scopes so that each gives a nice resolution of 2"/px and is lightweight enough to be handled by an HEQ5.

I'm going to "assume" "lower end" scopes here, so no Taks and such :D.

In order to sample at 2"/px with a colour sensor that has 6.5um pixels, you need ~1350mm of focal length. I know, I know, many will point out that 6.5um at 1350mm gives 1"/px, but that is for a mono sensor where each pixel counts; with an OSC sensor, where same-colour pixels are spaced out in the Bayer matrix, you need twice the focal length to get a true sampling rate of 2"/px. You can do the interpolation debayering that many do, but that is not real sampling, just interpolating sparse samples - you won't record actual information, you'll make it up, and because of Nyquist you won't get higher frequencies and detail that way.
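A quick sketch of where those focal lengths come from, if anyone wants to check the numbers (the factor of 2 is the same-colour Bayer pixel spacing, or equally the x2 software binning used further down; purely illustrative):

```python
def focal_length_mm(pixel_um, scale_arcsec_px, pitch_factor=1):
    """Focal length needed to sample at scale_arcsec_px.

    pitch_factor: 2 for same-colour Bayer spacing (OSC) or x2 binning,
    1 for a plain mono sensor used unbinned.
    """
    return 206.265 * pixel_um * pitch_factor / scale_arcsec_px

print(focal_length_mm(6.5, 2.0, pitch_factor=2))  # Canon 6D as OSC   -> ~1341 mm
print(focal_length_mm(2.4, 2.0, pitch_factor=2))  # ASI183 binned x2  -> ~495 mm
```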

There is a decent "candidate" scope to match this focal length, but I'm not sure it can really cover a full-frame sensor. A 6" RC has 1370mm focal length, and according to the TS website there is an x1.0 field flattener for RC scopes that should correct up to a 45mm circle. I'm not sure it will produce acceptable results with a full-frame sensor on a 6" RC, but let's suppose it will.

So we've sorted the Canon: we found a combination that will make it truly sample at 2"/px, and it will gather 6" worth of light.

Let's see what we can pair up with the ASI183 to give the same FOV and sampling rate. Due to its very small 2.4um pixels, we are limited to ~250mm of focal length, and that is very hard to get with any but the smallest scopes. But we are going to use a trick here. The ASI183 has a very large pixel count (5496 x 3672), and since we decided to use "super pixel mode" on our Canon to get a true sampling rate, effectively cutting the pixel count by 4 (2 times in width and 2 times in height), we can do something similar for the ASI183 - we can choose to bin x2 in software, so we can afford to use 500mm of focal length. Let's see what sort of scope we can use then.

We could choose some wacky hyperbolic Newtonian at F/2.8 or something like that, but since we said no Taks, we will also disregard this option (although I've seen a very interesting offering from TS for half the price of a Tak Epsilon), and let's also skip the Boren-Simon astrographs, although a 6" F/3.6 would be fairly interesting in this combination.

I think the only "realistic" option here would be a 4" refractor reduced to about 500mm (thus being F/5 - I don't think there should be any problem getting there), although maybe a 130PDS with the x0.73 CC would be a cheaper option; but that would result in ~475mm, which is sort of "cheating", as we would sample at ~2.1"/px rather than 2".

Anyways, let's go with that Canon 6D at 6" vs ASI183 at 4".

Canon 6D:

Light gathering of scope: 225%
Effective resolution: 2736 x 1824 px
Read noise: 4.8e (at ISO 800; source: https://clarkvision.com/articles/evaluation-canon-6d/ )
QE: peak of 49% (according to this: https://www.dpreview.com/forums/post/53054826 )
Remarks: no set point temperature control / questionable calibration

ASI183MM:

Light gathering of scope: 100%
Effective resolution: 2748 x 1836 px
Read noise: 4.4e (2.2e at unity gain; binning x2 in software gives 4.4e read noise per binned "pixel")
QE: peak of 84%
Remarks: proper calibration

With regards to FOV:

[FOV comparison screenshot]

Virtually the same, so no difference there.

If we assume a nebula target, then the Canon is at a disadvantage even with the scope having 2" more of aperture.

QE accounts for a 170% difference in light, while you will be capturing Ha with only 1/4 of the pixels in the case of OSC - so that is an additional 400% difference. Together that is about 680%, which is roughly x3 the 225% difference that comes from aperture. The ASI183 will also have lower read noise, and will calibrate out properly.

There ya go - if you aim for a fixed sampling rate and an HEQ5, then for imaging nebulae the ASI183 on a 4" scope is a better choice than the Canon 6D on a 6" scope.
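And a back-of-envelope check of that factor of 3, using the numbers above (Ha-dominated target, read noise and filter losses ignored):

```python
qe_ratio = 0.84 / 0.49             # peak QE, ASI183 vs Canon 6D -> ~1.7 (the "170%")
bayer_ha_penalty = 4               # only the red quarter of OSC pixels sees Ha
aperture_ratio = (150 / 100) ** 2  # 6" vs 4" light grasp -> 2.25 (the "225%")

print(qe_ratio * bayer_ha_penalty / aperture_ratio)  # ~3.0 in favour of mono + 4"
```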

 

 

 


7 minutes ago, Stub Mandrel said:

Please sir,

Why don't we image LRG and compute B as L-(R+G) ?

Or even LMY?

Then G = L - M, B = L - Y, and R = L - (G + B) or R = (M + Y) - L

Much quicker, but would need filters with sharp cutoffs... I'm sure someone will be along soon to say why it won't work!

 

 

I have one better for you :D

Let's do LRGB, but to get the max out of the data, let's compute the luminance as a stack of L and (R+G+B), the R stack as R and (L-(G+B)), the B stack as B and (L-(G+R)), and so on... I think that would be the best use of the data.

The CMY approach is in principle doable, and has even been tried and measured for performance - look here:

http://www.astrosurf.com/buil/us/cmy/cmy.htm

I'll quote just the conclusion at the end for those who don't want to read the whole page:

Quote

In conclusion, the use of CMY subtractive filters in the place of RGB filters is a valid option to carry out realistic color images of astronomical objects. It is noted that the first configuration allows a benefit in signal to noise ratio only for very faint objects and in my particular case of transmission filter (there is dispersion in the transmission curves of the different sets of filters). However this gain is not high in this particular situation. The reason comes from the contribution of noise because of the arithmetic operations between images to extract RGB components starting from the observations carried out through CMY filters. In the example shown here it appears that the chromatic balance is more homogeneous with RGB filters than with CMY filters (that indicates that it would be necessary to integrate signal longer through the cyan and yellow filters compared to the magenta filter). The problem involved in the recovery of the spectral lines in the case of RGB imagery is not necessarily real. Lastly, whatever technique is used, it is very simple to calculate the calibration coefficients to be applied to the images starting from a star similar to the Sun to obtain images having a pleasant visual aspect with a color balance close to the visual appearance that a human eye would perceive (but remember the reserve for scientific use of the CMY data...).

 


34 minutes ago, Stub Mandrel said:

Please sir,

Why don't we image LRG and compute B as L-(R+G) ?

Or even LMY?

Then G = L - M, B = L - Y, and R = L - (G + B) or R = (M + Y) - L

Much quicker, but would need filters with sharp cutoffs... I'm sure someone will be along soon to say why it won't work!

 

 

Please, Sir, why do you choose blue? Couldn't the same argument be advanced for any of the three filter colours? I haven't considered this question before but it's an interesting one.  One answer might be that the colour you don't measure might not be all that is left from the luminance when you have subtracted the signal from the other two filters. There might be gaps which did not come from the wavelengths passed by the unused filter.

However, this does leave me wondering if we really need to shoot through three colour filters. I can see an argument for shooting through two and luminance with a colour combining algorithm properly calibrated to identify the signal from the missing third. Another side of me says, 'Forget it, we work by making precise measurements, not by extracting estimates of missing measurements.'

Has anyone tried subtracting R and G from L and comparing it with B? I wouldn't be averse to trying it once the moon is gone.

Olly


Might be an interesting experiment, Olly.  Though perhaps there are complications if the losses through the different filters are not the same, so it's not necessarily the case that R + G + B = L in terms of photons actually captured?

James


6 minutes ago, JamesF said:

Might be an interesting experiment, Olly.  Though perhaps there are complications if the losses through the different filters are not the same, so it's not necessarily the case that R + G + B = L in terms of photons actually captured?

James

Depends on response curves of the filters.

L = R+G+B only in a "special" case - where, for example, L covers the 400-700nm range and B, G and R cover 400-500, 500-600 and 600-700nm respectively. There should be no gaps and no overlaps, and the QE of the filters should be the same.

However, the technique that I described above can be used regardless of whether the filters subdivide the 400-700nm range precisely.

If one has algorithms that:

a) normalise frame intensity (meaning equalise both signal strength and background - such an algorithm should be part of the processing workflow anyway, because even with a single filter the subs will differ in intensity and background level over a single evening: the target moves, atmospheric attenuation changes with air mass, and LP levels change during the course of the night with both time and target position in the sky)

b) do good SNR-based sub weighting (again, one should use it anyway because of a)

then stacking L together with synthetic L subs made out of R+G+B, and similarly stacking the colours with subs made out of L minus the other colours, will improve SNR and still create perfectly acceptable results.

In both cases - regular LRGB and this sort of mixed LRGB - one needs to do colour calibration to get proper colour, so there is no difference there if one colour comes out a bit stronger or weaker than from the single filters (their strengths depend on the filter QE and camera QE in the first place, so they are not "proper" either).

L itself depends on the QE of the sensor, so it is not uniform, but we don't have trouble using it like that. If there is some data missing, or some overlap, it is like using a sensor with a different QE curve behind a regular L filter - again, the results will be acceptable in the same way they are in the first place, depending on the QE curve of the sensor.
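If anyone wants to play with this, here's a very rough sketch of the idea in plain NumPy - assuming registered, calibrated subs, and using crude percentile normalisation and inverse-variance weights purely as stand-ins for the proper intensity normalisation and SNR weighting described above (not anyone's actual stacking pipeline):

```python
import numpy as np

def normalise(sub):
    """Crudely equalise background level and overall brightness between subs."""
    sub = sub - np.percentile(sub, 25)     # rough background removal
    return sub / np.percentile(sub, 99.5)  # rough brightness matching

def weighted_stack(subs):
    """Inverse-variance weighted mean as a simple stand-in for SNR weighting."""
    subs = [normalise(s) for s in subs]
    w = np.array([1.0 / np.var(s) for s in subs])
    w /= w.sum()
    return np.sum([wi * s for wi, s in zip(w, subs)], axis=0)

def mixed_luminance(l_subs, r_subs, g_subs, b_subs):
    """Stack real L subs together with synthetic L subs built as R+G+B."""
    synthetic = [r + g + b for r, g, b in zip(r_subs, g_subs, b_subs)]
    return weighted_stack(list(l_subs) + synthetic)

def mixed_red(l_subs, r_subs, g_subs, b_subs):
    """Stack real R subs together with synthetic R subs built as L-(G+B)."""
    synthetic = [l - (g + b) for l, g, b in zip(l_subs, g_subs, b_subs)]
    return weighted_stack(list(r_subs) + synthetic)
```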


27 minutes ago, ollypenrice said:

Has anyone tried subtracting R and G from L and comparing it with B? I wouldn't be averse to trying it once the moon is gone.

Surely you already have LRGB data to play with? Can you use 'pixel math' in Pixinsight to do the work?


All the talk of RGB addition/subtraction takes me back to the days of colour television where, to be able to transmit a 'colour' signal at all, various processes were devised that split and transmitted a 'full' bandwidth luminance signal together with an additional signal containing the colour difference signal(s), with everything re-assembled at the receiver... (https://en.wikipedia.org/wiki/PAL)

 

 

