
I'm imaging with an ASI 1600MM-Cool, the ZWO mini filter wheel and their matched set of LRGB filters. Given that I take all my sub-exposures at full resolution - i.e. no binning - why should I bother with L, as the RGB filters pretty much cover the whole of the visible spectrum anyway? This way I would save 25% of my imaging time.

Thanks in advance

Neil


Neil,

I only shoot LRGB at the moment (I don't bin any of my subs) so I can't comment from direct experience. There's an earlier discussion (see below) in which some opinions seem to be that, for stars and star clusters at least, RGB is just fine. 

My interpretation is as follows, but I'm open to correction. In L, your system is wide open to all visible wavelengths. Even if the combined throughput of your system at R+G+B equals the throughput of the L filter, each minute you expose in L requires at least three minutes (one each for R, G and B) to match in terms of signal-to-noise. Since the overall signal level in an LRGB image comes from your L, and the colour information from RGB, you achieve a colour image at a given S:N more quickly than you would generating it solely from RGB.
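That arithmetic can be sketched numerically. This is a minimal shot-noise-only sketch, with made-up photon rates (not measured values for the ZWO filter set), assuming the L filter passes roughly the combined R+G+B flux:

```python
import math

# Assumed photon rates in e-/s - illustrative only, not measured values
rate_rgb = 1.0           # rate through each of the R, G and B filters
rate_l = 3.0             # L assumed to pass roughly R+G+B combined

def snr(rate, t):
    """Shot-noise-limited SNR: signal / sqrt(signal)."""
    signal = rate * t
    return signal / math.sqrt(signal)

t_l = 60.0               # one minute of L
target = snr(rate_l, t_l)

# Time through a single colour filter to reach the same SNR:
# snr = sqrt(rate * t)  =>  t = target**2 / rate
t_colour = target ** 2 / rate_rgb
print(t_colour / t_l)    # each colour filter needs 3x the L exposure
```

So under these assumptions, matching one minute of L takes three minutes per colour channel, which is where the time saving of shooting L comes from.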

So rather than saving 25% of your imaging time by dropping the L, you may end up spending more time than you need: you would have to drive up your R, G and B exposure times to get good S:N, when you could instead reduce each of those by more than the time it takes to add in the L and still achieve the same quality.

I stand ready to be corrected.

Regards


Nigel (see other thread below)

 

 


3 hours ago, ngwillym said:

I'm imaging with an ASI 1600MM-Cool, the ZWO mini filter wheel and their matched set of LRGB filters. Given that I take all my sub-exposures at full resolution - i.e. no binning - why should I bother with L, as the RGB filters pretty much cover the whole of the visible spectrum anyway? This way I would save 25% of my imaging time.

Thanks in advance

Neil

Hi Neil

For broadband objects, I always image in LRGB rather than RGB.  My three reasons are:

1. For the same exposure times, the Lum subs will achieve a higher SNR than the individual colour sub frames - the result is that faint objects are revealed more clearly.

2. I always process the Lum and colour data separately, since the eye perceives monochrome and colour differently.  It therefore makes sense to process the two data sets in ways that exploit this difference in perception.  For instance, in Lum I attempt to maximize detail and contrast.  In particular, I find that applying deconvolution to the high-SNR areas of the Lum data is very effective at making the image less blurry.  Even for starfields (e.g. globular clusters) this is important, since it helps resolve individual stars.

3. On a personal note, I prefer LRGB stars to RGB stars, since you can obtain a more natural "glowing" effect.  
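As an aside on point 2, the deconvolution applied to high-SNR luminance data is typically a Richardson-Lucy style algorithm (the imaging packages offer regularised variants of it). A bare-bones sketch in numpy/scipy, using a made-up Gaussian PSF and a single toy "star", just to show the idea:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Basic Richardson-Lucy deconvolution (no regularisation)."""
    estimate = np.full(image.shape, image.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)      # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy example: blur a single 'star' with a Gaussian PSF, then recover it
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

sharp = np.zeros((64, 64))
sharp[32, 32] = 1000.0
blurred = fftconvolve(sharp, psf, mode="same")

restored = richardson_lucy(blurred, psf, iterations=50)
# The restored star is much more concentrated than the blurred one
print(blurred.max(), restored.max())
```

Real data is noisy, which is why in practice it is only applied to the high-SNR regions and with regularisation, as Alan describes.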

If you have only RGB data and you are not concerned about faint objects, then I'd suggest creating a synthetic luminance and processing the two sets of data separately. Generally, you should find that you obtain superior results this way.  
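For what it's worth, a synthetic luminance is just a (possibly weighted) combination of the calibrated R, G and B stacks. A minimal numpy sketch with equal weights - the best weights in practice depend on your filters and sensor QE, so treat these as placeholders:

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1/3, 1/3, 1/3)):
    """Combine R, G and B stacks into a synthetic luminance frame."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# Toy 2x2 'stacks' standing in for calibrated channel images
r = np.array([[10.0, 20.0], [30.0, 40.0]])
g = np.array([[12.0, 18.0], [28.0, 42.0]])
b = np.array([[ 8.0, 22.0], [32.0, 38.0]])

lum = synthetic_luminance(r, g, b)
print(lum)  # per-pixel average of the three channels
```

The result can then be stretched and sharpened separately from the colour data, exactly as with a true luminance.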

Alan


Yes, for a given level of signal shooting in LRGB is much faster than shooting in RGB-only because, as Alan says, the L filter is passing about three times as much light as the colour ones. Indeed, the LRGB system was invented, purely and simply, to save time.

If I compare three hours of RGB synthetic luminance with one hour of true luminance I might expect them to have comparable signal - but they just don't. The lum goes deeper. If you are after dark, dusty or very faint targets then it's lum all the way; you will do best to take enormous amounts of it. It matters less on star clusters, though.
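One hedged explanation for why the true luminance goes deeper at equal total signal: three hours of RGB means three times as many subs feeding the synthetic luminance, and each sub contributes its own read noise. A sketch with illustrative numbers (the read noise figure is an assumption in the right region for an ASI1600, not a measurement):

```python
import math

# Assumed values - illustrative only
read_noise = 1.7         # e- RMS per sub (assumed)
sub_len = 300.0          # 5-minute subs
faint_rate = 0.02        # e-/s from a faint dust lane, per colour channel
l_rate = 3 * faint_rate  # L assumed to see roughly the combined flux

def snr_stack(rate, n_subs):
    """SNR of a stack: shot noise plus per-sub read noise."""
    signal = rate * sub_len * n_subs
    noise = math.sqrt(signal + n_subs * read_noise**2)
    return signal / noise

# 1 h of L (12 subs) vs synthetic lum from 3 h of RGB (36 subs at 1/3 the rate)
snr_l = snr_stack(l_rate, 12)
snr_syn = snr_stack(faint_rate, 36)
print(snr_l, snr_syn)  # same total signal, more read noise in the synthetic lum
```

Both stacks collect the same total signal here, but the synthetic luminance carries three times the accumulated read noise, so it comes out with the lower SNR on faint structure.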

Unlike Alan I often prefer to remove the luminance from stars.

Olly


Thanks everyone. The reason for asking was that the 'optimised' LRGB filter set from ZWO is set up to provide almost equivalent exposures for each filter - but as you say, if I just image in clear - i.e. no filter - for L, then yes, the photon density is much higher. So I suppose the corollary is: why should I be using the L filter rather than no filter at all?


1 hour ago, ngwillym said:

Thanks everyone. The reason for asking was that the 'optimised' LRGB filter set from ZWO is set up to provide almost equivalent exposures for each filter - but as you say, if I just image in clear - i.e. no filter - for L, then yes, the photon density is much higher. So I suppose the corollary is: why should I be using the L filter rather than no filter at all?

Cameras are sensitive to non-visible light, e.g. UV/IR - effectively, this is light pollution. In addition, if your setup includes a refractive component before the light hits the camera sensor, this unwanted UV/IR will be refracted differently from visible light and will come to focus at a different point, creating star bloat.  Lum filters are manufactured so that the bandpass covers the visible spectrum, largely excluding UV/IR light.

Alan

