
L-RGB relative exposure?


Recommended Posts

Hi

I have some RGB filters on their way. Being a noob to RGB, I wondered what the relative split and duration of exposures should be?

E.g. luminance: I assume this captures everything, so is it a shorter exposure compared to R, G and B, or do you shoot the same to balance exposure? (Exposure based on luminance, avoiding saturation.)

Also, I have read that you should spend more time on L than on R, G and B, so for example 2 hr L, 40 min R, 40 min G, 40 min B?


24 minutes ago, tomato said:

On my dual rig the RGB channels get one third of the time of the Lum; this ratio seems to work well for me.

Do you mean for both total integration and individual subs?

 


I wouldn’t normally keep changing filters on the RGB scope, as I would lose time re-focusing, and I always try to collect the blue data when the target is highest in the sky. So I might run two RGB channels in a single session, but overall I aim for a total integration time ratio of 3:1:1:1 for LRGB; in practice it is usually the blue channel that falls a bit short.


I work on the same length of Lum as the total RGB, but collect the Lum at bin 1 and the RGB at bin 2, on the principle that the luminance is where the detail is, while the RGB just paints a colour layer over the top.

I schedule my RGB as either RGB or BGR depending on how bright the background sky is likely to be. I may also split the RGB to go RGBGR as 1:1:2:1:1 if the Blue is going to be collected when the target is highest in the sky.


I generally do 2-3x extra luminance, but it depends on what you're trying to do. Revealing faint tidal tails, for example, will need a much bigger factor of extra lum, depending on the signal of the target and its separation from the background LP where you're imaging.


I agree with Elp in that it's target-specific. There is no point in chasing faint tidal tails in RGB, so put the time into the luminance. Where you're not chasing faint stuff, it tends to be easier to combine the L and the RGB if you have a similar total exposure. Pixel by pixel, though, L still gives more light per pixel than RGB combined, and that's the whole point. Time saving is the reason the LRGB system was invented.

My focus with the Tak FSQ106 and TEC 140 did not change perceptibly between L, R, G and B, so I shot RGB, RGB, RGB etc. I also used Lum flats for everything, again with no perceptible consequences. Quite a few of our expert guests do likewise. The L channel is going to define the RGB brightnesses anyway, but if you do have a problem you might need flats per filter. I didn't, so I just got on with it!

Olly


I'm still not clear on relative exposure. Do you expose RGB for, say, 3x the lum? I'm not talking about overall integration, but the exposure of individual subs.

For example, say that a test frame on lum gave 1 minute as a good sub exposure. Is it then typical to do 3 min subs for R, G and B? 


As long as you're swamping the read noise, I don't think it matters. I usually keep exposures the same: 60s lum? Then 60s R/G/B. The beauty of mono is you can pick and choose how you do it.


The best approach (not necessarily the one you'll end up using) would be to measure things on the specific target and go by that.

Measure the background level in each channel, then determine a sub length that swamps the read noise. Measure the target signal and aim for a specific SNR in the total stack for that channel.
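That "swamp the read noise" criterion can be turned into a quick calculation. A minimal sketch (the read noise, sky rates and the 10x variance factor below are illustrative assumptions, not measured values):

```python
def min_sub_exposure(read_noise_e, sky_rate_e_per_s, swamp_factor=10.0):
    """Shortest sub (seconds) for which the sky background variance
    (sky_rate * t) exceeds the read-noise variance by `swamp_factor`,
    i.e. read noise no longer dominates a single sub."""
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s

# Illustrative values: 1.5 e- read noise, a sky rate of 2 e-/s/pixel
# through L but only 0.4 e-/s/pixel through a blue filter.
lum_sub = min_sub_exposure(1.5, 2.0)    # short subs suffice in L
blue_sub = min_sub_exposure(1.5, 0.4)   # blue needs much longer subs
```

This is why narrower filters (or darker skies) push you towards longer individual subs even when the total integration plan stays the same.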


I generally use nights of better seeing for L, and nights of poorer seeing for RGB. On bright targets, I may even opt to not collect L at all. My philosophy is that a longer integration time gives a clearer signal: depth in L, and colour separation in RGB. As for individual exposure times, I avoid any pattern or banding in the stacked subs. That's my indicator of "drowning the read noise". Apart from that, I avoid blown out stars as much as possible. So, individual exposure time to keep colour in the stars, and total integration time for depth.

L: 180 s at 0 gain

RGB: 300 s at 0 gain

Ha: 240 s at 200 gain

All at f/5.3


Yes, I think that being cautious not to over-expose the RGB pays off. The higher you go in brightness, the less colour you have. It is also possible to use the RGB as a 'short exposure' set for layer-masking short and long exposures when the target requires it (HDR imaging).

For me star removal and replacement is now an obligatory aspect of processing in which all you need for the stars is a gently exposed RGB layer. They will be small, tight and colourful. Bingo.

Olly


32 minutes ago, ollypenrice said:

Just what I've found to be the case. Bright tends towards white when I process data.

Olly

I think that is an artifact of processing and not of higher signal in RGB.

Both the sensor and light behave linearly (at least they should: light does, and the sensor does if properly designed and executed), so the ratio of the components is preserved regardless of how long you expose for.

It is this ratio that should be preserved in processing, and often it isn't, which leads to "desaturation" of colour when stretching. Ideally, one should stretch only the brightness component and use the RGB information as a ratio to "colour" the luminance after stretching.
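The idea above can be sketched in a few lines: stretch the brightness only, then re-apply the original channel ratios. The gamma stretch and toy pixel values here are illustrative assumptions, not anyone's actual workflow:

```python
import numpy as np

def color_preserving_stretch(r, g, b, stretch):
    """Stretch only the brightness and re-apply the linear R:G:B ratios,
    so the colour is not washed out by the stretch."""
    lum = (r + g + b) / 3.0
    with np.errstate(divide="ignore", invalid="ignore"):
        scale = np.where(lum > 0, stretch(lum) / lum, 0.0)
    return r * scale, g * scale, b * scale

# Toy linear pixels and a simple gamma stretch (illustrative only)
r = np.array([0.04, 0.40])
g = np.array([0.02, 0.20])
b = np.array([0.01, 0.10])
rs, gs, bs = color_preserving_stretch(r, g, b, lambda x: x ** 0.5)
# the R:G:B ratio of each pixel survives the stretch unchanged
```

Stretching each channel independently, by contrast, compresses the differences between channels in the highlights, which is exactly the "bright tends towards white" effect described above.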


FWIW.....

- I use a 2:1:1:1 ratio, but I think that's a minimum for Lum and 3:1:1:1 is probably better.

- In the age of CMOS and BlurXTerminator, binning colour is so passé...

- I expose Lums for 240 s and R, G and B for 300 s. There are lots of opinions on exposure times, but I go as long as possible without saturating any (or too many...) stars. I should note, though, that my site is Bortle 3/2, so I don't have to worry about light pollution.

- Flats for each filter are necessary. Filters are where dust is most likely and using just the Lum flat for the other filters won't take care of most dust motes.

- I also have highly variable and generally poor seeing. Shooting Lums when it's best and R, G, & B when less optimal is a great strategy.

Cheers,
Scott


On 31/05/2024 at 16:51, Scott Badger said:

 

- Flats for each filter are necessary. Filters are where dust is most likely and using just the Lum flat for the other filters won't take care of most dust motes.

 

I have many hundreds of hours of exposure time which contradict this view, and a number of experienced imagers who have worked from my place can add still more exposure time to mine. The filters are not, in my experience, the primary source of dust bunnies. The worst ones come from closer to the chip, the chip window being the usual culprit. The objective is way too far out of focus to add any. Of course, if your filters add dust bunnies then they do, and you need flats per filter. However, I think that literally none of my images here on AB uses flats per filter.

https://www.astrobin.com/users/ollypenrice/

The later ones are OSC, but the earlier ones are from mono using just a luminance flat.

Olly

 


3 hours ago, ollypenrice said:

The filters are not, in my experience, the primary source of dust bunnies.

 

I did not need to open my 7x2" filter wheel for nearly half a year, but once I needed to unscrew my 2600MM, the dust motes appeared immediately. So I can rather confirm the above. Also, dust has little chance to land on the camera if you don't need to play with it. Even so, I take flat frames for each used filter after every session. :)


I agree that dust on the sensor window is worse than on the filters, but I have a harder time keeping dust out of the filter wheel than out of the camera, so that's where it's more likely to be. And no matter how careful I am, I never seem to get it completely clean... If you're able to keep your filters (in a filter wheel?) dust free and use one flat for all, that's great and I envy you.

Cheers,

Scott


26 minutes ago, Scott Badger said:

I agree that dust on the sensor window is worse than on the filters, but I have a harder time keeping dust out of the filter wheel than out of the camera, so that's where it's more likely to be. And no matter how careful I am, I never seem to get it completely clean... If you're able to keep your filters (in a filter wheel?) dust free and use one flat for all, that's great and I envy you.

Cheers,

Scott

Each dust particle casts a shadow on the sensor. A particle that is further from the sensor has a larger penumbra than a particle close to the sensor, but that larger penumbra is also much weaker. Dust on a filter has such a weak penumbra that it's hardly noticeable unless the image is very deep. The size of the shadow also depends on the f-number of the scope: a fast scope creates a wider shadow than a slow scope. So the question "should you take flats per filter?" isn't a simple yes or no question. As always: it depends.

I take flats per filter, but have never investigated whether I need to. I want to go as deep as possible in my images, and have adopted the philosophy 'better safe than sorry'. I reuse flats for most of a season, and the extra time I need to invest is negligible. Also, since dust shadows affect lightness more than colour, clean L subs are more important than clean RGB subs.
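The geometry above can be put in rough numbers. In a simplified model, a mote at distance d from the sensor in an f/N converging beam is smeared over a circle roughly d/N wide; the 20 mm and 1 mm distances below are illustrative guesses, not measurements of any particular imaging train:

```python
def dust_shadow_diameter_um(distance_mm, f_ratio):
    """Rough diameter (in microns) of the blur a dust mote's shadow picks
    up on the sensor: at distance d from focus, an f/N light cone is d/N
    wide, so the shadow is smeared over roughly that diameter."""
    return distance_mm * 1000.0 / f_ratio

# Filter ~20 mm in front of the sensor vs. the sensor window ~1 mm away,
# both in an f/5 beam (distances are illustrative):
filter_blur = dust_shadow_diameter_um(20, 5)   # ~4 mm wide, faint penumbra
window_blur = dust_shadow_diameter_um(1, 5)    # ~0.2 mm wide, dark shadow
```

The same mote therefore casts a shadow roughly twenty times wider, and correspondingly weaker, from the filter than from the sensor window, which is why window dust produces the ugly, sharply defined bunnies.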



This exactly. Bad flats can ruin an image; as a precaution it's best to take them for each filter used in a session, even if you think you don't need them.

At f/2 dust motes are next to invisible, but I still take flats; the most prominent features are the vignetting, and you can also see the central obstruction (with my SCT anyway).

(This has gone a bit off topic).



For my setup, a C9.25 Edge at f/10, the dust mote on my luminance filter is quite noticeable and would be more so after integration if I didn’t correct it with a flat. Flats made with any of the other filters wouldn’t correct it, and if I used the lum flat for the other filters’ subs, it would overcorrect.

Flats also correct for PRNU, which is wavelength dependent.
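For reference, flat correction is just a division by the normalised master flat, which is what removes vignetting, dust shadows and PRNU together. A minimal sketch with made-up pixel values (not anyone's actual calibration pipeline):

```python
import numpy as np

def apply_flat(calibrated_light, master_flat):
    """Divide a bias/dark-corrected light by the master flat, normalised
    to its median, so vignetting, dust shadows and pixel response
    non-uniformity (PRNU) divide out while the signal level is kept."""
    norm = master_flat / np.median(master_flat)
    return calibrated_light / norm

# Toy frame: the second pixel sits under a 20% deep dust shadow, which
# attenuates the light and the flat equally, so division removes it.
light = np.array([100.0, 80.0])
flat = np.array([1.0, 0.8])
corrected = apply_flat(light, flat)
```

Because PRNU is wavelength dependent, the `master_flat` taken through the matching filter is the one that divides out correctly.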

Other than for cleaning, I don’t change or open my imaging train, so flats last me several months, and a flat per filter seems like a small investment for cleaner subs.

And yes, bad flats can do as much damage as good flats can repair, but the solution is easy enough: just take good flats.

Cheers,

Scott


On 22/05/2024 at 12:14, 900SL said:

Hi

I have some RGB filters on their way. Being a noob to RGB, I wondered what the relative split and duration of exposures should be?

E.g. luminance: I assume this captures everything, so is it a shorter exposure compared to R, G and B, or do you shoot the same to balance exposure? (Exposure based on luminance, avoiding saturation.)

Also, I have read that you should spend more time on L than on R, G and B, so for example 2 hr L, 40 min R, 40 min G, 40 min B?

It seems simple but is actually a difficult question, not because of the math involved but because of the accuracy of the parameters that go into it. To get an accurate exposure time for a given SNR, the most important factor is the sky brightness.

Many professional observatories have tables relating the sky counts to moon illumination in different filters. To further complicate things, sky brightness changes during the night and the proximity of the object being imaged to the moon is also important.

I started making tables relating the sky rate to moon illumination, and it makes a substantial difference compared with just relying on a ballpark sky-mag chart.

From what I found, if you want to have equal SNR in all channels, you will have to expose more for RGB than L, contrary to what people are used to. A ratio of 1:3:3:3 (L:R:G:B) is a good starting point.
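The reasoning can be sketched with the usual background-limited SNR formula, SNR = S * t / sqrt(B * t). The signal and sky rates below are toy numbers, assuming only that L passes roughly the combined flux of the three colour filters for both target and sky:

```python
def time_for_snr(target_snr, signal_e_per_s, sky_e_per_s):
    """Integration time needed to reach `target_snr` on a sky-limited
    target:  SNR = S * t / sqrt(B * t)  =>  t = SNR**2 * B / S**2."""
    return target_snr ** 2 * sky_e_per_s / signal_e_per_s ** 2

# Toy rates: L collects ~3x the target flux and ~3x the sky flux of a
# single colour filter (illustrative assumption, not a measurement).
t_lum = time_for_snr(50, signal_e_per_s=3.0, sky_e_per_s=6.0)
t_red = time_for_snr(50, signal_e_per_s=1.0, sky_e_per_s=2.0)
# each colour channel then needs three times the luminance integration
```

Under that assumption each colour channel needs about 3x the L time for the same stack SNR, which is where a 1:3:3:3 starting point comes from.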

 

If someone is interested in how to get the sky counts from your images during a night session, the basic steps are simple:

  • Calibrate the subs using darks and bias (flat is optional)
  • Ideally mask the sources in your image (stars, galaxies)
  • Compute the median value of the background 
  • Optionally you can convert the counts to a sky magnitude, but it isn't really needed since we already have the sky count rate
  • Do this for every frame acquired during the night, then take a median value (you will notice the sky rate changes during the night)

I suppose the steps can be done in PixInsight, maybe involving PixelMath, but I work in Python.

Here are the main steps in Python:

import numpy as np
from photutils.background import Background2D, MedianBackground

def calibrate_image(raw_frame, master_bias, master_dark):
    """
    Preprocess a raw image frame by applying bias and dark corrections.

    Parameters
    ----------
    raw_frame : array-like
        The raw image data.
    master_bias : array-like
        The master bias frame.
    master_dark : array-like
        The master dark frame (bias-subtracted).

    Returns
    -------
    corrected_frame : array-like
        The preprocessed image data.
    """
    # Bias correction
    bias_corrected = raw_frame - master_bias

    # Dark correction
    corrected_frame = bias_corrected - master_dark

    return corrected_frame


def sky_brightness(sky_rate_electrons, gain, exposure_time,
                   zero_point=27.095, image_scale=0.92):
    # Total sky counts in ADU for this exposure
    sky_rate_adu = sky_rate_electrons / gain
    total_sky_counts_adu = sky_rate_adu * exposure_time
    # Instrumental magnitude, then surface brightness per arcsec^2
    mag = -2.5 * np.log10(total_sky_counts_adu) + zero_point
    pixel_area_arcsec2 = image_scale ** 2
    surface_brightness = mag + 2.5 * np.log10(pixel_area_arcsec2)
    return surface_brightness


def calculate_sky_electron_rate(corrected_frame, exposure_time, gain):
    # Estimate the 2D sky background and take its median level (ADU)
    bkg_estimator = MedianBackground()
    bkg = Background2D(corrected_frame, (50, 50), filter_size=(3, 3),
                       bkg_estimator=bkg_estimator)
    median_sky_level = bkg.background_median

    # Convert to electrons and normalise by the exposure time
    sky_electron_level = median_sky_level * gain
    sky_electron_rate = sky_electron_level / exposure_time

    return sky_electron_rate

 

 


6 minutes ago, dan_adi said:

From what I found, if you want to have equal SNR in all channels, you will have to expose more for RGB than L, contrary to what people are used to. A ratio of 1:3:3:3 (L:R:G:B) is a good starting point.
I think that, in this analysis, you are looking at raw data and not at data as it will be, or can be, processed for a final image. My capture procedure is based on what I will do with the data in processing, not on its relationship to the raw signal coming from the sky.

I do not want equal SNR in all channels. If my data were to remain unprocessed then, yes, I might want that, but my data will be processed. If I expose for faint tidal tails, rarely seen, in luminance, then my stars in luminance will be overexposed. This is not a problem because I will not use luminance on my stars; I will use RGB only for stars. If galaxy cores are, in the same way, overexposed in luminance, I won't use them; I will use the RGB-only cores.

One of my fundamental principles in imaging is to ask myself: what am I going to do with this layer? The answer to that question determines how I shoot it.

Olly

