
RGB as a LUM layer query


Ewan


Hi, I recently read that you can do LRGB + RGB to improve the overall picture, or RGB + RGB, the RGB being all the red, green and blue frames stacked as one image and then applied as a lum. When creating what I would call a 'false' lum layer, should I first process the RGB channels into colour and then combine them into a single RGB file, OR load all the R, G and B files into DSS and use the resulting TIFF as a greyscale 'false' lum?

If the weather improves I will collect the lum data, but I don't have it at the moment. One other thing: lum sub length. I have been doing 300-second subs; do you think this is too long, or can you go longer, or will the stars get oversaturated?

Or is there any other method you know of that would suit better?


Hi Martin, I thought they were valid, but I'm guessing it will come down to trial and error. I have tried stacking the RGB and then using it as a 30% top layer in PS. It works OK, but it means a saturation boost is needed to get the colours back up.

Out now doing 1200 sec R subs on NGC 7023 to quieten my image down.

Guiding very well, with a combined RA/Dec error of 0.15 & OSC 0.42.


If you take an RGB image and apply the same image over the top as lum you will get... the same image! Where is the new information? There isn't any.

The point about lum is that it contains three times the light obtained when shooting through a coloured filter (mono) or a Bayer matrix (OSC). Lum captures R, G and B simultaneously on every pixel. OSC captures R, G or B on a pixel-by-pixel basis, but any one pixel collects only one colour, and so a third of the luminance.

What you can do, though, with an RGB image or equivalent OSC one, is isolate the synthetic luminance which exists in any OSC or RGB image and process it differently. That makes perfect sense, since you'd process it for sharpness and contrast while you'd process the RGB for low noise and high colour intensity. Then you'd lose the RGB's own synthetic lum and replace it with the separately processed synthetic lum.
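To make the idea concrete, here's a minimal sketch of extracting a synthetic luminance from stacked colour channels with NumPy. The simple equal-weight average and the array shapes are illustrative assumptions, not a prescription from anyone in this thread; real software may use different channel weightings.

```python
import numpy as np

def synthetic_lum(r, g, b):
    # Average the three colour channels into a single greyscale
    # layer, which can then be processed separately from the RGB.
    return (r + g + b) / 3.0

# Stand-ins for real calibrated, stacked channel images.
r = np.random.rand(100, 100)
g = np.random.rand(100, 100)
b = np.random.rand(100, 100)
lum = synthetic_lum(r, g, b)
```

The resulting greyscale array can be stretched and sharpened independently, then recombined with the colour data.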

Also, if you shoot LRGB, you can extract a synthetic luminance from your colour layer, divide its exposure time by three, and weight it accordingly when combining it with a luminance layer. Well, so much for the theory. What I've found in practice is that sometimes this is worth doing and sometimes it ain't! I wish I knew why this is so. This afternoon I tried it on a guest's image and the synthetic lum from RGB made no useful contribution to the dedicated lum layer other than helping to tame a very bright star. (One third of the light means a smaller star...)
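The exposure-weighted combination described above could be sketched like this; the function name and the linear weighted average are my assumptions about one reasonable way to do it, not a description of any particular program's blend:

```python
import numpy as np

def combine_lum(dedicated, synthetic, t_dedicated, t_rgb):
    # Treat the synthetic lum as if it had one third of the total
    # RGB exposure time, since each filter passes roughly a third
    # of the light, then take an exposure-weighted average.
    t_synth = t_rgb / 3.0
    total = t_dedicated + t_synth
    w_ded = t_dedicated / total
    w_syn = t_synth / total
    return w_ded * dedicated + w_syn * synthetic
```

For example, 2 hours of dedicated lum combined with a synthetic lum from 3 hours of RGB would weight them 2:1.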

Olly


Maybe I should have said, Olly, that I was ONLY using the RGB as a synthetic lum, just for detail and sharpness. I know there would be no additional info, as no extra data was acquired.

A bit like using synthetic green in NB imaging: I don't mind doing it, BUT you should still acquire the S2 regardless, or at least attempt to, which is what I do. If the signal is too weak then yes, I can use bi-colour instead; this is why I tend to shoot S2 last in the NB range.

Lum length is another story. Would I be right in doing, say, 40 x 360-second exposures as opposed to 20 x 720, as I would have thought things could get a little saturated with long lum subs?


I'm firmly of the 'long sub' persuasion and use 30-minute subs increasingly frequently from our dark site. The problem of saturation concerns the background sky for most people, not the brightest parts of the image. You should try to expose to your sky fog limit. There are targets which will saturate for me in half-hour subs, but a few 'shorts' can sort that out. Lum can saturate stars, too, but I try to remove it from stars when possible, which is most of the time. To bring some practicality to the sub-length debate I've put a couple of files in Dropbox: M31, one set of 7 x 30 minutes and one set of 7 x 2 minutes for the core. Have a play and see what you think.

https://dl.dropboxusercontent.com/u/63721631/M31%207X1800%20L.tif

https://dl.dropboxusercontent.com/u/63721631/L%207X120.tif

Olly



As Olly said, this depends very much on your local sky conditions/fog limit, and you have to experiment to find out what works best (or there's a method described on the Starizona pages). I've imaged at a very dark site (Olly's) and am convinced that long subs really work better in those conditions. But at my suburban home site I get no benefit from very long subs; all that happens is the sky background gets brighter, and I get better results with many shorter subs. At the dark site, after 30 minutes the sky background count is probably still below 1000 ADU, whereas at home I'll usually reach 1000 after 90 seconds, and my typical lum subs are 3-5 min.
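Assuming the sky signal accumulates roughly linearly, the figures above can be turned into a back-of-envelope calculator. The function name and the linearity assumption are mine; real sky brightness varies through the night, so treat this only as a rough guide:

```python
def time_to_sky_limit(target_adu, sky_rate_adu_per_s):
    # Seconds of exposure needed for the sky background to reach
    # target_adu, assuming a constant, linear sky signal rate.
    return target_adu / sky_rate_adu_per_s

# Suburban example from the thread: ~1000 ADU reached in ~90 s,
# so the rate is about 11 ADU/s.
suburban_rate = 1000 / 90.0
print(time_to_sky_limit(1000, suburban_rate))  # ~90 s
```

The same formula suggests why a dark site tolerates half-hour subs: if the sky rate is ten times lower, the same background level takes ten times longer to reach.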

Adrian


This will sound daft, but how do I find my background count?

Image processing programs usually have a cursor or aperture that you can place on an empty background area of your (calibrated) raw image and get a count. Best to get an average of a block of pixels, and to take several such measurements from different parts of the image. In Maxim for example, the Information window gives min, max and average pixel values for the area sampled in Area or Aperture mode.
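If you'd rather script it than click around, here's a rough NumPy equivalent of sampling a block of pixels; the function name, patch size and centring convention are my own assumptions about what an "Area mode" readout does:

```python
import numpy as np

def background_stats(image, x, y, size=25):
    # Min, max and mean of a size x size patch centred on (x, y),
    # taken from an empty background region of a calibrated frame.
    half = size // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    return patch.min(), patch.max(), patch.mean()
```

Averaging several such patches from different corners of the frame guards against a single sample landing on a faint star or a gradient.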

Strictly speaking, you should measure a fully calibrated image, to ensure that the camera dark signal and the bias/pedestal level are subtracted first and any uneven illumination is corrected with flats.

More info and some theory here:

http://starizona.com/acb/ccd/advtheoryexp.aspx

Adrian

