
Red, Green and Blue subs? What do you mean?



In many of the brilliant images that I see on this site, people say that they took a number of red, blue and green subs and give the total time.

What does that mean? Is it that the red filter for example is used and subs taken, then the same for blue and green? Then are they all merged together?

I keep seeing this, but I'm not totally sure what it means or how it works.


You are basically right - Red, Green and Blue filtered images can be combined to produce what is known as an RGB (full colour) image. The combination of these three colours in the right percentages will produce all the other colours you can see - this is how a TV or computer monitor works too.
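That combination step can be sketched in a few lines of NumPy (a hypothetical example, not from the thread): three monochrome filtered frames become the three planes of one RGB image.

```python
import numpy as np

# Hypothetical example: three monochrome "subs" (tiny 2x2 stand-ins for
# full filtered frames), stacked into one RGB colour image.
red   = np.array([[0.9, 0.1], [0.5, 0.0]])
green = np.array([[0.2, 0.8], [0.5, 0.0]])
blue  = np.array([[0.1, 0.1], [0.5, 0.9]])

# Stacking along a new last axis gives shape (height, width, 3): an RGB image.
rgb = np.stack([red, green, blue], axis=-1)

print(rgb.shape)   # (2, 2, 3)
print(rgb[0, 0])   # the top-left pixel is dominated by red
```

In real processing the frames would of course be calibrated and aligned first; the stacking itself is this simple.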

Quite often, another filter - a plain, clear one this time - is used to capture the overall brightness of the object, known as the 'Luminance' channel, and this goes into the mix to produce an LRGB image.

Edited to add this link which may assist


Visible light can be treated as a mixture of red, green and blue. By taking a series of monochrome images through red, green and blue filters, the component parts of the visible spectrum can be recorded on a monochrome camera, each filter passing only its own colour - i.e. the red filter records red and blocks green and blue, and so on. The monochrome images can then be combined in Photoshop using the RGB channels to create a colour image.

Peter


So, can you take red, green and blue subs using filters and an unmodded DSLR?

Will take a read of the link, thank you.

Sorry for the questions, but I like to know what's going on and how I can best achieve the best possible images.


So, can you take red, green and blue subs using filters and an unmodded DSLR?

You are already doing so with a DSLR (modified or not), because a DSLR has a mono sensor with a grid of tiny red, green and blue filters printed over its surface in a pattern called a Bayer matrix.


But with a DSLR you can't get the different times across the different channels can you? What about Luminance, can you do that with a DSLR?

Can I just say Steve, your book is great, it's now starting to make sense!! I looked at it for 8 months before I got my scope out .............. petrified!!!


But with a DSLR you can't get the different times across the different channels can you? What about Luminance, can you do that with a DSLR?

Perfectly true, each channel gets the same exposure time - although the green channel is effectively sampled twice as heavily, as the Bayer matrix contains twice as many green pixels as red or blue.
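For illustration, the common RGGB Bayer tile can be written out and the photosites counted (a minimal sketch, not tied to any particular camera):

```python
import numpy as np

# An RGGB Bayer tile: each 2x2 cell on the sensor has one red, two green
# and one blue photosite, so green is sampled twice as densely.
tile = np.array([['R', 'G'],
                 ['G', 'B']])

counts = {c: int((tile == c).sum()) for c in 'RGB'}
print(counts)  # {'R': 1, 'G': 2, 'B': 1}
```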

With regard to the 'luminance channel', this is derived from the RGB channels when the Bayer Matrix is decoded.
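A synthetic luminance of that kind is usually just a weighted sum of the decoded channels. As a sketch (the Rec. 601 weights below are one common choice; other software uses different coefficients or a plain mean):

```python
import numpy as np

# A tiny 1x2 RGB image standing in for decoded DSLR data.
rgb = np.array([[[0.9, 0.2, 0.1],
                 [0.1, 0.8, 0.1]]])

# Synthetic luminance as a weighted sum of R, G and B
# (Rec. 601 weights - an illustrative choice, not a universal standard).
luminance = (0.299 * rgb[..., 0]
             + 0.587 * rgb[..., 1]
             + 0.114 * rgb[..., 2])

print(luminance.shape)  # (1, 2): one luminance value per pixel
```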


With a one shot colour system using a Bayer Matrix you cannot give red longer than, for example, blue, no. But this is not terribly important. You can always adjust the balance in graphics programmes afterwards.

You cannot exactly shoot luminance either, but the software that interprets your DSLR RGB data creates a synthetic luminance channel, and you can process this differently from the RGB channels to good effect. In a nutshell you can, for example, blur the RGB channels to reduce noise and yet sharpen the synthetic luminance channel to enhance detail. A true luminance channel is probably better, but the advantage (I use both one-shot colour and mono CCD cameras) is not huge.
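That "blur the colour, sharpen the luminance" step might be sketched like this (a hypothetical NumPy example; the 3x3 box blur, channel-mean luminance and 1.5 unsharp amount are all arbitrary illustrative choices):

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """Crude 3x3 box blur built from shifted copies; edges are replicated."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

rng = np.random.default_rng(0)
rgb = rng.random((32, 32, 3))  # stand-in for noisy colour data

# Blur each colour channel to suppress chroma noise.
rgb_smooth = np.stack([box_blur(rgb[..., c]) for c in range(3)], axis=-1)

# Synthetic luminance (simple channel mean), then unsharp masking:
# sharpened = lum + amount * (lum - blurred_lum)
lum = rgb.mean(axis=-1)
lum_sharp = lum + 1.5 * (lum - box_blur(lum))
```

The recombined image then takes its colour from `rgb_smooth` and its detail from `lum_sharp`, which is the essence of LRGB-style processing.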

Since this is a jolly old game you can further enhance images using narrowband filters. The famous nebulae, for example, often shine in the light of ionised hydrogen, a deep red. You can, if your DSLR is modified, shoot a layer filtered to select Ha and combine it with your red channel (and maybe your luminance, gently) in order to enhance structural detail.
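The Ha-into-red blend can be sketched as a simple weighted mix (hypothetical pixel values; the 70/30 ratio is an arbitrary choice, and real workflows blend rather more carefully):

```python
import numpy as np

# Stand-in 1x2 frames: a narrowband hydrogen-alpha sub and the red channel.
red = np.array([[0.4, 0.5]])
ha  = np.array([[0.9, 0.2]])

# Weighted blend of Ha into red; 0.7/0.3 is purely illustrative.
blend = 0.7 * ha + 0.3 * red
print(blend)
```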

Olly


No, you cannot normally single out the luminance; it essentially remains part of the decoded matrix. However, a quasi-luminance channel can be extracted using Photoshop, though there is no particular advantage in doing so.

It is possible to take reasonable Hydrogen Alpha (Ha) filtered images using a DSLR and on certain Nebulae (emission nebulae), there is an advantage in doing so and then adding this data to the RGB mix but this is a whole new subject. If you want to know more about it, I can happily fill you in.


That all sounds really interesting and certainly something that I would like to know more on. I find the images on here very inspirational and would love to learn new techniques to attain something half as good as what I am regularly seeing.

Olly, I would love to know about blurring the RGB and sharpening the luminance channel. Can you point me in the right direction?

Thanks guys - A million times over.


With a DSLR, then, can you single out the luminance channel? As in when people post their subs, there are x seconds of R, G and B and x seconds of luminance.

How do you capture separate RGB channels? Using a CCD?

A black and white (mono) CCD with colour filters in front. The corresponding colours are then added back at the processing stage.


Archived

This topic is now archived and is closed to further replies.

