Mono for luminance, DSLR for colour?


BrendanC

Hi all,

I've been imaging with an EOS1000D with IR filter removed for the past couple of years. Had a whale of a time with it, and it works very nicely in my Bortle 4 skies.

However, I may be moving to London soon, and I'd very much like to continue this hobby, even though it's probably the worst place in the world to do it!

So, I've been considering narrow band imaging, cooled mono cameras, filters, filter wheels, all that gubbins. But I recently had a thought: instead of going straight from DSLR to mono with filters etc, and all the steep learning curve and cost that entails, could an intermediate step be to just get a cooled mono camera for the luminance, and combine that with shots from the DSLR for colour?

I just tested this by taking my shot of the Horsehead - one of my first ever DSOs, and I know there's a ton wrong with it, but it's just for the colour - and combining it with a shot taken from https://www.astrobin.com/full/397555/B/ for the luminance, using the Luminance layer blending mode in Photoshop.

My colour version (over exposed, too much stretch, noisy, etc etc):


Rick Wayne's mono version, which I've linked to above:


Combined - very roughly, just as a quick test:


So in theory this is possible, even if the test image above is a little rough and ready. For example, I can see that it's a bit noisier than I would like, but I don't know whether that's just me being fast and loose with the overlaying.
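For anyone curious what that Photoshop blend is actually doing, it's roughly equivalent to keeping each pixel's colour ratios while rescaling its brightness to match the mono frame. A rough numpy sketch (the function name and the Rec.709 luma weights are just illustrative, not anyone's actual workflow):

```python
import numpy as np

def lrgb_combine(rgb, lum, eps=1e-6):
    """Swap an RGB image's brightness for a mono luminance frame.

    rgb: float array (H, W, 3) in [0, 1]; lum: float array (H, W).
    Keeps each pixel's colour ratios but rescales the pixel so its luma
    matches the mono frame (roughly what the Photoshop blend does).
    """
    # Rec.709 luma weights as a stand-in for perceptual luminance
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    scale = lum / np.maximum(luma, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

# Tiny demo: a flat reddish patch re-lit by a mono luminance frame
rgb = np.full((4, 4, 3), [0.4, 0.1, 0.1])
lum = np.full((4, 4), 0.3)
out = lrgb_combine(rgb, lum)
```

In the demo the patch keeps its 4:1 red-to-green ratio while its brightness is pulled to the mono frame's level.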

Is this a 'thing'? Do other people do this? Is it a valid intermediate step between OSC with a DSLR, and mono? Or should I just take the full step and go for filters too?

I tend to shoot nebulae and galaxies, although I'm prepared to believe that one or the other (or maybe both) won't be possible from The Big Smoke.

Thanks, Brendan


Here's another example, using a slightly better image of mine.

Colour:


Mono (from here: https://stargazerslounge.com/topic/326197-ngc-7000-hyperstar-c925/):

Combined:


It works better when I blur the colour image behind and overlay the mono image.

Now that I look at this, I'm not sure it'll work. But I'd still be interested to hear if anyone else has taken this route, and with what level of success. It would just be good to keep using existing equipment and upgrade one step at a time rather than change wholesale. I've come across some threads on Cloudy Nights where people say this works, but nothing here.

Thanks, Brendan


I plan to do this on my dual rig in the autumn. I will shoot Lum on one scope and use a CMOS OSC camera on the other to capture RGB. I can get very similar FOVs and resolutions by using a focal reducer on the Lum scope.

I will process the data separately then extract the RGB channels from the colour data and combine these with the Lum, just as I would if I had captured separate RGB with a mono camera and filters.


Thanks! I can see that it's definitely better with two scopes, but I'm unlikely to be doing that. Also, getting the same FOV and orientation would be tough (it took a while to align the examples I gave) but I daresay the colour shot doesn't have to be pixel-perfect. Hmmm, food for thought...


4 hours ago, BrendanC said:

OK, so it's a bad idea. 

Yes, that's my honest opinion.

1 hour ago, tomato said:

I plan to do this on my dual rig in the autumn. I will shoot Lum on one scope and use a CMOS OSC camera on the other to capture RGB. I can get very similar FOVs and resolutions by using a focal reducer on the Lum scope.

I will process the data separately then extract the RGB channels from the colour data and combine these with the Lum, just as I would if I had captured separate RGB with a mono camera and filters.

At one time I had the mono and OSC versions of the same camera and did combine their data into single images sometimes, but not by using one for lum and one for colour. That will not work well, in my view, in equivalent cameras. Why not? Because an hour of lum and an hour of OSC are not remotely compatible. What you really have after an hour with an OSC is 15 minutes of red, 15 minutes of blue and half an hour of green. An hour of lum will completely overwhelm that.  Which begs the question, 'When will you capture enough colour to fill the lum?' 
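To put numbers on this, here's a quick Python sketch of the bookkeeping (shot-noise-limited assumption; the flux figure is made up):

```python
import math

def osc_channel_minutes(total_minutes):
    """Effective per-channel integration from an RGGB one-shot-colour chip:
    each 2x2 Bayer cell has one red, two green and one blue photosite."""
    return {"R": total_minutes * 0.25,
            "G": total_minutes * 0.50,
            "B": total_minutes * 0.25}

t = osc_channel_minutes(60)  # one hour of OSC
# In the shot-noise limit SNR grows with sqrt(time), so an hour of mono
# luminance out-runs the 15-minute red channel by a factor of two.
snr_ratio = math.sqrt(60 / t["R"])
```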

I attacked this in two different ways. I shot NB in the mono to add to the OSC and/or I shot LRGB in one camera and OSC in the other. But L in one and OSC in the other over the same time would not work.

What if the cameras are not equivalent? For Brendan the problem will be even worse than it was for me because his lum camera will be more sensitive and have better SNR than his colour. For tomato (I'll use that name :D) I don't think we know, because the OSC camera may have a higher performance than the mono. I still bet on the OSC being unable to support the luminance over the same exposure, though.

Olly

Edit. Another thought: how will you extract the colour from the OSC?  It will have been through the algorithm which corrects the loss of resolution from the 4-pixel 'blocks' of the Bayer matrix. This, in essence, is a process which creates a synthetic luminance layer. Merely splitting the OSC channels will not remove this 'virtual luminance.' It will be present in each colour channel just as the luminance is present in each colour channel in an LRGB image.
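If you have the raw frames, one way to get colour channels without that virtual luminance is to skip demosaicing entirely and split the Bayer mosaic 'superpixel' style, at half resolution. A rough numpy sketch, assuming an RGGB pattern:

```python
import numpy as np

def superpixel_split(bayer):
    """Split a raw RGGB mosaic into R, G, B at half resolution.

    Rather than demosaicing (which interpolates, baking a 'synthetic
    luminance' into every channel), treat each 2x2 cell as one output
    pixel: R from the top-left photosite, B from the bottom-right, and G
    as the mean of the two green photosites. Assumes an RGGB layout;
    other layouts just shuffle the slices.
    """
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return r, g, b

# Demo on a tiny synthetic mosaic
mosaic = np.array([[10, 20, 10, 20],
                   [30, 40, 30, 40],
                   [10, 20, 10, 20],
                   [30, 40, 30, 40]], dtype=float)
r, g, b = superpixel_split(mosaic)
```

The price is halved resolution, but no channel contains interpolated data from its neighbours.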


Great response, thanks Olly.

Quick thought though: when you say that the luminance data would come through from the OSC, would that still be the case if the OSC image was blurred? Because that's what I've done in the examples above, precisely to get around that problem.

I can still see that this idea might not work though. Thanks for the honest opinion!


37 minutes ago, ollypenrice said:

What you really have after an hour with an OSC is 15 minutes of red, 15 minutes of blue and half an hour of green. An hour of lum will completely overwhelm that.  Which begs the question, 'When will you capture enough colour to fill the lum?' 

I have just started using an OSC and Mono dual rig set up, both the same IMX571 CMOS sensor, and I've found the OSC to be really pretty good even without the luminance. I'm really considering why I spent £2k on a mono.


2 minutes ago, BrendanC said:

Great response, thanks Olly.

Quick thought though: when you say that the luminance data would come through from the OSC, would that still be the case if the OSC image was blurred? Because that's what I've done in the examples above, precisely to get around that problem.

I can still see that this idea might not work though. Thanks for the honest opinion!

However you look at it, the debayering process does manipulate the OSC data by estimating what the red and blue channels would have found under the green-filtered pixels, and ditto for all three colours. This is very clever and the best algorithms all but eliminate the loss in resolution created by the block of four RGGB. I never argue that this significantly impairs OSC resolution because I don't think it does. It was a passing thought following on from tomato's point.

However, the signal strength of OSC remains at 50% for green and 25% for red and blue in my opinion. (Roughly, at least, since OSC filters have more colour overlap than astro RGB filters.)

You are certainly right that one way to tackle this is to blur the colour to reduce colour noise and allow higher saturation. I do this myself, as do many people, when blending luminance with RGB. I add some of the luminance (that's to say all of it but at partial opacity), blur the luminance, flatten onto the RGB and repeat. In the final iteration I don't blur the luminance and so restore all/most of the resolution. This is pretty much what you're trying and it works up to a point but, like every step in processing, it shouldn't be overdone. I think you're pushing your luck in using DSLR colour under mono luminance. It's not the wrong step, it's just a step too far - in my view.
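In code terms, that iterative routine might look something like this (a loose numpy sketch, with a crude box blur standing in for Photoshop's Gaussian blur; the opacity and pass count are purely illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """Cheap separable box blur (edge handling is crude but fine here)."""
    out = img.copy()
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, np.ones(k) / k, mode="same"), axis, out)
    return out

def blend_lum(lightness, lum, opacity=0.5, passes=3):
    """Blend a mono luminance layer into an image's lightness channel.

    Several passes with a *blurred* copy of the luminance at partial
    opacity (feeding in brightness without its fine-scale noise), then
    one final pass with the unblurred luminance to restore resolution.
    Parameter names and defaults are illustrative.
    """
    out = lightness
    for _ in range(passes - 1):
        out = (1 - opacity) * out + opacity * box_blur(lum)
    # Final pass: full-resolution luminance
    return (1 - opacity) * out + opacity * lum

# Demo: a dark lightness channel pulled up towards a bright mono frame
base = np.zeros((6, 6))
mono = np.ones((6, 6))
blended = blend_lum(base, mono, opacity=0.5, passes=3)
```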

Olly


3 minutes ago, tooth_dr said:

I have just started using an OSC and Mono dual rig set up, both the same IMX571 CMOS sensor, and I've found the OSC to be really pretty good even without the luminance. I'm really considering why I spent £2k on a mono.

You're finding what I found, though I didn't buy two cameras in order to use one for lum and one for colour. I always expected that this would produce unbalanced data. In your shoes I'd shoot LRGB in the mono and OSC in the OSC and combine those. You could skew the mono exposures somewhat in favour of the lum and also use it to add NB on appropriate targets.

In the end I went for dual monos as more productive but I think CMOS OSC seems better than CCD OSC and the pairing you have might still prove ideal, though not for adding equal timings of L to OSC to make LRGB.

Olly


I will give you my view point from someone who lives in SE London. 

I find luminance to be absolutely awful from here, but what I find I can do quite well is HaRGB.  

The main problem will be FOV and orientation; you will need software that will re-size and register the two images to each other.

Ha is inclined to wash out the colour, and you'll have to fiddle around a bit in the post-processing to get the colour back to normal. But I think what you are planning is quite an acceptable idea as a "stepping stone" set up.

So instead of buying a luminance filter I would buy an Ha filter.

I attach a link to a website I created for images done purely from Bortle 8 to give you some idea of what can be achieved.  You will find a few images where I have combined DSLR and Ha data on this link.

https://sites.google.com/view/carastroimaging/home

Having said all that, you will benefit from having a complete set of narrowband filters once you have the funds to do so. 

Carole  

 


It will be a trusty KAF8300 taking Lum and a QHY268c taking the colour. Previously I have used two 8300 cameras, one taking lum and the other divided equally between RGB, so this is a ratio of 1:0.3:0.3:0.3. I have used this capture ratio successfully on dual CMOS cameras, but thinking about it, the only project the dual CCD cameras have done to date was an M31 mosaic, and the colour channels were a struggle on that one.

As Olly says, with the mono and OSC this will be roughly 1:0.25:0.5:0.25, but the CMOS camera is quite a bit more sensitive than the CCD, so wouldn't that work in my favour? This set-up has worked OK on NB with the CCD taking Ha and the OSC working with a dual-band filter, but LRGB is a different kettle of fish. It won't be a big deal if it doesn't work; the OSC was purchased specifically for WF with a RASA, and it will be fun to try.

 


Brendan, sorry to hijack this thread, but on Olly's point re channel extraction: I have loaded an OSC broadband stacked image into PI and, using the Channel Extraction process tool, created Red, Green and Blue files. The images look to me like they have extracted the relevant data for each colour channel. Could I not now compose an LRGB image using these?



10 hours ago, tooth_dr said:

I have just started using an OSC and Mono dual rig set up, both the same IMX571 CMOS sensor, and I've found the OSC to be really pretty good even without the luminance. I'm really considering why I spent £2k on a mono.

I will experiment a bit with this when astro darkness returns. I also have both the colour and mono versions of the IMX571 (in my case ASI2600MC and QHY268M), but the colour version is so very, very good that I may also end up wondering why I got the mono. Remains to be found out 😉


From what I understand, it's not a requirement to have exactly the same amount of signal in each LRGB channel. Many people throw more time into the L channel in order to get the geometry correct, and the RGB data just "colorize" the L channel. So, running an OSC camera for the same amount of time as a monochrome version running the L channel isn't that bad.

Of course, you could try a Ha filter instead of L (depends on your target), then you mix the multiple channels to your preferences/palette.

N.F.

 


1 hour ago, gorann said:

I will experiment a bit with this when astro darkness returns. I also have both the colour and mono versions of the IMX571 (in my case ASI2600MC and QHY268M) but the colour version is so very very good that I may also end up wondering why I got the mono

Same here - 268M and 2600MC. I haven't really given it a fair chance. Not a big fan of galaxy season, but I think the mono camera (with narrowband) will be good when the nebulae come back.


7 minutes ago, tooth_dr said:

Same here - 268M and 2600MC. I haven't really given it a fair chance. Not a big fan of galaxy season, but I think the mono camera (with narrowband) will be good when the nebulae come back.

Yes, at least the mono version should beat the colour one on NB, unless there is something seriously wrong with physics....


8 hours ago, gorann said:

Yes, at least the mono version should beat the colour one on NB, unless there is something seriously wrong with physics....

The reason I bought the colour was your successful NBX images with the colour camera.


12 hours ago, nfotis said:

From what I understand, it's not a requirement to have exactly the same amount of signal in each LRGB channel. Many people throw more time into the L channel in order to get the geometry correct, and the RGB data just "colorize" the L channel. So, running an OSC camera for the same amount of time as a monochrome version running the L channel isn't that bad.

Of course, you could try a Ha filter instead of L (depends on your target), then you mix the multiple channels to your preferences/palette.

N.F.

 

You can indeed have more luminance than colour, even a lot more, but it makes processing considerably more difficult. I only overdose on luminance when I have to, which is when searching for the ultra-faint stuff like tidal tails, IFN etc. I much prefer to have equivalent time per channel.

Ha can be used as luminance but it's not what I want to do because it simply isn't true to the target. You will end up lighting blue parts of the target as if they were deep red, etc.

Olly


@ollypenrice can I ask a technical question of you please? I'm about to get into LRGB imaging myself for the first time, and I was wondering: what happens if you have significantly less exposure in one of the colour channels? I know the answer is of course just to shoot more of it, lol, but for the sake of argument, let's say you wanted to make an image with what you had, but you only had 25% of blue vs the other channels. When you do colour calibration (e.g. PCC in PI), will this account for things, to the extent that the image will have the 'correct' colour and will just need the blues saturated more, or will the colour balance be way out of whack? Or alternatively, would the blue data need boosting at all? I know APP has the option to add a multiplication factor to any channel when combining the RGB together.

PS - Like Adam, I too bought a 268M but I haven't had a chance to use it yet. I'm already starting to wonder if I should have gone for the colour version instead (not for quality reasons, just mainly so I can get some sleep while imaging! I don't have a permanent setup, so have to resort to using a filter drawer).


51 minutes ago, Xiga said:

@ollypenrice can I ask a technical question of you please? I'm about to get into LRGB imaging myself for the first time, and I was wondering: what happens if you have significantly less exposure in one of the colour channels? I know the answer is of course just to shoot more of it, lol, but for the sake of argument, let's say you wanted to make an image with what you had, but you only had 25% of blue vs the other channels. When you do colour calibration (e.g. PCC in PI), will this account for things, to the extent that the image will have the 'correct' colour and will just need the blues saturated more, or will the colour balance be way out of whack? Or alternatively, would the blue data need boosting at all? I know APP has the option to add a multiplication factor to any channel when combining the RGB together.

PS - Like Adam, I too bought a 268M but I haven't had a chance to use it yet. I'm already starting to wonder if I should have gone for the colour version instead (not for quality reasons, just mainly so I can get some sleep while imaging! I don't have a permanent setup, so have to resort to using a filter drawer).

It's a good question and, since I wouldn't start to construct an image with a significant inequality between colour channels, I have no experience on which to draw. (Sorry if that answer sounds a bit 'holier than thou' but it's the truth. I don't recall trying this. I'll work on an image with, say, one sub short or maybe even two in a stack of twelve per channel but I don't think I've tried to fix a greater imbalance than that. )

I think, technically, that you can stretch the short channel to the same point as the others and will find that it's all there but will have far more noise. You could then noise reduce it and hope for the best. I'd be interested to see if @vlaiv agreed and suspect that he would, but I don't want to speak for him.
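A toy simulation illustrates the trade-off: stretch a quarter-length channel to the same mean level and its noise comes along for the ride (the flux numbers below are made up, shot-noise-limited assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_channel(minutes, flux_per_min=100.0, n=10_000):
    """Toy shot-noise model: signal grows linearly with exposure time,
    noise as its square root. Numbers are purely illustrative."""
    signal = flux_per_min * minutes
    return signal + rng.normal(0.0, np.sqrt(signal), size=n)

full = simulate_channel(60)       # a full-length channel, e.g. green
short = simulate_channel(15) * 4  # blue shot at 25%, stretched 4x to match

# The means now agree, but the stretched channel's noise has doubled:
# 4 * sqrt(1500) ~= 155 versus sqrt(6000) ~= 77.
```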

There's also a bit more to it, sometimes, viz, 1) using your RGB-only as you would use short subs for controlling a very bright part of the image, overexposed in your luminance. Yes, you could shoot short luminance but you may already have what you need in the RGB itself. Apart from something with the dynamic range of M42 I often find this works very sweetly, but it does require full quality RGB and precludes binning the RGB in most cases. 2) Stars are often much better in RGB than LRGB. They are both smaller and more colourful but, again, they need to be from a full quality RGB layer rather than a 'fixed' one.

Olly


41 minutes ago, ollypenrice said:

It's a good question and, since I wouldn't start to construct an image with a significant inequality between colour channels, I have no experience on which to draw. (Sorry if that answer sounds a bit 'holier than thou' but it's the truth. I don't recall trying this. I'll work on an image with, say, one sub short or maybe even two in a stack of twelve per channel but I don't think I've tried to fix a greater imbalance than that. )

I think, technically, that you can stretch the short channel to the same point as the others and will find that it's all there but will have far more noise. You could then noise reduce it and hope for the best. I'd be interested to see if @vlaiv agreed and suspect that he would, but I don't want to speak for him.

There's also a bit more to it, sometimes, viz, 1) using your RGB-only as you would use short subs for controlling a very bright part of the image, overexposed in your luminance. Yes, you could shoot short luminance but you may already have what you need in the RGB itself. Apart from something with the dynamic range of M42 I often find this works very sweetly, but it does require full quality RGB and precludes binning the RGB in most cases. 2) Stars are often much better in RGB than LRGB. They are both smaller and more colourful but, again, they need to be from a full quality RGB layer rather than a 'fixed' one.

Olly

Apologies Olly if I keep the Imaging Surgery theme going, but I wonder do you have any thoughts on my post 10 above on using channel extraction in PixInsight to obtain RGB channels from an OSC image?

