
Stacking luminance from a mono camera with RGB from OSC camera via a dual rig



Does anyone have any tips or guides they follow to help with processing mono and OSC data? I'm shooting this data simultaneously on a dual rig.

Currently I'm using APP, which gets you to choose the algorithm and whether or not to debayer the data. I therefore can't stack the colour data and the mono data at the same time, as I get an error (the data is either 1 channel or 3 channels).

I also own PI, so I could use that for stacking if it's better.

 

Thanks.

9 minutes ago, david_taurus83 said:

Would you not process the colour data how you like, do the same with the Lum separately, and then add the Lum to the RGB with LRGBCombination in PixInsight? Or are you trying to do something else?

I think that's how I would do it, or more probably use PixelMath to have more control over how much Luminance I add.

Steve
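
For anyone wondering what that blend boils down to, here is a minimal Python/NumPy sketch of a weighted luminance transfer - not PixInsight's actual LRGBCombination code or anyone's exact PixelMath expression, just the general idea, with made-up function and parameter names. The weight `w` plays the role of "how much luminance" gets added.

```python
# Minimal sketch of a weighted luminance transfer (illustrative only).
# Assumes `rgb` and `lum` are registered float arrays in the range 0..1.
import numpy as np

def blend_luminance(rgb, lum, w=0.5):
    """Replace a fraction `w` of the RGB image's own luminance with `lum`.

    rgb : (H, W, 3) float array, lum : (H, W) float array, both 0..1.
    """
    # Luminance implied by the existing colour data (simple average here;
    # a weighted sum of R, G and B is equally valid).
    rgb_lum = rgb.mean(axis=2)
    # Blend the two luminance estimates.
    target_lum = (1.0 - w) * rgb_lum + w * lum
    # Rescale each colour channel so the image takes on the new luminance
    # while keeping its original colour ratios.
    scale = target_lum / np.clip(rgb_lum, 1e-6, None)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```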


Hi Adam. I process the RGB and Lum separately, then combine the two using either the LRGB script in PI, or in Photoshop by pasting the Lum onto the RGB with the blend mode set to luminance and the opacity to 20%. Increase the saturation slightly in the RGB, apply a Gaussian blur of 0.6 to the RGB, then flatten. Repeat 5 times; you may not need to increase the saturation in the later stages.

Dave
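
In case it helps to see the repeat-at-low-opacity idea outside Photoshop, below is a rough NumPy approximation of that loop (saturation nudge, slight Gaussian blur of the colour, then a ~20% luminance blend per pass). It is only a sketch under the assumption that both images are registered floats in the range 0..1; it is not what Photoshop's blend modes do internally, and the function name and parameter values are just illustrative, taken from the figures quoted above.

```python
# Rough approximation of the iterative Photoshop LRGB workflow described above.
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_lrgb(rgb, lum, passes=5, opacity=0.2, sat_boost=1.05, blur_sigma=0.6):
    out = rgb.copy()
    for _ in range(passes):
        # Slight saturation boost: push each channel away from the per-pixel mean.
        mean = out.mean(axis=2, keepdims=True)
        out = mean + sat_boost * (out - mean)
        # Gentle Gaussian blur of the colour data only.
        for c in range(3):
            out[..., c] = gaussian_filter(out[..., c], sigma=blur_sigma)
        # Blend ~20% of the mono luminance into the image's current luminance.
        cur_lum = out.mean(axis=2)
        new_lum = (1.0 - opacity) * cur_lum + opacity * lum
        out *= (new_lum / np.clip(cur_lum, 1e-6, None))[..., None]
        out = np.clip(out, 0.0, 1.0)
    return out
```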

 


Don't you have an option in APP to stack multiple stacks registered to the same reference frame?

That should be an option, since you can shoot HaLRGB data - which would mean 5 different stacks that all need to be aligned.

As for combining the data - I have a very good method, but it's rather complicated, so it's better to stick to simpler techniques (unless you really want to give it a go - then I'll explain).


I use StarTools for my main image processing and tend to get the best results when using the Compose module to combine separate L, R, G, B channels (and NB if available).

So I channel-extract the OSC stack in PI, then analyse, register and normalise the LRGB channels in APP before loading them into StarTools.
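
For reference, the channel-extraction step itself is trivial if the OSC stack is saved as a 3-plane FITS. A hedged astropy sketch - the file names are placeholders, and the channels-first layout is an assumption, so check what your stacker actually writes:

```python
# Split a stacked 3-plane OSC FITS into separate R, G and B mono FITS files.
from astropy.io import fits

with fits.open("osc_stack.fits") as hdul:
    data = hdul[0].data          # expected shape: (3, height, width)
    header = hdul[0].header

for name, plane in zip(("R", "G", "B"), data):
    # Write each plane as its own mono image, reusing the original header.
    fits.writeto(f"osc_stack_{name}.fits", plane, header, overwrite=True)
```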


1 hour ago, Laurin Dave said:

Hi Adam. I process the RGB and Lum separately, then combine the two using either the LRGB script in PI, or in Photoshop by pasting the Lum onto the RGB with the blend mode set to luminance and the opacity to 20%. Increase the saturation slightly in the RGB, apply a Gaussian blur of 0.6 to the RGB, then flatten. Repeat 5 times; you may not need to increase the saturation in the later stages.

Dave

 

Hi Dave,

I am curious about the rationale (I am sure there is one) behind this approach. Why not just add the Lum as luminance once at 100% (or less, if the RGB data is good and helps keep the noise low)?


46 minutes ago, vlaiv said:

Don't you have an option in APP to stack multiple stacks registered to the same reference frame?

That should be an option, since you can shoot HaLRGB data - which would mean 5 different stacks that all need to be aligned.

As for combining the data - I have a very good method, but it's rather complicated, so it's better to stick to simpler techniques (unless you really want to give it a go - then I'll explain).

Hi Vlad

You can definitely combine data in APP, but typically HaLRGB data will be composed of mono data in each of the respective filters, so it works OK. When I try to use an OSC (3 channels), it isn't compatible with mono (1 channel) and vice versa, and APP seems to require you to pick one or the other before calibrating. I'll stick to the simpler techniques, but thanks for the offer :D

 

Adam


3 minutes ago, gorann said:

Hi Dave,

I am curious about the rationale (I am sure there is one) behind this approach. Why not just add the Lum as luminance once at 100% (or less, if the RGB data is good and helps keep the noise low)?

Dave's technique is similar to that used by Olly. That is how I add it myself; it saves washing out the data if the Lum is very strong.


9 minutes ago, tooth_dr said:

Dave's technique is similar to that used by Olly. That is how I add it myself; it saves washing out the data if the Lum is very strong.

You live and learn! Good that I usually only work with RGB data.


47 minutes ago, tooth_dr said:

Hi Vlad

You can definitely combine data in APP, but typically HaLRGB data will be composed of mono data in each of the respective filters, so it works OK. When I try to use an OSC (3 channels), it isn't compatible with mono (1 channel) and vice versa, and APP seems to require you to pick one or the other before calibrating. I'll stick to the simpler techniques, but thanks for the offer :D

 

Adam

It should be fairly easy to debayer each sub after calibration and then split them to get R, G and B mono subs - and these would act like normal R, G and B subs produced with a mono camera and filters.
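
If you wanted to do that split outside APP or PI, one simple route is a "superpixel"-style split of each calibrated CFA sub: take the red and blue pixels of each 2x2 Bayer cell and average the two greens. A hedged Python sketch, assuming an RGGB pattern and placeholder file paths - this is not what APP or PI do internally, and each output comes out at half resolution:

```python
# Superpixel split of calibrated OSC subs into R, G and B mono subs.
import glob
import numpy as np
from astropy.io import fits

for path in glob.glob("calibrated_lights/*.fits"):
    cfa = fits.getdata(path).astype(np.float32)
    r = cfa[0::2, 0::2]                              # top-left of each 2x2 cell
    g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])    # average of the two greens
    b = cfa[1::2, 1::2]                              # bottom-right of each 2x2 cell
    for name, chan in (("R", r), ("G", g), ("B", b)):
        fits.writeto(path.replace(".fits", f"_{name}.fits"), chan, overwrite=True)
```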


3 hours ago, tooth_dr said:

 

Currently I'm using APP, which gets you to choose the algorithm and whether or not to debayer the data. I therefore can't stack the colour data and the mono data at the same time, as I get an error (the data is either 1 channel or 3 channels).

 

 

Thanks.

Have you tried splitting the RGB channels in Tab 2? (You can also choose to align the channels and save the calibrated frames, so they end up fully calibrated, RGB-aligned and split into channels.) Then you have R, G, B and Ha, all single channel.


25 minutes ago, vlaiv said:

It should be fairly easy to debayer each sub after calibration and then split them to get R, G and B mono subs - and these would act like normal R, G and B subs produced with a mono camera and filters.

I repeated this suggestion in an earlier post just now. Split RGB works in APP with other mono channels of any kind. Stacking algorithms in any software may judge star quality, and hence frame weighting, slightly differently depending on star shape/size between the RGB and Ha data, but it should work.


27 minutes ago, vlaiv said:

It should be fairly easy to debayer each sub after calibration and then split them to get R, G and B mono subs - and these would act like normal R, G and B subs produced with a mono camera and filters.

How would you go about doing this?


1 minute ago, tooth_dr said:

How would you go about doing this?

->

4 minutes ago, GalaxyGael said:

Have you tried splitting the RGB channels in Tab 2? (You can also choose to align the channels and save the calibrated frames, so they end up fully calibrated, RGB-aligned and split into channels.) Then you have R, G, B and Ha, all single channel.

 


2 minutes ago, GalaxyGael said:

I repeated this suggestion in an earlier post just now. Split RGB works in APP with other mono channels of any kind. Stacking algorithms in any software may judge star quality, and hence frame weighting, slightly differently depending on star shape/size between the RGB and Ha data, but it should work.

I need to look at this later, but in essence I process the OSC first up to Tab 2, save the channels, then go again with all the channels, L, R, G and B?


Yes - force CFA debayer using whatever your Bayer pattern is, go to Tab 2, save the split channels (you can also align them if you wish, or not) and you have calibrated lights in R, G and B separately. Then go back to Tab 0, remove the force-debayer tick (already done), and go from there.


21 minutes ago, GalaxyGael said:

Yes - force CFA debayer using whatever your Bayer pattern is, go to Tab 2, save the split channels (you can also align them if you wish, or not) and you have calibrated lights in R, G and B separately. Then go back to Tab 0, remove the force-debayer tick (already done), and go from there.

Thanks for explaining it!

The only other thing to ask: should I then use all the data to create a super lum as well?

It took about 11 hours to fully process the 250 OSC subs as a mosaic with advanced normalisation and LNC; I can't wait to see how long all 600 will take.


You probably could. I never use a synthetic lum from OSC, but if you wanted to, there is one thing to consider in terms of speed. With the RAM allocation maximised in APP, be sure to keep your files on the same hard drive as APP, likely the C drive, unless you are using solid-state drives. Even with 2nd-degree LNC over 2 iterations to smooth out gradients in the mosaic stitch, the bottleneck is always the transfer of data from one drive to another via the .dat files as APP works through its processing. If you have spinning drives, chuck everything temporarily onto the C drive and run from there - it's many times faster (as long as the space is there for a 600-sub integration!).

It might be faster to do the 600 from the split RGB files, save out as 32-bit mono, and then just rerun the integration tab using the data saved up to that point for the colour?
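
On the super-lum question: the usual idea is a noise-weighted average of the registered stacks, so the cleanest data dominates. A minimal NumPy sketch, assuming the L, R, G and B stacks are already registered to the same reference and roughly on the same intensity scale - the file names and the crude corner-patch noise estimate are just for illustration, not an APP recipe:

```python
# Build a synthetic "super lum" as a noise-weighted average of four stacks.
import numpy as np
from astropy.io import fits

stacks = [fits.getdata(f) for f in
          ("stack_L.fits", "stack_R.fits", "stack_G.fits", "stack_B.fits")]

# Weight each stack by the inverse of its background noise, estimated very
# crudely from a star-free corner patch.
weights = np.array([1.0 / np.std(s[:100, :100]) for s in stacks])
weights /= weights.sum()

super_lum = sum(w * s for w, s in zip(weights, stacks))
fits.writeto("super_lum.fits", super_lum.astype(np.float32), overwrite=True)
```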

 
