
RGB 2x2 & Lum 1x1 - at what point should the resampling happen?



Hi guys.

I have been experimenting with M13 (http://www.astrobin.com/full/196099/B/) and binning the RGB 2x2. I hit a few issues while star aligning, as PixInsight was creating dark halos around stars. I fixed this by changing the interpolation algorithm, but:

I now have two paths I have followed, and I cannot quite decide which one is better. Both start the same way:

1) Calibrate L, R, G and B separately.

2) Star align the L frames and integrate them (all 1x1).

Path 1:

Star align all calibrated RGB files to the integrated Lum (they are upscaled and aligned in this step).

Integrate R, G and B separately to make masters.

LRGBCombination (without L).

Further processing.

Path 2:

Star align all calibrated RGB data to the best single R, G or B frame (not the Lum, so everything stays binned 2x2).

Integrate R, G and B separately.

LRGBCombination (without L).

Star align the RGB master to the Luminance master.

Further processing.

I can find both approaches when researching this subject, but cannot decide on 'the way to go'.
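To make the comparison concrete, here is a rough numpy/scipy sketch of the two orderings, with toy data and made-up dither offsets standing in for real subs and for PI's StarAlignment (Path 1 is approximated as one combined upscale-and-align transform per sub):

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter, shift

rng = np.random.default_rng(1)
truth = gaussian_filter(rng.random((64, 64)), 2)      # smooth toy "sky" at the 2x2 scale
offsets = rng.uniform(-1, 1, size=(6, 2))             # per-sub dither, in binned pixels
subs = [shift(truth, off, order=3) + rng.normal(0, 0.002, truth.shape)
        for off in offsets]                           # stand-ins for calibrated colour subs

# Path 1: register each sub straight onto the 1x1 Lum grid
# (one transform per sub that de-dithers and upscales by 2 in a single interpolation)
path1 = np.mean([affine_transform(s, np.eye(2) * 0.5, offset=off,
                                  output_shape=(128, 128), order=3)
                 for s, off in zip(subs, offsets)], axis=0)

# Paths 2/3: register the subs to each other at the native 2x2 scale,
# integrate, then resample the master once onto the Lum grid
master_2x2 = np.mean([shift(s, -off, order=3) for s, off in zip(subs, offsets)], axis=0)
path23 = affine_transform(master_2x2, np.eye(2) * 0.5, output_shape=(128, 128), order=3)

core = (slice(16, -16), slice(16, -16))               # ignore differing edge handling
print(path1.shape, path23.shape)                      # both (128, 128)
print("max |Path 1 - Path 2/3| in the core:", np.abs(path1 - path23)[core].max())
```

In this simplified model the remaining difference is just the extra interpolation in Paths 2/3, and it is tiny compared with the pixel values.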

Any info from other RGB binners out there?

Regards, Graem


Ah, another path also exists. It's nearly the same as Path 2, except that you:

Path 3:

Star align all calibrated RGB data to the best single R, G or B frame (not the Lum, so everything stays binned 2x2).

Integrate R, G and B separately.

Star align the R, G and B masters to the Luminance master (they are upscaled in this step).

LRGBCombination (without L).

Further processing.

 

 

I have now tested all three paths, and from my point of view the outcomes are similar if not the same; my eye cannot see any difference.

But there MUST be a logical reason to prefer one path over the others. Does nobody have any ideas?
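One possible explanation for why I can't see a difference (my own reasoning, not anything official from PI): as long as every registered sub receives the same final upscale and no pixel rejection fires, interpolation is a linear operation on the pixel values, so "upscale then integrate" and "integrate then upscale" are numerically the same thing. A quick numpy check of that, with made-up data:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
aligned_subs = rng.random((8, 64, 64))       # pretend these are already registered 2x2 subs

upscale_first = np.mean([zoom(s, 2, order=3) for s in aligned_subs], axis=0)
upscale_last = zoom(np.mean(aligned_subs, axis=0), 2, order=3)

# Interpolation is linear in the pixel values, so the two orderings agree to
# floating-point precision; real differences come from per-sub transforms,
# clamping artifacts and rejection working on interpolated vs native pixels.
print(np.abs(upscale_first - upscale_last).max())
```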


I'm afraid I don't combine in PI but in Ps. I don't bin colour these days either, but when I did I resized the RGB image to fit the L once it (the RGB) was first put together at the linear stage. I then processed them entirely separately because they have different requirements. L needs to be sharp (RGB doesn't) and contrasty. RGB needs to have high colour saturation, low colour noise and low normal noise.

I then blend the L iteratively, adding a little at a time, boosting the colour saturation each time and putting a slight blur on the L (very slight) until the last application, when it goes in fully sharp and at 100%.
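I do this with layers in Ps rather than with code, but purely as an illustration of the idea (not my actual recipe), a small numpy sketch of iterative L blending with a shrinking blur and a saturation nudge each pass might look like this; the weights, sigmas and boost factor here are all invented:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_l_iteratively(rgb, lum, steps=4, sat_boost=1.15, start_sigma=1.5):
    """Toy version of 'add the L a little at a time': each pass blends a
    (slightly blurred) luminance into the image's own brightness and nudges
    saturation up; the last pass uses L unblurred at full weight."""
    img = rgb.copy()
    for i in range(steps):
        last = (i == steps - 1)
        weight = (i + 1) / steps                       # 25%, 50%, 75%, 100%
        l_soft = lum if last else gaussian_filter(lum, start_sigma * (1 - i / steps))

        # current per-pixel brightness of the colour image
        cur_l = img.mean(axis=2)
        target_l = (1 - weight) * cur_l + weight * l_soft

        # rescale RGB towards the target brightness (avoid divide-by-zero)
        scale = target_l / np.maximum(cur_l, 1e-6)
        img = img * scale[..., None]

        # simple saturation boost: push channels away from their per-pixel mean
        mean = img.mean(axis=2, keepdims=True)
        img = np.clip(mean + sat_boost * (img - mean), 0, 1)
    return img

# toy data: a 64x64 colour image and a sharper mono luminance, both in [0, 1]
rng = np.random.default_rng(0)
rgb = np.clip(rng.normal(0.3, 0.05, (64, 64, 3)), 0, 1)
lum = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1)
print(blend_l_iteratively(rgb, lum).shape)
```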

Olly


Hi Olly.

Thx for the reply (finally somebody! :) )

I think it's not really a matter of PI or PS; it's more about whether you resize the single subs (while registering them to the Lum) or register the single subs to each other without resizing, integrate, and then register the masters to the Lum.

I have spent the last 3 hours processing various data in all 3 ways described at the top.

The differences are marginal. It seems Path 1 leads to a slightly noisier background, but this could be totally random, as I was only able to reproduce it in one data set.

It seems the larger differences occur when there are problems during star alignment, where the interpolation algorithm (in PI, Lanczos-3 by default) has problems with clamping.

In these cases, Paths 2/3 are much better; otherwise you integrate all the clamping problems into one 'super clamping problem', so to speak :)
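For anyone wondering what the clamping issue looks like numerically: Lanczos kernels have negative lobes, so interpolating across a hard star edge undershoots the background and you get the dark ring. A little 1D toy demo (my own simplification; I'm not trying to reproduce PI's actual clamping implementation here):

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x / a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_interp(signal, t, a=3):
    """Interpolate a 1D signal at fractional position t with Lanczos-a."""
    i0 = int(np.floor(t))
    taps = np.arange(i0 - a + 1, i0 + a + 1)
    w = lanczos(t - taps, a)
    idx = np.clip(taps, 0, len(signal) - 1)       # crude edge handling
    return float(np.sum(signal[idx] * w))

# a hard edge, like the limb of a bright star sitting on dark background sky
profile = np.array([0.02] * 8 + [0.95] * 8)

resampled = [lanczos_interp(profile, t) for t in np.arange(0, 15, 0.25)]
print("background level :", profile.min())
print("resampled minimum:", min(resampled))   # well below 0.02 -> the dark halo
```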

I'm worried that this always depends on the data you have, and it's going to drive me nuts for the next few months until I find 'the way to go'.

Interestingly, the main difference between Path 1 and Paths 2/3 was the RGB weighting: I had more red in one version and more green in the others. I cannot currently explain that at all.

Path 1 has the advantage that you only register the RGB files once.

With Paths 2/3 you register the RGB frames to one another and then register the masters to the Lum, so you're registering twice and have more chances of registration problems and artifacts. However, Paths 2/3 make more sense to me, as you're integrating the raw data rather than rescaled/interpolated data.
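To put a rough number on the "registering twice" worry, here is a toy scipy comparison using a synthetic Gaussian star and simple sub-pixel shifts in place of real StarAlignment transforms (the offsets are made up):

```python
import numpy as np
from scipy.ndimage import shift

def star(shape, center, sigma=1.5):
    """Analytic Gaussian 'star', so the exact answer for any offset is known."""
    yy, xx = np.indices(shape)
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

img = star((64, 64), (32.0, 32.0))
exact = star((64, 64), (32.75, 32.40))               # ground truth after the full offset

once = shift(img, (0.75, 0.40), order=3)             # one registration
twice = shift(shift(img, (0.35, 0.15), order=3),     # register, then register again
              (0.40, 0.25), order=3)                 # same total offset

print("RMS error, one interpolation :", np.sqrt(np.mean((once - exact) ** 2)))
print("RMS error, two interpolations:", np.sqrt(np.mean((twice - exact) ** 2)))
# Both errors are tiny, but the two-step route typically comes out a little worse,
# which is the 'register everything only once' argument in a nutshell.
```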

Driving me nuts! :)

Regards, Graem


Hi Graem,

Sorry for the late reply to your PM. The way I do things is a tiny bit different:

1. Calibrate and stack L, R, G and B all completely separately from each other to get a master L, master R, master G and master B. These masters are 1x1 for L and 2x2 for R, G and B. 

2. Use StarAlignment on the R, G and B masters with the master L as reference (this up-scales the R, G and B masters to the same image resolution as L - StarAlignment interpolates for this up-scaling so the end result is good). 

3. Further post-processing (DynamicCrop, DynamicBackgroundExtraction and LinearFit to prepare them for combination followed by the rest of the workflow to get a final image). 

I feel this is the easiest way to do things as you're calibrating and stacking the L, R, G and B images separately and in their own right, before you re-align the masters to each other. I hope this is helpful! :)
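For what it's worth, the LinearFit step in point 3 boils down to matching each colour master's linear scale to a reference before combining. A rough least-squares stand-in for the idea (PI's LinearFit is more robust about outliers, and the numbers below are invented):

```python
import numpy as np

def linear_match(channel, reference):
    """Fit channel -> a * channel + b so its linear scale matches the reference."""
    a, b = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return a * channel + b

rng = np.random.default_rng(3)
lum = rng.random((128, 128))
red = 0.4 * lum + 0.05 + rng.normal(0, 0.01, lum.shape)   # dimmer, offset colour master

red_matched = linear_match(red, lum)
print("median before:", np.median(red))
print("median after :", np.median(red_matched))
print("reference    :", np.median(lum))
```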

Best Regards,

Kayron


Hi Kayron.

This is exactly the Path 3 I described in my second post, and also my preferred version.

But here's an interesting turn of events (just because we're all agreeing too much :p )

I PM'd the PixInsight team, and they prefer Path 1! (where you align all the RGB data to L right after calibration and before integration)

-----------------------

(Message from the PixInsight team)

Always try to minimize the number of transformations applied to your data. In the case of LRGB sets, just register everything (all individual R, G, B and L frames) to one of the L frames. This can be done very easily with the BatchPreprocessing script. Once you have everything calibrated and registered, you can integrate the L and RGB masters to perform the LRGB combination procedure.

-----------------------

And now I'm back to the beginning.

I get the feeling they think the advantage of integrating non-upscaled data is smaller than the advantage of not aligning the data twice.

Regards, Graem


Hi Graem,

Having tried these methods, it really does seem to me that it doesn't make a visible difference. There is literally no visible difference I can pick up even if I zoom in aggressively, especially with DrizzleIntegration being used for the final result in each colour channel. I think this is a typical case of PixInsight ultra-perfection: it's all good if you're willing to follow the steps, but the visible difference in the image is nil in my experience. Personally I still prefer your Path 3, which is what I described as well! :)

Best Regards,

Kayron


Hey Kayron.

Cool stuff. I'll compare again with my next complete dataset, but I also cannot see any huge difference.

Especially since the RGB will not carry any of the detail, I really think I'm splitting hairs here...

But there's always something to worry about and improve, right?! :)


  • 1 year later...

Hi Everybody.

I just wanted to quickly update this thread: I did my tests a while ago but forgot to tell you what my findings were and show you the examples as images:

Path 1 (upscaling to 1x1 in the initial StarAlignment of every frame individually; see my initial post)

PixInsight_1_8.png

Path 2 (star aligning each set to itself, keeping the data at 2x2, and then star aligning the 2x2 master to the Lum to bring it to 1x1)

PixInsight_1_8 2.png

 

I guess this depends on the algorithm you choose, but in my case I need to use the Bicubic Spline filter with 0 clamping in PI; otherwise I get dark pixels around the stars. I suspect this is different for every image scale (px/").

In my case Path 1 (upscaling in the initial alignment directly to the Lum, Ha, or whatever you have as your 1x1 reference) is clearly better in terms of the quality of stars and nebulosity, but also noisier. I will continue to use this path when working with binned data, for either narrowband or broadband.
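A possible reason the switch of interpolation filter helps: the deeper a kernel's negative lobes, the stronger the ringing around hard star edges. I'm not certain exactly which cubic kernel PI's Bicubic Spline uses, so this just compares Lanczos-3 with the common Catmull-Rom cubic as a stand-in:

```python
import numpy as np

def lanczos3(x):
    """Lanczos-3 kernel."""
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

def catmull_rom(x, a=-0.5):
    """Keys' cubic convolution kernel with a = -0.5 (a common 'bicubic' choice)."""
    x = np.abs(x)
    return np.where(x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
           np.where(x < 2, a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a, 0.0))

xs = np.linspace(-3, 3, 6001)
print("deepest negative lobe, Lanczos-3   :", lanczos3(xs).min())      # about -0.14
print("deepest negative lobe, Catmull-Rom :", catmull_rom(xs).min())   # about -0.07
# Roughly twice the lobe depth means noticeably more ringing around a bright star,
# which fits the dark pixels I was seeing with the default Lanczos-3.
```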

 

Kind regards, Graem

