Posts posted by scotty38

  1. You should build templates and use those for your imaging runs; then you should have little, if any, issues, weather excepting of course.

    I have templates built for LRGB, SHO and also HaLRGB, so depending on which target I select in the Framing tab I just send it to the template I already loaded. The templates already have a target included (hence the RGB etc designation) so I can just "Update" the target and it's ready to go.

    Minor point is that I save my sequences for each target anyway so just loading a sequence and updating its target is what I actually do in practice. I reset the sequence of course.....

    • Like 3
  2. 6 hours ago, CCD Imager said:

    I like the image and for just the colour component, it could pass as a completed image!

    My only comment would be that the background has a green cast. I imported it into Photoshop and, using the colour sampling tool, typical background numbers were RGB: 31:41:30. Correcting the colour balance gives an even more pleasing image.

    Adrian

     

    I must admit I'd not noticed but now you mention it...

    Is this one better? It certainly seems to have removed the green in a side-by-side comparison.

    [Image: M31a.jpg]

    • Like 5
  3. I took these images about 10 days ago but had been struggling to get any colour at all. I've just had another play with it using a few extra processes that are new to me, including ArcsinhStretch and LocalHistogramEqualization. No BlurXTerminator though 🤣 🤣

    This is just the RGB but I have some L and Ha to add to it at some point. It's not as vivid as some I've found on Astrobin but it's an improvement on my first few efforts.....

    [Image: M31.jpg]

    • Like 14
  4. 15 hours ago, tomato said:

    My door faces West which was the only convenient orientation for access. No problem with rain coming in around the door, the worst gap for this is the top of the shutter, but this can be oriented so it’s on the lee side of any bad weather.

    Yes, I learned this early on and now my park position is south-west too, against the prevailing weather, so the opening doesn't let heavy rain in....

  5. 10 minutes ago, tooth_dr said:

    I use SGPro which doesn't do synchronous dithering, so I just accept the loss of a sub. Two minute subs make this OK. In the past with my CCDs and 20 minute subs, I used APT to synchronise dithering, since losing a long sub was a disaster. However APT doesn't offer the framing wizard so I'm sticking with SGPro. I have briefly tried NINA but didn't stick with it.

    There's been quite a bit of work put into the synchronised dithering in NINA so if it was some time ago it could be worth having another look at it.

    • Like 1
  6. 6 minutes ago, wimvb said:

     

    As I wrote before, SPCC together with background neutralisation will in essence undo any G2V exposure matching. The disadvantage of G2V-matched exposures is that each channel needs its own master dark. Since any colour calibration/background neutralisation step will equalise the channels to get a neutral background, you are implementing a more complex workflow in data acquisition that will be undone in post-processing. Also, there is no need to match RGB channels with Linear Fit in PI, because even this will be undone in colour calibration. The only possible benefit of doing Linear Fit would be if you have a dominant colour that affects DBE/ABE. The excessive dominance of one channel may make it necessary to set such a high tolerance during background extraction that it can affect the background extraction of the other two channels.

    (Just as a side note, ZWO once designed a set of RGB filters to match the QE curve of the then popular ASI1600MM, in order to achieve equal flux for all three channels with one single exposure time. They haven't done that for any newer model, AFAIK.)

    I am not sure anyone was looking at using G2V AND LinearFit. As far as I was aware, it was a question about doing something similar to G2V in PI. Whether the answer is LinearFit has generated more discussion though.

  7. 10 hours ago, vlaiv said:

    Linear fit is an extremely useful tool, but it is being misused here.

    It is actually a very useful tool for preparing subs for stacking. Over the course of an imaging session, as the target changes position in the sky, two things happen:

    target changes brightness and level of light pollution changes from sub to sub.

    Target changes brightness as it changes altitude while earth rotates and "atmosphere number" changes. This part is multiplicative in nature.

    LP levels also depend on part of the sky in question and as target "tracks" across the sky - LP levels change.

    In general sense, image signal can be written in form:

    a * target + b

    where a is constant that depends on atmosphere thickness and sky transparency and b is constant that represents average level of light pollution - or background signal.

    It is easy to see that the above equation is linear (ax+b), and linear fit changes the a and b coefficients in one sub to match those of another sub - thus making them stack compatible (same signal strength and same background level).

    This is misused as color calibration when linear fit is performed on 2 other color channels against a selected one (like fitting R and B to G). It leads to a "gray world" type of artificial white balance and also tries to make the background color gray as well. The "gray world" white balance algorithm is based on the premise that the average color in any image is gray (there is as much blue as red and green, so that the average pixel value is really gray) - but that is a flawed assumption.

    Also - assuming that background LP is gray is also flawed. In most cases it is orange in color and should not be scaled like that - but rather removed altogether from the image.

     

    Apologies @DaveS if this is going OT.

    Thanks again Vlaiv, as always.... Hopefully I'm not misunderstanding but are you then saying that if you have three masters, either RGB or SHO then using linearfit to equalise them before combination is not a good thing to do?
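
    As a side note, the stacking use of linear fit described above (matching one sub's a*target + b to another's) can be sketched with numpy. The arrays here are made-up illustrative data, not anyone's actual subs:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: one "true" target signal, with per-sub
    # multiplicative scaling (atmosphere) and additive offset (LP).
    target = rng.random((100, 100))
    reference = 1.0 * target + 0.10   # sub taken near the meridian
    later_sub = 0.8 * target + 0.25   # lower altitude: dimmer target, more LP

    # Least-squares fit later_sub = a * reference + b over all pixels,
    # then invert to bring later_sub onto the reference's scale.
    a, b = np.polyfit(reference.ravel(), later_sub.ravel(), 1)
    matched = (later_sub - b) / a

    assert np.allclose(matched, reference)
    ```

    With real subs the fit is done over (mostly background) pixel statistics rather than a perfect linear relation, but the principle is the same: equalise signal strength and background level so the subs are stack compatible.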

  8. 9 hours ago, vlaiv said:

    Actually no.

    Linear fit is the wrong way to go about things.

    It imparts color of light pollution onto the image.

    One wants to remove background rather than "equalize" it for accurate color information.

    I would say that the following would be the appropriate order of things:

    1. remove background signal from each channel

    2. (optionally) do G2V scaling

    3. do a method of color calibration on the image data

    In fact, G2V scaling is a color calibration method - inaccurate and highly rudimentary, but it certainly beats doing no color calibration.

    A better way would be to use a range of star types and compute a transform matrix rather than just 3 values.

    The 3 scaling values that we get from G2V calibration form just a "diagonal" matrix - or this:

    [ kR   0    0  ]
    [ 0    kG   0  ]
    [ 0    0    kB ]

    Instead of computing all 9 values of the matrix, we compute just the main diagonal and treat the other members as zeroes.

    But this is still not the best way to do color calibration, as star colors are a rather limited set. The best way would be to derive a general transform matrix for a large chromaticity set and then use the above photometric approach as a correction factor.

    Thanks Vlaiv but that's why I added "by the sounds of it" 🙂

    It sounded to me like g2v was being used for equalising the channels which is what linearfit does if you look at the numbers. Whether it's the right tool that's needed or whether it does things correctly I wouldn't really know but it did sound like that's what was being asked for 🙂

    • Like 1
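
    The diagonal-versus-full-matrix point can be illustrated with a small numpy sketch. The matrix values below are invented for illustration, not real calibration coefficients:

    ```python
    import numpy as np

    # G2V-style scaling: a diagonal 3x3 matrix only rescales each
    # channel independently (illustrative R, G, B scale factors).
    g2v_diag = np.diag([1.10, 1.00, 0.85])

    # A full colour transform matrix can also mix channels,
    # which a diagonal matrix cannot do.
    full_ccm = np.array([
        [ 1.50, -0.30, -0.20],
        [-0.25,  1.40, -0.15],
        [-0.10, -0.40,  1.50],
    ])

    pixel = np.array([0.40, 0.55, 0.30])  # one linear RGB pixel

    print(g2v_diag @ pixel)  # each channel scaled on its own
    print(full_ccm @ pixel)  # channels mixed as well as scaled
    ```

    The off-diagonal terms are what let a full matrix correct channel cross-talk, which per-channel scaling alone cannot.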
  9. I have to say that I've not really noticed folk changing the exposures per filter, whether that's broadband or narrowband, apart from one exception: Luminance.

    LinearFit does exactly what you're doing with G2V, by the sound of it. Typically folk choose the frame with the lowest background value (usually Sii in narrowband) and use that as the reference for Oiii and Ha.

    For my narrowband stuff I'll expose each filter the same, say 300s, and if I want to will then use linearfit to balance them, using Sii as the reference.

    • Thanks 1
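
    The LinearFit-with-Sii-as-reference workflow mentioned above can be sketched like this in numpy; the masters are synthetic stand-ins, not real narrowband data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical narrowband masters; Sii typically has the lowest
    # background, so it is used here as the linear-fit reference.
    sii = rng.random((64, 64))
    ha = 2.5 * sii + 0.08     # stronger signal, higher background
    oiii = 1.7 * sii + 0.05

    def linear_fit_to(reference, channel):
        """Rescale `channel` so its pixel values match `reference`
        in the least-squares sense (channel = a*x + b fit)."""
        a, b = np.polyfit(channel.ravel(), reference.ravel(), 1)
        return a * channel + b

    ha_fit = linear_fit_to(sii, ha)
    oiii_fit = linear_fit_to(sii, oiii)

    assert np.allclose(ha_fit, sii) and np.allclose(oiii_fit, sii)
    ```

    As the discussion earlier in the thread points out, whether equalising channels like this is the right thing to do (versus removing the background from each channel) is a separate question; this only shows what the operation itself does.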