
Posts posted by vlaiv

  1. 13 hours ago, Physopto said:

    The target ADU selected should be about 33%  of the saturation level of your camera. This will give the most accurate and noise free raw flats which will result in the best master flat once stacked. Going too high can result in pixels outside the linear range of the CCD and too low can result in poor signal-to-noise in the flat.

    Ok, let's discuss this for a moment.

    It says that 33% of the saturation level will result in the most noise-free raw flats. Is this statement true?

    The most noise-free raw flat will be the one with the best SNR. With flats, the dominant noise source is shot noise. Read noise is very small in comparison (even with old CCD cameras), and given the usual exposure lengths involved, dark current noise is also very small. We can therefore approximate the total noise by the shot noise associated with the light signal.

    Let's now compare the SNR of two flats from a typical CCD camera with 15k e- full well capacity: one exposed to 33% and one to 50%. The signal in the 33% flat will average ~5000e. The associated shot noise is sqrt(5000) = ~70.71e, so the SNR is 5000 / 70.71 = ~70.71. In the 50% flat the signal will be 7500e, the shot noise ~86.60e, and the SNR therefore ~86.60.

    This is a clear counterexample to the statement above. In fact, when shot noise is the dominant noise source, higher signal always "wins" in SNR, so any histogram peak above 33% will give better SNR and a less noisy flat than one at 33%.
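
    Here is a quick Python sketch of that comparison (assuming a hypothetical 15k e- full well; the exact value doesn't change the conclusion):

    import math

    full_well = 15_000  # e-, assumed full well capacity

    for fraction in (0.33, 0.50, 0.80):
        signal = full_well * fraction      # mean flat signal in electrons
        noise = math.sqrt(signal)          # shot noise = sqrt(signal)
        print(f"{fraction:.0%} histogram: signal {signal:.0f} e-, "
              f"noise {noise:.2f} e-, SNR {signal / noise:.2f}")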

    Now let's address the second part of that statement:

    13 hours ago, Physopto said:

    Going too high can result in pixels outside the linear range of the CCD

    This statement can be interpreted in two ways, and we need to address both. In the first case, it could be that the CCD sensor is not linear in the upper part of its range. I can't really argue that case, except to say that in the time I've been doing astronomy I have not heard of an imaging sensor that is not linear enough over its useful range.

    If you search for linearity data for your particular camera model, I'm pretty sure you will find a graph like this:

    [linearity graph]

    (atik383L+)

    or this:

    [linearity graph]

    or this for CMOS camera:

    [linearity graph]

    or maybe this:

    [linearity graph]

    In the second case, it is meant that there could be some sort of saturation and clipping, so pixels register a non-linear response. This can happen due to shot noise, for example. Shot noise can push the recorded value above or below the actual signal level (that is why it is noise). We know the magnitude of that noise, so we can calculate the probability that any particular pixel ends up above the saturation point of the sensor.

    Let's take the value I often recommend - 80% - and see how likely it is that any one pixel ends up above the saturation point. One standard deviation is the square root of that number, or 8.95%. This means that ~68.3% of pixels will be within 80% +/- 8.95%, and ~95.5% of all pixel values will be within 80% +/- 17.9%. That is still below the 100% saturation point, since 80 + 17.9 = 97.9%. Calculated precisely, each pixel has about a 1.27% chance to saturate in a single sub. In a stack of 20-30 flat subs that produces negligible error. And this holds only if the flat is perfectly flat - which it never is: there is vignetting, so only the central part of the flat needs to be considered, as the rest usually has lower values and is even less likely to saturate (disclaimer: I used the Gaussian approximation to the Poisson distribution for large numbers).

    Edit: I've made a mistake using percentages where a quadratic relationship is involved (things are not linear) - in fact this percentage is even lower - see my following post for details

    So this statement is certainly true - going too high will saturate. But as we see, even going as high as 80% will saturate very few pixels (in reality less than 1%) in a single sub, and the resulting error after stacking 20-30 subs will be minimal.
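
    Done in electrons rather than percentages, the number confirms the edit above - the saturation probability at an 80% peak is effectively zero. A minimal Python sketch, again assuming a 15k e- full well:

    import math
    from scipy.stats import norm

    full_well = 15_000                   # e-, assumed full well capacity
    mean = 0.80 * full_well              # flat peak at 80%
    sigma = math.sqrt(mean)              # shot noise in electrons
    z = (full_well - mean) / sigma       # distance to saturation in sigmas
    print(f"saturation is {z:.1f} sigma away, "
          f"P(pixel saturates) = {norm.sf(z):.2g}")   # effectively zero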

  2. 1 hour ago, AndyThilo said:

    That's the thing my darks do remove amp glow. I showed that in the image on my first post. Adding in the flats caused loads of issues. Maybe I'm not understanding it. The best processing method for me to get the cleanest images is manually in PI with no flats as below

    Darks polluted with a light leak will still remove amp glow - that is to be expected. In most cases the light leak acts like light pollution: the same way your light records the target with light pollution added, the dark records the amp glow with the leak added. The light records the amp glow too, so once you subtract the two, the amp glow cancels out.

    The light leak itself will not cancel out. That is the issue.

    I don't know what your master dark looks like, but the single dark that you posted has a gradient. If I simply subtract that dark from the light without doing anything else (no fiddling with background subtraction, no flat calibration - nothing, just dark subtraction), I get this:

    [light minus dark, showing the reversed gradient]

    Now you have a gradient on the opposite side from the one in the dark. It is clear that dark subtraction transferred the gradient from the dark into the light. The amp glow is gone, and that is fine - both frames have the same amp glow - but the dark has this gradient that is not present in the light.

    A few things could be happening here that make a difference between my example and the one you got from PI:

    a) I'm calibrating with a single dark. Maybe this particular dark differs from the others for some reason, or maybe each dark is different because of a varying amount of light leak. You used an average, so differences averaged out, while I used the one dark that has a distinct gradient.

    b) You used background wipe in PI, and PI removed this gradient the same way it would handle an LP gradient.

    In any case, the dark is what is causing your flat calibration to fail. I can show you with a bit of math what is going on.

    Imagine you have only two pixels rather than a whole image (or the left and right sides of an image, whatever is easier to picture). One received 70% of the light due to vignetting / dust shadow, while the other received 90%, again due to vignetting/shadow/whatever.

    Now imagine that both of these pixels are background pixels that recorded just sky background. I say this because it means they ought to end up with equal intensity - there should be no variation (let's leave LP gradients aside for the moment - this is just to understand how dark calibration impacts flat calibration).

    Let's say that the sky background corresponds to 100e. This means the first pixel would record 70e and the second 90e. We need them to have the same value in the end if our calibration is good (because the sky background is even).

    These values are just the light signal, but a light frame also contains bias and dark signal (dark subs contain both as well). Let's say the dark signal is 20e.

    So what we recorded in our light frame would be 90e and 110e (70e+20e, 90e+20e). Now let's calibrate to get an even background. Our flat will be (0.7, 0.9) (because 70% and 90% of the light reach the sensor).

    Perfect case:

    ((90e, 110e) - (20e, 20e)) / (0.7, 0.9) = (70e, 90e) / (0.7, 0.9) = (70e/0.7, 90e/0.9) = (100, 100) - we have equal pixels, i.e. uniform sky brightness. Calibration is good because we used the proper dark value.

    Under correction case - the dark is larger than it should be (because of a light leak, some additional electrons were accumulated):

    ((90e, 110e) - (30e, 30e)) / (0.7, 0.9) = (60e, 80e) / (0.7, 0.9) = (60e/0.7, 80e/0.9) = (85.7143, 88.8889)

    We no longer have a uniform sky background - calibration failed, and the first pixel still has a lower value than the second although we used the proper flat (0.7, 0.9). Because the vignetting / dust shadow is still present, we call that under correction - the flat did not manage to fully correct the image. Not because the flat was wrong, but because the dark was wrong - larger than it should be.

    Over correction case - the dark is smaller than it should be (in reality it rarely happens quite like this - it happens when the lights have a light leak, so the darks are weaker in comparison to the lights - but let's do the math anyway to show over correction happening):

    ((90e, 110e) - (10e, 10e)) / (0.7, 0.9) = (80e, 100e) / (0.7, 0.9) = (80e/0.7, 100e/0.9) = (114.2857, 111.1111)

    Again no uniform sky background, but this time the first pixel is brighter than the second - an "inversion" happened, and what was darker in the uncalibrated image is now brighter, as if the flats corrected too much. We call that over correction.
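
    The whole two-pixel example can be condensed into a few lines of Python (same numbers as above):

    import numpy as np

    light = np.array([90.0, 110.0])   # recorded values: 70e+20e, 90e+20e
    flat = np.array([0.7, 0.9])       # fraction of light reaching each pixel

    for dark in (20.0, 30.0, 10.0):   # correct, too strong, too weak
        print(f"dark = {dark:.0f}e ->", (light - dark) / flat)

    # dark = 20e -> [100. 100.]                proper calibration
    # dark = 30e -> [85.7143 88.8889]          under correction
    # dark = 10e -> [114.2857 111.1111]        over correction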

    -----------------------------

    The above shows that perfect flats can still fail to flat-field correctly if there are issues with either the lights or the darks.

    I believe that both your light and dark subs are polluted with light leak because of:

    1) The dark has a gradient. It is very unlikely for a dark sub to have such a gradient, and the gradient is also missing from the light sub - yet everything that is truly part of the dark signal should also exist in the light (amp glow, for example: if it's in the dark sub it will certainly be in the light sub, as it is a feature of the dark signal). This shows that the gradient is not a feature of the dark signal and is in fact "external" (and an external signal can really only be some sort of light / radiation)

    2) Your subs, when calibrated, show over correction - which happens when the darks are "weaker" than they should be (see above). Since it is highly unlikely that the dark current in the darks is weaker than in the lights (that can happen if the cooling was not set to the same temperature, but from what I can tell that is not the case here), it must be that the lights are somehow stronger than they should be. This points to a light leak again: the lights contain some external signal that did not come through the telescope objective. Otherwise it would be corrected by the flats, because the flats describe how such signal behaves (how much it is attenuated).

    Hope this all makes sense.

    1 hour ago, AndyThilo said:

    One other thing i don't understand. Left is PI manual processing without flats. Right is exactly the same but with flats. The right is also the same as I get using BPP. Both are stretched using STF Autostretch. Nothing else.

    You get a very red result in your right image because your flat panel gives off very blue light. It has already been mentioned that the red component of the flat is very low in value. This can happen for two reasons: either the camera has very low QE in the red part of the spectrum (not the case here), or the flat source produces light with much less red than the other two components (green and blue). Cool white light has this "feature" (with warm light it is the opposite - less blue, more red and green).

    Since you have a low red signal compared to the other two, flat fielding with such a flat will produce a strong color imbalance. Nothing that white balancing can't fix - or flat normalization (a process where each color peak in your flat is normalized to 1, which removes any color imbalance the flat panel produces).
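
    For reference, flat normalization is only a few lines - a minimal numpy sketch, assuming an RGGB Bayer master flat (dividing each CFA channel by its own mean, which for a flat is essentially its histogram peak):

    import numpy as np

    def normalize_flat(master_flat: np.ndarray) -> np.ndarray:
        """Divide each CFA channel of an RGGB mosaic by its own mean."""
        flat = master_flat.astype(np.float64).copy()
        for dy in (0, 1):
            for dx in (0, 1):
                channel = flat[dy::2, dx::2]   # one Bayer channel (a view)
                channel /= channel.mean()      # its average is now 1.0
        return flat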

  3. 14 minutes ago, AndyThilo said:

    I don’t know how you’re doing that, what software? 

    I'm using ImageJ - free software for scientific image analysis / manipulation. But that is not really important here - it just helps me calculate some things about light levels. You can't really use the method I've used to get a proper image - it is just an estimate, and everything that darks would otherwise remove won't be removed this way (amp glow, for example - a proper dark removes it).

  4. 3 minutes ago, carastro said:

    Sorry Vlaiv, I am not a technical person like you, I just do what works, and definitely I find if the flat is too bright it doesn't do it's job.   I was told 1/3 full well when I started imaging and that definitely works for me.  Also I have heard that's what most other people use.  I have never heard of any-one doing 80%.  

    Carole 

    Fair enough, let's not get bogged down in technical discussion - if 1/3 works for you (and it certainly should), there's no need to change anything.

    24 minutes ago, AndyThilo said:

    I’ll have to do them outside to get my -20, but I’ll get it running tonight and stick a bin bag over it as well for good measure. 

    It looks like you have a light leak of some sort, and it is not only present in the darks - it is there in the lights as well. Until you address that, you won't be able to properly calibrate your images. Here is what I've found:

    Here is light / flat:

    [light / flat]

    It shows "inverse" vignetting - and that is fine; it is what you would expect from flat fielding when the dark has not been removed. Since the dark has a gradient on it, I will not use the actual dark frame; instead I'll try to guess the average value of the dark frame and subtract that.

    Here is (light - 900ADU)/flat - my first guess is that the dark needs to be 900ADU:

    [(light - 900ADU) / flat]

    Maybe a bit better, but still over correction - we need to subtract more. How about another 900ADU, 1800ADU in total:

    [(light - 1800ADU) / flat]

    That is better and, as I said above, about 1800ADU is some signal not related to light coming through the aperture. If we continue, we start going into the under correction regime instead of over correction.

    This is about 2000ADU removed:

    [(light - 2000ADU) / flat]

    And this is 2100 ADU removed:

    [(light - 2100ADU) / flat]

    And 2110ADU:

    [(light - 2110ADU) / flat]

    The last three start to show under correction (edges and corners are darker than the rest of the image).

    But here is the important bit - the ADU value that needs to be subtracted is around 1800ADU.

    The average value of the dark that has the light leak is ~1174ADU. This means:

    a) the real dark mean value is lower than that, since we know there was a light leak, and a light leak raises the mean value.

    b) there is a light leak in the lights as well, because we need to subtract far more than 1174ADU to get flat fielding to work properly.

    How do you connect your camera to your scope, and is there any light source nearby when you record your images?

  5. 11 minutes ago, AndyThilo said:

    I’ll have to do them outside to get my -20, but I’ll get it running tonight and stick a bin bag over it as well for good measure. 

    It is definitely the darks. Here is (light - some_value)/flat:

    [(light - ~1800ADU) / flat]

    It shows almost perfect flat calibration. I started at 1100ADU and kept increasing it until I got good calibration. I needed to remove dark current and bias for flat fielding to work properly, so I had to guess the right amount to subtract from the lights. In the end I subtracted around 1800ADU.

    You can still see amp glow, as I did not use a dark sub at all.

    13 minutes ago, carastro said:

    I don't agree with 80% Vlaiv, if the flat is too bright it doesn't work as I have found out to my cost in the early years.

    Can you give any explanation why it might not be working?

  6. 6 minutes ago, AndyThilo said:

    Yep there is definitely problem with flats. It was discussed on another forum...

    It is not necessarily a problem that the flat source is weak in one component - it just means the flat panel is not producing very white light; it has a color cast.

    If red is weak, it just means the flat panel gives very cool light (probably LEDs emulating a very high color temperature, >7000K or similar).

    In fact, your flats seem fine otherwise, and the only thing I see wrong with your data is the darks. It costs nothing to redo your darks - just be careful this time that there is no light leak. Maybe take the camera off the scope, cap it, wrap it in aluminum foil and place it face down on a desk while taking your darks.

  7. 10 minutes ago, AndyThilo said:

    All files are straight out of APT.

    I'm not familiar with APT, so I don't know whether subs are recorded as 32bit FITS for some reason (it just wastes space) or something has been done to them. I wonder why only the lights are recorded in this format?

    Ah, sorry, my bad - it looks like I converted the file to 32bit without realizing (I always do that with subs and must have done so out of habit when I opened it, then forgot about it).

    All is fine with light sub!

    12 minutes ago, AndyThilo said:

    Darks do have a slight issue with a mark on them. Not sure about light leaks, I did them outside with a black bin bag over everything, and lens cap of course. 

    You should first redo your darks - I suspect they are causing the issues rather than your flat panel. It will cost you nothing (maybe a bit of time, but it can be done indoors on a cloudy night, so not much is wasted).

    However, you should perhaps also change your flat panel - it has a very odd distribution of light: one component is 1/10 of the value of the highest component - the third histogram peak is very low, at about 4000ADU.

    Does your flat panel have a distinct color cast (maybe bluish light, or very warm)?

    5 minutes ago, Physopto said:

    I don't know if it is somehow my down load but the Flat in Maxim under stretch shows 3 distinct peaks very strange!

    Derek

    That is quite normal for an OSC sensor - the R, G and B components of the sensor have different sensitivities, and one always ends up with 3 distinct peaks when using a color camera.

    5 minutes ago, carastro said:

    I am not familiar with either COMS Cameras, or aurora panels.  But I notice your ADU is 30,000.  I know some people do use that, but my understanding is it has to be 1/3 full well depth, and my Atik camera that is 65,000.  1/3 65,000 = 21666.  I always try to keep my flats around 22,000 - 24,000.

    Have you used this panel successfully previously with this ADU?

    Carole 

    Your flats are quite OK as far as saturation is concerned. I'm not sure where the 1/3 rule came from - I've heard it before, but I don't think it is a good rule (unless someone can explain exactly why it is used). Aiming at 80% or so on the histogram is a better option. In fact, any histogram value is good as long as you don't have low or high clipping. A higher histogram peak only ensures that your flats have good SNR and that you won't be polluting your lights with much noise.

  8. There is also an issue with your darks, or at least it seems so:

    [left: stretched dark, right: stretched light]

    Left is a heavily stretched dark, while right is the light sub you uploaded, stretched without alterations.

    The dark has a gradient over it, while the light does not - although the light is probably stretched more, since it shows the amp glow more strongly than the dark does.

    It could be that there was some sort of light leak when you took your darks. Could this be possible?

  9. 1 hour ago, AndyThilo said:

    I posted it in my original post but DSS is just basic load them in and let it go. PI BPP I used Linear Fit for all, again let it run. For manual, I created Master dark/Bias. Calibrated flats with them then created master flat. Then calibrated lights with all the masters, followed by Cosmetic correction, debayering, star aligning and finally integrated the lights to their final master.

    For tests without flats, I just left them out...

    Here's a single light - https://1drv.ms/u/s!Ari3AWpbmLZ0gvwzUVAUpJrBlcKLww?e=EF9ptN

    Flat - https://1drv.ms/u/s!Ari3AWpbmLZ0gvwyqruJs6BIvrZD6Q?e=2BETne

    Dark - https://1drv.ms/u/s!Ari3AWpbmLZ0gvw0Tel6GxLMhRq8sw?e=b7Ct6L

    Flat Dark - https://1drv.ms/u/s!Ari3AWpbmLZ0gvw18bGKf7pF8y7PBQ?e=zmnkQ1

    The light seems to be already calibrated? It's in 32bit format but still has 14bit values for some reason.

    Could you provide one light sub straight out of the camera? Or, in case you are 100% sure this is it, what capture software did you use?

  10. 4 hours ago, dannybgoode said:

    Actually cropping and enlarging is probably as good a route as any. It’s hard to stop thinking in the same manner as normal photography but if your resolution is correct for the camera / scope combo then that is what matters. 
     

    I think I’m right in saying (and I’ll happily be corrected on this) but if you have two camera / scope combos, both with a resolution of 1” per pixel then the target will be the same size for both. 
     

    If one of your scopes is of a longer focal length and it reduces the field of view accordingly then all that is doing is ‘cropping’ the image with the scope rather than in software. 

    You are certainly correct, but the issue is that some people don't quite understand what is going on because of the way images are displayed by devices.

    If you view your image at 1:1 (one screen pixel per image pixel, or 100% zoom), then you are absolutely right - the object's size in pixels equals its angular size in arcseconds divided by the sampling rate in "/px.

    The problem comes when images are displayed the way they usually are - scaled to fit the screen of the display device (computer screen, smartphone, tablet, whatever). Then the apparent size of the object is determined by the FOV. This is where cropping changes things: it does not alter the resolution or the object's original size in pixels - it shrinks the FOV, and that in turn makes the relative size of the object larger when viewed at a "fit to screen" zoom setting.
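
    A small Python sketch with made-up numbers to illustrate the difference:

    object_size_arcsec = 600.0   # hypothetical ~10' galaxy
    sampling = 1.0               # "/px, same for both setups

    size_px = object_size_arcsec / sampling
    print(f"at 1:1 zoom the object spans {size_px:.0f} px on either setup")

    # "Fit to screen" scales by FOV instead: halving the FOV (by cropping
    # or a longer focal length) doubles the apparent on-screen size.
    screen_px = 1920
    for fov_px in (4000, 2000):
        print(f"FOV {fov_px} px -> {size_px * screen_px / fov_px:.0f} px on screen")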

  11. 9 hours ago, Pompey Monkey said:

    Wow! that's a lot of explanation. Thanks.

    The way my, rather limited, interpretation is if you subtract the bias (and dark if necessary on long exposures), then:

    • Flats (Vignetting/dust bunnies) are a multiplicative correction factor,
    • Gradients are subtractive.

    Yes/no?

    Ok, now I'm confused :D

    We calibrate our subs so that all that remains is the light signal. In this process flats are always a multiplicative factor and should be applied only to the light signal, as light is the only thing affected by blockages and shadowing.

    In a simplified model, a raw sub out of the camera contains bias signal, dark signal and light signal (here we don't discriminate between target and sky, and we have no idea yet about vignetting / dust).

    We need to remove bias and dark so the only thing left is the light signal. We do this by taking darks. A raw dark contains everything but light, since the scope aperture is covered while taking these subs - all light is blocked. This means raw dark subs contain both the bias signal and the dark current signal. If we subtract this from our lights, the job is done (this is my explanation above; no need to fiddle with bias for simple calibration).

    Once only the light signal is left, you can correct the multiplicative factor of blockage / shadows by dividing by the master flat (which also needs to be made of "pure light", with all other signals removed).

    Gradients are quite special - removing them is guesswork rather than calibration. In principle you can't tell whether LP gradient light is coming from the target or not. Both are light, and you can't distinguish how many photons belong to the sky and how many to the target. You can only guess, using some sort of approximation - e.g. the sky is constant, or a linear gradient (or maybe a simple polynomial of low degree), values can't be negative, and certain parts of the image are known to contain no target.

    So the answer to the question would be: vignetting / dust is always multiplicative, but for flat division to work properly you need proper removal of every signal except the light signal. Gradients are additive/subtractive and in general don't depend on proper calibration - proper calibration neither helps with their removal, nor is it required to attempt removing them.
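
    To make the distinction concrete, here is a minimal numpy sketch of a subtractive "wipe" that fits and removes a linear gradient (in practice you would mask out the target and fit only background pixels):

    import numpy as np

    def wipe_linear_gradient(img: np.ndarray) -> np.ndarray:
        """Fit a plane a*x + b*y + c to the image and subtract it."""
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
        coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
        return img - (A @ coeffs).reshape(h, w)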

    Did I get your question right?

  12. 4 hours ago, Pompey Monkey said:

    Bias should be subtracted from every image read from the camera i.e. lights, flats and darks. Except bias frames, of course

    That really depends. In most cases you don't actually need bias to be subtracted, and if you follow the "standard" workflow you can use any file as bias - even a Picasso painting digitized to the exact size of your subs :D

    Let me explain and show why that is.

    Observe the "regular" calibration procedure (the same thing happens to flats, so we will skip flat calibration for now and just mention it at the end):

    - Master bias is stack of bias subs (for now, we will later substitute in Picasso painting instead)

    - Master dark is made by stacking "calibrated" dark subs.

    - Calibrated dark sub is dark sub minus master bias

    - Calibrated light =  (light - master bias - master dark) / master flat

    Let's do a bit of substitution

    Calibrated light = (light - master bias - average(dark - master bias) ) / master flat

    Now, average is just a regular sum and division, and a "constant" term can be pulled out in front of the brackets, so let's do that:

    Calibrated light = (light - master bias - (average(dark) - master bias) ) / master flat.

    Let's rearrange that a bit:

    Calibrated light = (light - master bias + master bias - average(dark) ) / master flat

    Now you will notice that we have -master bias and +master bias, and two terms with the same absolute value - one negative, one positive - add to 0. Any old "image" there won't change a thing, so it is safe to also write:

    Calibrated light = (light - Picasso image + Picasso image - average(dark)) / master flat

    and that is equal to

    Calibrated light = (light - average(dark)) / master flat

    You don't need bias to do proper calibration, and in fact, if you use the above "standard" calibration flow, you can use any - and I mean literally any - sub as master bias; it will make no difference at all.

    You only need bias in special cases - like the ones mentioned above by @Merlin66: scaling darks, either for a different exposure length or to optimize dark calibration (darks at a different temperature).
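
    The same cancellation can be demonstrated numerically - a minimal numpy sketch where a random image stands in for the master bias:

    import numpy as np

    rng = np.random.default_rng(0)
    light = rng.normal(1000.0, 10.0, (4, 4))
    darks = rng.normal(200.0, 5.0, (16, 4, 4))
    picasso = rng.uniform(0, 65535, (4, 4))   # any image at all as "master bias"

    # "standard" flow: bias-subtract the darks, then the light
    master_dark = np.mean(darks - picasso, axis=0)
    a = light - picasso - master_dark

    # no bias at all
    b = light - np.mean(darks, axis=0)

    print(np.allclose(a, b))   # True - the "bias" cancels exactly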

  13. Just now, MartinFransson said:

    OK, that sounds like I might be able to try :) Just one thing, how do I colour  balance a single (mono) channel? In my world only an RGB image is possible to colour balance...

    Thank you for taking your time to explain!

    You color balance the 3 mono frames :D - it's like color balancing a regular color image, but you do it on the individual channels.

    Simplest way to explain what is going on and explain how to do it would be:

    take R_raw, G_raw, B_raw and do RGB compose to get Color_raw then color balance that into Color_balanced and then do RGB split to get R_balanced, G_balanced, B_balanced.

    How it actually works is - with pixel math:

    R_balanced = c1 * R_raw + c2 * G_raw + c3 * B_raw

    G_balanced = c4 * R_raw + c5 * G_raw + c6 * B_raw

    B_balanced = c7 * R_raw + c8 * G_raw + c9 * B_raw

    where

    c1, c2, c3
    c4, c5, c6
    c7, c8, c9

    is the color space transform matrix; it depends on the camera / filters and the color space you are transforming to. In general it is not easy to find this matrix, but you can do single star calibration, or solve for a number of stars, or inspect QE curves and derive the transform, or let a tool do it for you - PI has a star color calibration tool: compose an RGB image out of the mono images, do color calibration, then split the result back into mono images.
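
    In numpy, the pixel math above is one matrix multiplication per pixel - a minimal sketch, with a made-up placeholder matrix (the real c1..c9 depend on your camera and target color space):

    import numpy as np

    # placeholder values - not a real camera's matrix
    M = np.array([[ 1.6, -0.4, -0.2],
                  [-0.3,  1.5, -0.2],
                  [-0.1, -0.5,  1.6]])

    def color_balance(r_raw, g_raw, b_raw, matrix=M):
        raw = np.stack([r_raw, g_raw, b_raw])       # shape (3, H, W)
        out = np.tensordot(matrix, raw, axes=1)     # 3x3 transform per pixel
        return out[0], out[1], out[2]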

     

  14. 1 minute ago, MartinFransson said:

    Do you mean that each individual sub in R, G and B has to be processed (wiping, offsetting etc) and then stacking them? Or is that a process for the stacked R, G and B images? Seems like A LOT of work to process each individual sub! 😮

    Honestly, I think this process is above my skill level at the moment :) 

    No, no, it is fairly simple. You start with the stacked linear raw color images - let's call them R_raw, G_raw, B_raw. These are the images you would normally RGB combine and stretch to create the usual image.

    You wipe them to remove any gradient and any color cast due to sky flux. Then you color balance them (you don't have to, but if you do, this is the stage where it should be done if you want to do it properly).

    This creates R_regular, G_regular, B_regular. Next you create ratio images as:

    R_ratio = R_regular / max(R_regular, G_regular, B_regular)

    G_ratio = G_regular / max(R_regular, G_regular, B_regular)

    B_ratio = B_regular / max(R_regular, G_regular, B_regular)

    The simplest way to get the above is to create a mini stack of just these 3 channel images and stack them with the max function. This is the only stacking involved, and it is not regular stacking - just a trick to get the per-pixel max of the three images.

    In the end ratio combination goes like this:

    R_stretched = L_stretched * R_ratio

    G_stretched = L_stretched * G_ratio

    B_stretched = L_stretched * B_ratio

    and that is it.

    If you want to be correct in your color handling, then you need to modify the above with two changes:

    1) When color balancing, you need to make sure you are balancing in linear sRGB color space

    2) The final combination should account for sRGB gamma as follows: R_stretched = gamma(inverse_gamma(L_stretched) * R_ratio), instead of R_stretched = L_stretched * R_ratio (and likewise for G and B)

    That is it.
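
    Putting points 1) and 2) together, a minimal Python sketch of the ratio combine with sRGB gamma handled properly (inputs assumed wiped, balanced, linear and in the 0-1 range; channels assumed strictly positive - add a tiny offset first if the wipe left zeros):

    import numpy as np

    def gamma(x):            # linear -> sRGB
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

    def inverse_gamma(x):    # sRGB -> linear
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    def ratio_combine(L_stretched, R, G, B):
        peak = np.maximum.reduce([R, G, B])   # per-pixel max of channels
        lin = inverse_gamma(L_stretched)      # multiply on linear data
        return tuple(gamma(lin * c / peak) for c in (R, G, B))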

  15. 3 minutes ago, MartinFransson said:

    Thank you! I´m not sure I follow what you mean by "do RGB ratio to transfer color to those stars" but I´ll give it a try :) I use Pixinsight and Photoshop.

    It's a fairly simple technique. I'll describe it in "pixel math" terms, but you can accomplish it in Photoshop with layers as well.

    You take your R, G and B subs, wipe them (not needed if you aim only for star color, but if you do regular LRGB composing it should be done) - meaning remove the background and gradients from LP - and do a color balance (however you do it: single star / multiple star color calibration, color ratios, whatever).

    Next you add a small offset to each channel. It needs to be the same for all three, and just enough to move all pixel values positive (in case your wipe puts the background at 0 and noise leaves some negative values - you don't want any negative values in any of your subs). Adding a small offset also reduces the color saturation of the background, which is a good thing, as it reduces color noise in the background sky if you do a full LRGB image (again, less important for stars only).

    Then you stack those three channels with the max function (not average - max). Next you divide each color by this max stack. This prepares your color data for the "transfer".

    Next you take your stretched luminance layer and simply multiply it by each of the color ratio channels to get the final R, G and B channels. Btw, the stretched luminance needs to be in the 0-1 range, and the above process of creating ratios guarantees each ratio sub is in the 0-1 range.

    Note that the above is not strictly correct in terms of color accuracy. The proper way is to first apply inverse gamma to the luminance, then multiply, then apply proper gamma to the result (the multiplication needs to be done on linear data). But that will create proper colors, and most people will think something is wrong with your image, as they are used to much more colorful and saturated renditions :D. Most images in proper color look rather dull compared to what we usually see and are used to.

  16. 25 minutes ago, MartinFransson said:

    Thinking about collecting some LRGB but I find it hard to blend that in without the stars overpowering the image. Thoughts?

    I think that following approach is worth a try (you can even try it out with this data set to see how you like it):

    Make starless versions of both OIII and Ha with starnet++ and combine them to produce a color version of the nebulosity. From the stretched Ha and its starless counterpart, extract the stars (subtract the layers).

    In this version just layer stars on top of combined nebulosity with lighten blend mode (this will give you pure white stars).

    For an RGB version, collect only RGB data, as you won't need luminance. Use the stars-only image as luminance and do the RGB ratio to transfer color to those stars. Again, layer the colorful RGB stars on top of the nebulosity.

    In both cases (with RGB and without) your stars will be as tight as in the Ha image, and in most cases those stars tend to be fairly tight.

  17. Could be.

    I can't really discern which aberrations are mixed in there, but one thing is sure - it is not a single aberration causing such star shapes. It might be coma combined with tilt.

    I think the first step should be a visual inspection of the star pattern. Take a high power eyepiece, point the scope at a fairly bright star, and inspect the in-focus, out-of-focus and perfectly focused star images by eye to see if the scope is properly collimated.

    If the star shapes look good visually, then I would say the focuser is the main suspect.

    I second the above recommendation, but I do have a question for you:

    When you say that your scope is limited for imaging smaller galaxies - what exactly do you mean?

    If you are referring to resolution: the ASI1600 (from your sig) coupled with an ED80 at native focal length gives you 1.3"/px. With an HEQ5 it is very unlikely you will want to go any lower than that, regardless of the scope that sits on it.

    For example, if you go with the scope above (and I think you should), you will be working at a native 0.57"/px with the ASI1600 - that is too much, and you will want to bin in software by a factor of at least x2.

    That gives a resolution of 1.14"/px - which I think is still too much.

    Now, if you don't have enough light grasp to go for fainter things, then yes - a 6" scope will give you more light gathering capability. Just don't forget to adjust your sampling rate to something reasonable (1.2"-1.5" for high power work with a 6" scope and HEQ5).
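
    For reference, the sampling rate math in Python (the ~1370mm focal length here is just inferred from the quoted 0.57"/px with the ASI1600's 3.8um pixels):

    def arcsec_per_px(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
        return 206.265 * pixel_um * binning / focal_mm

    print(arcsec_per_px(3.8, 600))       # ASI1600 on ED80 @ 600mm: ~1.31 "/px
    print(arcsec_per_px(3.8, 1370))      # ~1370mm gives the quoted ~0.57 "/px
    print(arcsec_per_px(3.8, 1370, 2))   # software bin x2: ~1.14 "/px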
